This reverts commit 39bb800f16.
This change disabled a number of features, so it isn't suitable for the
generic VMware templates. We need to retrace our steps and list exactly
which tools/systems weren't accepting the linux26 type.
The new python script check_root uses data that portage already
maintains on which shared libraries packages need or provide, instead of
re-scanning whatever ELF files can be found. This is much more
comprehensive, but there is a bit of a transition issue for folks with
long-lived SDKs: packages built with portage older than 2.2.18 do not
include this data. As such, the check is non-fatal for now and provides
a command you can use to refresh locally installed packages.
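As a rough illustration of the approach (not the actual check_root
code), the soname data portage records for each installed package can be
cross-checked straight from /var/db/pkg; the REQUIRES/PROVIDES file
names are standard portage metadata, the rest of this sketch is assumed:

    # Sketch only: compare the sonames installed packages require (REQUIRES)
    # against the sonames installed packages provide (PROVIDES). Packages
    # built before portage 2.2.18 lack these files and are silently skipped.
    vdb="/var/db/pkg"

    # Lines look roughly like "x86_64: libfoo.so.1 libbar.so.2"; strip the
    # arch prefix and flatten to one soname per line. (A real check would
    # keep the arch so multilib systems are handled correctly.)
    sonames() {
        cat "${vdb}"/*/*/"$1" 2>/dev/null | cut -d: -f2- |
            tr ' ' '\n' | sed '/^$/d' | sort -u
    }

    # Anything required but never provided is a broken library dependency.
    comm -23 <(sonames REQUIRES) <(sonames PROVIDES)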
The code checking for conflicts between top-level directories and /usr
has also been rewritten. Both tests are now considerably faster.
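The /usr conflict side can be sketched the same way (an assumed
approach, not the rewritten code itself): walk the paths recorded in
each package's CONTENTS file and flag top-level entries that also exist
under /usr.

    # List every file or symlink installed into a top-level directory and
    # warn when the same name also exists under /usr.
    vdb="/var/db/pkg"
    awk '$1 == "obj" || $1 == "sym" { print $2 }' "${vdb}"/*/*/CONTENTS |
        grep -E '^/(bin|sbin|lib|lib64)/' | sort -u |
        while read -r path; do
            [[ -e "/usr${path}" ]] && echo "conflict: ${path} vs /usr${path}"
        done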
Rebuilds packages that are linked against old libraries that have been
upgraded or removed from the system. Skipping this can lead to shared
library checks looking OK in the build root while the built images end
up with broken library dependencies.
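Stock portage exposes this as the @preserved-rebuild package set;
whether the build scripts use exactly this invocation is an assumption,
but it shows the step:

    # Rebuild whatever is still linked against preserved copies of libraries
    # that have since been upgraded or removed.
    emerge --verbose @preserved-rebuild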
The --rebuild-if-unbuilt emerge option will recompile packages if any
build-time dependencies have changed. This can be a useful safety net to
make sure applications get re-linked and so forth. However, recent
versions of portage have grown better defaults for choosing when to
rebuild, so let's turn the build_packages flag around, leave emerge to
its default behavior, and see if that is sufficiently safe.
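For illustration, the change amounts to roughly the following; the real
invocation lives in build_packages and the surrounding flags here are
assumptions:

    # Old behavior: force a rebuild whenever a build-time dependency is
    # itself being (re)built.
    emerge --update --deep --newuse --rebuild-if-unbuilt=y @world

    # New behavior: drop the flag and rely on portage's default rebuild logic.
    emerge --update --deep --newuse @world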
The distfiles cache is always under .cache in the repo tree, but there
is a lot of extra logic to make that configurable, along with
compatibility symlinks for previous locations. Just yank it all out.
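The fixed location boils down to something like this; DISTDIR is real
portage configuration, but REPO_ROOT and the exact sub-path shown are
only stand-ins for the repo tree layout:

    # Distfiles always live under .cache in the repo tree; no overrides,
    # no compatibility symlinks.
    DISTDIR="${REPO_ROOT}/.cache/distfiles"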
The version of repos.conf/coreos.conf that catalyst needs isn't valid
for normal SDK chroots and causes env-update to spew errors when it is
run prior to update_chroot, which configures portage properly.
A step in reducing the amount of initialization code required: drop the
needless symlinks under /usr/local/portage that point to the portage
trees and just configure portage to point directly at the sources
instead. Only crossdev remains in that location because it is a locally
managed overlay.
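In make.conf terms the end state looks roughly like this; the overlay
paths are examples of the idea rather than the exact values used:

    # Point portage straight at the checked-out overlays instead of at
    # symlinks under /usr/local/portage. Only the locally managed crossdev
    # overlay stays there.
    PORTDIR_OVERLAY="
        /mnt/host/source/src/third_party/coreos-overlay
        /mnt/host/source/src/third_party/portage-stable
        /usr/local/portage/crossdev
    "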
SDK tarballs have a .DIGESTS file, but it is created by catalyst instead
of the upload_image function. In order to support plain GPG signing
without re-generating .DIGESTS, we need to move that code out of
upload_image into a new function. upload_files shouldn't do it itself
because it is also used for portage binary packages, which shouldn't be
signed (there is no point; nothing would verify the signatures).
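A minimal sketch of the split, using a hypothetical helper name; the
digest and gpg details are assumptions, only upload_image/upload_files
come from the scripts:

    # Generate a .DIGESTS file only if one doesn't already exist (catalyst
    # already makes one for SDK tarballs), sign it, then hand everything to
    # the plain uploader. upload_files itself never signs anything.
    sign_and_upload_files() {
        local file digests to_upload=()
        for file in "$@"; do
            digests="${file}.DIGESTS"
            if [[ ! -e "${digests}" ]]; then
                # md5 + sha512 lines, loosely mirroring the catalyst layout
                md5sum "${file}" > "${digests}"
                sha512sum "${file}" >> "${digests}"
            fi
            # plain detached GPG signature alongside the digests file
            gpg --batch --yes --armor --detach-sign \
                --output "${digests}.asc" "${digests}"
            to_upload+=( "${file}" "${digests}" "${digests}.asc" )
        done
        upload_files "${to_upload[@]}"
    }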
Previously we attempted to speed up the first build of a board by
pulling in binary packages regardless of their USE flags, assuming that
if the package was already built the dependency loop would not be an
issue. Usually this was true, but not always. Now things like util-linux
will always get compiled when setting up a board for the first time, but
at least the build will be less likely to fail.
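Roughly, this is the difference between the two emerge modes below;
BOARD_ROOT is a stand-in and the exact flags used by the board setup are
assumptions, but --binpkg-respect-use is the stock portage knob for it:

    # Old behavior: accept any existing binary package, even if its USE
    # flags don't match the board configuration.
    emerge --root="${BOARD_ROOT}" --usepkg --binpkg-respect-use=n @world

    # New behavior: only use a binary package when its USE flags match, so
    # packages like util-linux get built from source on first setup.
    emerge --root="${BOARD_ROOT}" --usepkg --binpkg-respect-use=y @world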