Instead of handling toolchain packages in make_chroot and telling
update_chroot to skip the toolchains, just depend on update_chroot to
do it properly. Reduces our code duplication by a tiny but worthwhile
bit.
Right now there is some funky logic to either use a previous build as a
seed or the current SDK tarball if it happens to have been downloaded.
This is a bit confusing and doesn't work reliably since it is reasonable
for there to be neither a previous build nor the current SDK available
if the SDK chroot was created some time ago. Fix this by using the new
SDK library to always use the latest SDK, downloading it if needed.
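As a minimal sketch of the new caller-side behavior (the helper names
here are hypothetical, not the library's actual functions):

    # always seed from the latest SDK instead of guessing between a
    # previous build and a maybe-downloaded tarball (names illustrative)
    version=$(sdk_get_latest_version)
    seed_tarball=$(sdk_download_tarball "${version}")   # no-op if cached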
The current logic for downloading SDK tarballs is in cros_sdk and
written in Python, which isn't super convenient to re-use in the rest
of our shell scripts. This is the start of rewriting that logic into a
re-usable shell library, but it does not yet replace the functionality
in cros_sdk.
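For illustration only, a library function along these lines could look
like the following sketch (the tarball naming, cache location, and
SDK_BASE_URL are assumptions, not the real implementation):

    # download an SDK tarball into a local cache if missing, print its path
    sdk_download_tarball() {
      local version="$1"
      local tarball="sdk-${version}.tar.bz2"              # hypothetical name
      local cache="${SDK_CACHE_DIR:-${HOME}/.cache/sdk}"  # hypothetical dir
      if [[ ! -f "${cache}/${tarball}" ]]; then
        mkdir -p "${cache}"
        curl -fL -o "${cache}/${tarball}" "${SDK_BASE_URL}/${tarball}"
      fi
      echo "${cache}/${tarball}"
    }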
A number of places refer to these paths and that number is going to
grow. Since the standard pattern is to use environment variables for
commonly used paths, it is time to add ones for these:
REPO_CACHE_DIR
REPO_MANIFESTS_DIR
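A sketch of how these might be defined next to the existing path
variables (the parent variable and the cache location are assumptions;
.repo/manifests is the standard layout used by the repo tool):

    # hypothetical sketch; GCLIENT_ROOT stands in for the top-level path
    export REPO_CACHE_DIR="${GCLIENT_ROOT}/.cache"
    export REPO_MANIFESTS_DIR="${GCLIENT_ROOT}/.repo/manifests"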
This scheme only works robustly with kexec. Until the happy day that
kexec is supported on Xen (or when Xen is dead, long live Xen!) we
shouldn't bother trying. This allows us to use kernel modules again.
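The message doesn't show the exact check, but one way to guard it, as a
sketch (/sys/hypervisor/type reports "xen" on Xen guests):

    # skip the kexec-based scheme when running under Xen
    if [[ "$(cat /sys/hypervisor/type 2>/dev/null)" == "xen" ]]; then
      echo "Xen detected, not attempting kexec" >&2
    fi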
Currently we don't have a good way to upload packages from different
jobs to the same location: the 'Packages' index file is only generated
locally, so the second upload would always win.
The stats upload has been removed, so there is no longer a need to
capture and parse the emerge output. Remove a bit of dead chromeos
logic too.
The new toolchain utils define chost, portage profiles, and portage arch
per board. Replace the trickier logic from the old platform/dev repo and
switch to setting the profile with the standard eselect tool.
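With a board profile in place, setting it via eselect looks like this
(the profile name below is illustrative, not a real board profile):

    # list available profiles, then pick the board's profile by name
    eselect profile list
    eselect profile set coreos:coreos/amd64/board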
A few cleanups here and there, replacing echo with info, plus some
renames.
Previously the code in base_image_util.sh properly handled the disk
layout command line flag, but the spaghetti code later on calls a
function from disk_layout_util.sh that only returns 'base', resulting
in a bit of a mess if something other than 'base' is used. Sync up the
two code paths to avoid that.
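A minimal sketch of the sync-up, assuming the shflags-style
FLAGS_disk_layout variable the scripts already use (the function name
is illustrative):

    # both code paths resolve the layout this way instead of
    # disk_layout_util.sh hardcoding 'base'
    get_disk_layout_type() {
      echo "${FLAGS_disk_layout:-base}"
    }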
Use 2*CPUs for the target load average but add load average throttling
to emerge in addition to make. Also work around how catalyst sets
FEATURES so we can disable extra locking for hopefully faster builds.
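The intent maps onto the standard knobs roughly like this sketch (which
FEATURES flag gets dropped is an assumption on my part):

    NUM_JOBS=$(getconf _NPROCESSORS_ONLN)
    LOAD_AVG=$(( NUM_JOBS * 2 ))
    # throttle both make and emerge on load average, not just job count
    export MAKEOPTS="-j${NUM_JOBS} -l${LOAD_AVG}"
    export EMERGE_DEFAULT_OPTS="--jobs=${NUM_JOBS} --load-average=${LOAD_AVG}"
    # assumed: drop the extra per-ebuild locking catalyst's FEATURES adds
    export FEATURES="${FEATURES} -ebuild-locks"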
We've had trouble with eclean and equery vanishing from our SDKs from
time to time. Although I don't know the root cause, it seemed to be
some confusion in the ebuild environment, perhaps a mismatch between
the eclasses, profiles, and ebuilds. Updating all of those seemed to
resolve the issue; to make sure other environments are OK, force a
re-install of portage and gentoolkit to clean things up.
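The re-install amounts to (eclean and equery ship in
app-portage/gentoolkit):

    # rebuild the packages that own the vanishing tools
    emerge --oneshot sys-apps/portage app-portage/gentoolkit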
This replaces the cross-toolchain compile step in bootstrap_sdk and adds the
ability to build native toolchains using the cross toolchain. This is just
the first step towards actually providing the native toolchain in a container.
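The mechanism isn't shown here, but for illustration, Gentoo's stock
tool for producing a cross toolchain is crossdev (the target tuple
below is illustrative):

    # build binutils/gcc/glibc/kernel-headers for the target tuple
    crossdev --stable --target x86_64-cros-linux-gnu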
We don't need to reserve space on disk just to reserve partition
numbers. And now that partitions are aligned these blank spots grew
from 512 bytes to 1MB, which is not much but still silly.
When using anything other than classic spinning disks with 512-byte
sectors it is generally best to maintain some alignment with the
underlying physical sector or erase block size. The default alignment
most partitioning tools use these days is 1MB (2048 sectors). Also,
qemu-img sometimes requires disk sizes to be aligned to 64KB.
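The rounding itself is simple; a sketch of a helper that rounds a
sector count up to the next 2048-sector (1MB) boundary:

    # round size up to the next multiple of align (same unit for both)
    align_up() {
      local size="$1" align="$2"
      echo $(( (size + align - 1) / align * align ))
    }
    align_up 12345 2048   # prints 14336, the next 1MB boundary in sectors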
The existing code arbitrarily multiplies START_SECTOR by 512, converting
from blocks/sectors to bytes, but blocks was the correct unit to begin
with. Also the secondary GPT area is not considered, but that was OK
because the bogus unit conversion oversized our disks by almost 16MB.
Instead of relying on bugs, properly reserve 34 sectors at each end of
the disk. (Well, we could get away with only 33 at the end since it
doesn't have an MBR, but meh.)
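For reference, the 34-sector figure falls straight out of the GPT
layout, as in this sizing sketch (payload size is just an example):

    # 34 = protective MBR (1) + GPT header (1) + partition entries (32):
    #      128 entries * 128 bytes = 16KiB = 32 sectors of 512 bytes
    GPT_RESERVE=34
    payload_sectors=409600   # example: 200MB of partition data
    disk_sectors=$(( GPT_RESERVE + payload_sectors + GPT_RESERVE ))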