These scripts have always set images as public, regardless of whether
the image was a working production image or not. This may lead users to
boot random development images if they happen to pop up at the top of
Amazon's terrible AMI search page.
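For reference, an AMI's visibility comes down to its launch permissions.
A hedged sketch with the aws CLI (the image ID is a placeholder and the
scripts may drive the EC2 API differently):

# Make an AMI public:
aws ec2 modify-image-attribute --image-id ami-00000000 \
    --launch-permission "Add=[{Group=all}]"
# And private again:
aws ec2 modify-image-attribute --image-id ami-00000000 \
    --launch-permission "Remove=[{Group=all}]"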
If util-linux has a binary package it will be used, but if that binary
package was built with +udev it will pull in systemd. systemd has a
dependency loop that needs to be broken too, so if the util-linux loop
breaker doesn't also handle the systemd one it all falls apart.
In short, the comment above the loop breaker code notes that we can try
this until it gets wonky. Well, it is wonky, and we need to re-do how
build_packages works as a result. This is just a temporary workaround
until we figure out a larger restructuring.
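The general shape of the loop breaking is the usual Gentoo trick of
building once with the offending USE flags off and then rebuilding;
this is only a sketch of the idea, not the exact code in build_packages:

# Build util-linux without udev so it doesn't drag in systemd, then
# build systemd, then rebuild util-linux with its normal flags.
USE="-udev" emerge --oneshot --nodeps sys-apps/util-linux
emerge --oneshot sys-apps/systemd
emerge --oneshot sys-apps/util-linux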
Fix 98684560, which in turn tried to fix 0d29e735. This time the option
to download binary packages was lost, so building from scratch worked
but the normal flow of using binary packages did not. *sigh*
Needed for portage 2.2. Sync URIs are included but not very useful yet
because portage can only do `git pull`, not `git clone`. It seems an
extra helper script will be required to do the initial clone.
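Roughly the kind of repo definition portage 2.2 reads; the repo name,
path, and URI below are placeholders, not necessarily what ships here:

sudo tee /etc/portage/repos.conf/coreos.conf <<'EOF'
[coreos-overlay]
location = /usr/local/portage/coreos-overlay
sync-type = git
sync-uri = https://github.com/coreos/coreos-overlay.git
EOF

Until the helper script exists, the initial checkout is a manual
`git clone` of sync-uri into location.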
Binary packages may be useful for re-installing a package with a
different INSTALL_MASK, for example to install debug symbols.
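For example, something along these lines re-installs from the binary
package with the mask cleared (assuming the usual mask filters out
paths like /usr/lib/debug; the package name is just an example):

INSTALL_MASK="" emerge --usepkgonly --oneshot sys-apps/systemd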
Instead of gluing in a special PROD_INSTALL_MASK for all images, use
profiles to configure the differences between the base build root,
production images, and developer images. This offers much more
flexibility and is needed for providing a full dev environment in
developer images.
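A rough sketch of how a production profile could layer on a shared base
via the standard parent/make.defaults mechanism; the paths and mask
values here are made up, not the actual profile contents:

# profiles/coreos/amd64/prod/parent:
../generic
# profiles/coreos/amd64/prod/make.defaults:
INSTALL_MASK="/usr/include /usr/lib/debug /usr/share/doc /usr/share/man"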
Using parallel_emerge has been disabled by default for all commands
except build_image for quite a while now; build_image kept it only
because it was still a bit faster than normal emerge. Keeping
parallel_emerge complicates future changes to build_image, so
build_image needs to drop it. Since that means nothing uses it by
default, we might as well rip out support for it entirely.
When I created the new AMI build host I just accepted the default
'wizard' security group, which seems to have placed the host in a VPC.
There doesn't seem to be a way to fix this, and as-is the build host
cannot access the private addresses on the test VMs it launches.
Switching to the public ones works fine though. I didn't notice this at
first because it is only a problem when etcd sends a redirect.
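Fetching the public address is easy enough; a hedged example with the
aws CLI (the instance ID is a placeholder and the scripts may query
this differently):

aws ec2 describe-instances --instance-ids i-00000000 \
    --query 'Reservations[0].Instances[0].PublicIpAddress' --output text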
Previously /etc/os-release was installed both by set_lsb_release and
the baselayout package. Now it is only installed by set_lsb_release,
but when baselayout is upgraded it removes /etc/os-release. So the
first update_chroot run works, but the second detects the chroot's
version incorrectly and tries to apply the one-time updates in this
directory. Both of them are very old, so we can just delete them. The
second run will now fix up /etc/os-release and we can all move on and
be happy.
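For context, the version check keys off /etc/os-release, so once
baselayout deletes it the detection comes up empty. Roughly (the field
name follows the os-release format; the real check in update_chroot may
differ):

# With /etc/os-release missing, VERSION_ID is empty and the chroot
# looks ancient, so old one-time updates get re-applied.
source /etc/os-release 2>/dev/null
echo "chroot version: ${VERSION_ID:-unknown}"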
VHD was just for testing; raw is more useful for published images.
coreos-install will now be able to install working xen instances:
coreos-install -d /dev/xvda -o xen -c cloud-config.yml
The host system's PATH may not match the one required by the SDK. When
going through the enter_chroot script it gets reset because bash is
invoked as a login shell, but this doesn't happen when using the plain
old chroot command.
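The difference boils down to whether /etc/profile gets sourced; a quick
illustration (the chroot path is a placeholder):

sudo chroot /path/to/chroot /bin/bash -l  # login shell, /etc/profile resets PATH
sudo chroot /path/to/chroot /bin/bash     # plain shell, host PATH leaks through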
Fixes https://github.com/coreos/scripts/pull/290