This script uses the EC2 volume import tools instead of attaching and
writing to an EBS volume. This mechanism will be useful for creating
AMIs in isolated EC2 regions and can be run from any host with API
access and the EC2 tools.
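For reference, the flow looks roughly like this with the EC2 API tools; the exact flags, availability zone, and bucket name below are from memory and only illustrative:

    # Upload the raw disk image via the import tools; the image is staged
    # through an S3 bucket rather than written to an attached EBS volume.
    ec2-import-volume coreos_production_ami_image.bin \
        --format RAW \
        --availability-zone us-east-1a \
        --bucket ami-import-scratch \
        -o "${AWS_ACCESS_KEY}" -w "${AWS_SECRET_KEY}"

    # Poll until the conversion task finishes, then snapshot the resulting
    # volume and register the AMI from that snapshot.
    ec2-describe-conversion-tasks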
TODO: Allow the region to be specified and automatically create region-local
S3 buckets as needed. This version hard-codes a bucket that is only usable
by our dev AWS account, not prod. Later on: move to a more compact disk
format like VMDK.
The v1 API has been removed, so use v2 instead. The 10-second sleep was
added because the fleet tests were failing without it. My guess is that
etcd needed some time to warm up before we flooded it with requests.
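The workaround itself is nothing more than a pause before the tests run; the test entry point name here is illustrative:

    # Give etcd a few seconds to warm up before the fleet tests start
    # flooding it with requests.
    sleep 10
    ./run_fleet_tests.sh   # hypothetical test entry point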
Previously check_etag.sh would not create a blank file if one did not
exist, so the first time check_etag.sh was run it always exited non-zero.
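The fix amounts to guaranteeing the cache file exists before the comparison; a minimal sketch, assuming the etag is cached in a local file whose name is illustrative:

    ETAG_FILE="etag.txt"   # hypothetical cache location

    # Create an empty cache file on the first run so the comparison has
    # something (the empty string) to compare against instead of failing.
    [[ -e "${ETAG_FILE}" ]] || touch "${ETAG_FILE}"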
If additional EBS volumes are mapped to a PV instance using an "sd*" name,
the hypervisor will always order them before "xvd*" devices, again
ignoring the root device definition. This applies to all PV instance
types, so we cannot get away with simply dismissing m1.small. We will need
to call attention to this since it requires users who set the volume size
via the APIs to use the name "/dev/sda" again.
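For illustration, sizing the root volume on a PV instance using the "sd" name; the block-device-mapping syntax and sizes here are from memory, not taken from the scripts:

    # On PV instances the hypervisor orders "sd*" devices ahead of "xvd*",
    # so the root volume size must be set against /dev/sda, not /dev/xvda.
    ec2-run-instances ami-12345678 \
        --instance-type m1.small \
        --block-device-mapping "/dev/sda=:16:true"   # 16 GiB root, delete on termination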
These scripts have always marked images as public, regardless of whether
the image was a working production image or not. That may lead users to
boot random development images if they happen to surface at the top of
Amazon's terrible AMI search page.
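A sketch of gating the public flag on the build type; using COREOS_OFFICIAL here is an assumption about how the scripts distinguish production from development builds:

    # Only production images should be launchable by everyone; leave
    # development AMIs private.
    if [[ "${COREOS_OFFICIAL:-0}" -eq 1 ]]; then
        ec2-modify-image-attribute "${AMI_ID}" --launch-permission --add all
    fi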
When I created the new AMI build host I just accepted the default
'wizard' security group, which seems to have placed the host in a VPC.
There doesn't seem to be a way to fix this, and as-is the build host
cannot access the private addresses of the test VMs it launches.
Switching to the public addresses works fine though. I didn't notice this
at first because it is only a problem when etcd sends a redirect.
I started to move board files under a boards/ directory similar to how
the SDK is under sdk/ but didn't do so everywhere. This should finish
the job so everything is consistent now.
Note: This prefix is only used in developer and buildbot uploads. When
final releases are copied to $channel.release.core-os.net the prefix is
not used since a) I already published URLs without the prefix and b)
no SDK files are ever posted to the public release locations.
- Automated builds drop SDK and binary packages into
gs://builds.developer.core-os.net/ and the new download URL is
http://builds.developer.core-os.net/ (COREOS_DEV_BUILDS)
- Change default upload path to gs://users.developer.core-os.net/ for
  misc developer builds (see the sketch after this list). Official builds
  go elsewhere and will just be configured in buildbot/jenkins, so some
  COREOS_OFFICIAL handling is gone.
- Automated builds of images go to a private bucket,
gs://builds.release.core-os.net which later gets copied to
gs://alpha.release.core-os.net and friends by core_promote.
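A rough sketch of how uploads map onto those buckets; the paths under each bucket and the file names are assumptions:

    # Misc developer builds (the new default upload path):
    gsutil cp coreos_developer_image.bin.bz2 "gs://users.developer.core-os.net/${USER}/"

    # Automated SDK/binary package builds (COREOS_DEV_BUILDS), downloadable
    # from http://builds.developer.core-os.net/:
    gsutil cp coreos-sdk-amd64-usr.tar.bz2 gs://builds.developer.core-os.net/sdk/

    # Automated image builds go to the private bucket and are copied to
    # gs://alpha.release.core-os.net and friends later by core_promote:
    gsutil cp coreos_production_image.bin.bz2 gs://builds.release.core-os.net/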
The key/cert authentication method doesn't work anymore, so just rely on
sourcing a file with the right env vars exported.
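In other words, something along these lines; the file name and variable values are illustrative:

    # ec2_env.sh -- sourced by the scripts instead of passing key/cert paths
    export AWS_ACCESS_KEY="AKIA..."
    export AWS_SECRET_KEY="..."
    export EC2_URL="https://ec2.us-east-1.amazonaws.com"

    # Usage:
    source ec2_env.sh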
Re-enable parallel copy.
Add a group option to the wrapper and custom Google Storage URL options.
Fix some copying issues:
- Don't set AMI permissions until the image is out of the pending state
  (see the wait-loop sketch after this list)
- Set the name and description properly
- Handle each region mostly in parallel (these Java tools use a lot of
  CPU for some reason, so parallelism is limited, hence the sleeps).
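The pending-state fix boils down to a wait loop like the following; the polling interval and the grep on the describe output are assumptions:

    # Wait for the copied AMI to leave the "pending" state before setting
    # its name, description, or launch permissions.
    while ec2-describe-images "${AMI_ID}" --region "${REGION}" | grep -q pending; do
        sleep 30
    done
    # now safe to set attributes on ${AMI_ID} in ${REGION}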
Less important but also included here:
- Add run.sh and test_ami2.sh, which are currently used in my release
  process. The alternate test script is used because the autotest stuff
  in the other script is broken right now.
Earlier today I observed networking between EC2 instances taking somewhere
between 40 and 50 seconds to start working, which caused the test to fail
despite the fact that everything eventually came up properly. Upping the
timeout to 90 seconds should better cope with the surprises Amazon has to
offer.
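The longer timeout is just a bigger budget on the reachability loop, roughly like this (host, user, and step size are illustrative):

    # Give the instance up to ~90 seconds for networking to come up before
    # failing the test.
    timeout=90
    until ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
              "core@${INSTANCE_IP}" true 2>/dev/null; do
        timeout=$(( timeout - 5 ))
        if [[ ${timeout} -le 0 ]]; then
            echo "instance never became reachable" >&2
            exit 1
        fi
        sleep 5
    done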
This is particularly important for the image availability pre-check:
without it we don't detect that the image is in fact unavailable when it
doesn't exist, and the 404 results in an error from bzip2.
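The pre-check is essentially a HEAD request that fails fast on a 404 instead of letting bzip2 choke on an error body; the URL variable and output file name below are illustrative:

    # Bail out early if the image URL is not actually available.
    if ! curl -sfIL "${IMAGE_URL}" >/dev/null; then
        echo "image not available: ${IMAGE_URL}" >&2
        exit 1
    fi
    curl -sfL "${IMAGE_URL}" | bunzip2 > coreos_production_ami_image.bin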
The build host will start generating production AMI disk images, so to
simplify the next step this script can automatically fetch them from
that location by version. The default sticks with the existing 'master'
versioning scheme. Also added logging and turned off -x by default to make
the output log more readable.
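Roughly the idea; the bucket layout and image file name are assumptions based on the buckets mentioned earlier:

    VERSION="${1:-master}"   # default sticks with the existing 'master' scheme

    # Fetch the officially built disk image for the requested version
    # instead of requiring a local build.
    gsutil cp "gs://builds.release.core-os.net/${VERSION}/coreos_production_ami_image.bin.bz2" .
    bunzip2 coreos_production_ami_image.bin.bz2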
Removing the zip_and_ship script since it isn't useful with officially
built disk images; it only works with locally built images and a very
particular EC2 host. A different long-term automation scheme will have
to be found.