The initial MVP of the OEM sysext usage we release won't have updates
for the sysext image and, therefore, it is not bound to the OS version.
The special name suffix used instead of the version hints bootengine to
use the image if no matching version is found. The name will also be
used as a hint for update-engine to clean the image up when versioned
sysext images arrive.
We don't have an update process for the OEM sysexts implemented yet, so
use a fake "initial" version for them and make them independent of the
OS version.
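For illustration, the naming could look like this (the paths and the
version number are hypothetical; the exact pattern is an assumption):

    /usr/share/oem/oem-qemu-initial.raw  # "initial" instead of a real version
    /usr/share/oem/oem-qemu-1.2.3.raw    # a future versioned image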
It isn't doing much, as nothing QEMU-specific was being installed into
the OEM partition.
With that done, we opt into building an OEM sysext image for the QEMU
platform.
This package will be used for the sysext image, instead of for
installing files into /usr/share/oem. This means that we can drop some
files or move them elsewhere. The systemd service file is not needed,
because it is installed by the app-emulation/wa-linux-agent package
now. This also means that the ignition file has lost its purpose. The
grub.cfg and oem-release files must be installed in /usr/share/oem,
next to the sysext raw image file, so handling of these files is moved
to the newly added coreos-base/common-oem-files package. The `eject`
symlink to `/usr/bin/true` is now installed by the newly added
manglefs.sh script.
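The symlink step in manglefs.sh might look roughly like this (a
sketch; the script's actual contents and the rootfs variable are
assumptions):

    # make eject a no-op inside the sysext image
    ln -sf /usr/bin/true "${rootfs}/usr/bin/eject"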
With this done, we also opt into building an OEM sysext image for the
Azure platform.
That way we can see a report of what emerge is going to do and the
status of the use flags for the installed packages. The downside is
that we are going to get reports about using a deprecated and
unsupported profile in even more places.
Emerge flags are cryptic in general, and short flags even more so, so
expand them. While at it, I noticed some places where bash arrays
could be used, so convert those places too.
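For illustration, a before/after of one such spot (which flags were
expanded where is an assumption):

    # before: short flags in a plain string
    EMERGE_FLAGS="-v -u -D -N"
    # after: long flags in a bash array, expanded with proper quoting
    EMERGE_FLAGS=( --verbose --update --deep --newuse )
    emerge "${EMERGE_FLAGS[@]}" some/package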
There is some cruft left after grub hash generation. After the
contents are zipped into an archive, they don't need to be around any
more.
Try to remove the rootfs directory after unmounting the
image. disk_util can recreate it if there is a need for it.
Remove the build directory used for generating ACI images - it's not
needed after successful installation.
This change adds a new flag called --image_compression_format, which
allows us to output the final VM image in one of the supported formats:
bz2 (default), gz, zip or none.
If the compression format is "none" or "", the image will not be
compressed.
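Example invocations (assuming the flag is consumed by image_to_vm.sh,
which is an inference from context):

    ./image_to_vm.sh --format=qemu --image_compression_format=gz
    ./image_to_vm.sh --format=qemu --image_compression_format=none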
Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>
Azure requires disks to be fixed-size VHD files when uploading to blob
storage in order to create image/gallery objects from them. This is
documented here[1]. To prevent mistakes from happening, create disks in
that format directly so that any Azure-compatible tool can upload them,
though azcopy is recommended because it handles their sparseness best.
This has not been an issue for us so far because kola uses code from an older
utility that transparently handled the dynamic-to-fixed-size conversion for VHD
files (azure-vhd-utils). But people working with these things for the first
time fall into this trap.
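One way to produce a fixed-size VHD from a raw image is qemu-img (a
sketch; the file names are placeholders and whether the build uses
exactly these options is an assumption):

    qemu-img convert -f raw -O vpc -o subformat=fixed,force_size=on \
        flatcar_production_azure_image.bin flatcar_production_azure_image.vhd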
[1]: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/create-upload-generic#resizing-vhds.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The vmlinuz kernel image gets installed to /usr/boot/ but isn't usable
for dm-verity until it gets copied over to /boot/flatcar/ and the hash
gets embedded at a particular offset. The file in /usr/boot/ uses space
while serving no real purpose as long as dm-verity is used.
Delete the vmlinuz file under /usr/boot/ to free up space. When
generating the ISO image we use the vmlinuz file from /boot/flatcar/
which also has the advantage that we only distribute a single vmlinuz
file with one particular checksum.
The `--jobs` parameter that some scripts defined was not used anywhere
in jenkins or mantle. So the value of the parameter always ended up
being equal to `${NUM_JOBS}` set by `common.sh`. Also, even if the
`--jobs` parameter was used for some script, that script usually
didn't forward the jobs value to other scripts, so the other scripts
ended up using `${NUM_JOBS}` again. Also, the `${FLAGS_jobs}` variable
was used by some functions in the build library, and those functions
were sometimes invoked by scripts that didn't define the
`${FLAGS_jobs}` variable. It is tedious to track which script should
actually define the parameter, and where it should be forwarded.
Just get rid of this half-working pretense. If you want to affect how
many jobs `emerge` uses, export the `NUM_JOBS` environment variable
before calling any script.
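For example, to limit emerge to four parallel jobs (the job count is
arbitrary):

    NUM_JOBS=4 ./build_packages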
For `EMERGE_FLAGS` and `REBUILD_FLAGS` we unconditionally set the
`--jobs` flag's value to `${NUM_JOBS}` because they are passed to
`emerge`. On the other hand, we drop the `--jobs` parameter from the
`UPDATE_ARGS` variable, because this variable is passed to `setup_board`
or `update_chroot`, which don't have this flag any more.
Write out an iPXE script file for Packet.
The script uses relative URLs to refer to
the other PXE files and thus can be copied
along with the files to any server.
This is useful because it saves the creation
of an iPXE script for a release/channel on a
third-party service. For CI testing it is
also helpful because the script not only
ends up on the release server but also already
in the Google buckets, referring to unpublished
PXE payloads.
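A sketch of such a script (the payload file names and kernel arguments
are assumptions):

    #!ipxe
    kernel coreos_production_pxe.vmlinuz initrd=coreos_production_pxe_image.cpio.gz coreos.first_boot=1
    initrd coreos_production_pxe_image.cpio.gz
    boot

Because the kernel and initrd paths are relative, iPXE resolves them
against the URL the script itself was fetched from.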
The custom sys-firmware/edk2 package has been replaced by Gentoo's
sys-firmware/edk2-ovmf package now that only amd64 is supported.
This partially reverts 1761d9d071.
This will run on ESXi 6.0 and above, and all non-EOL versions of Fusion
and Workstation.
Also enable a few useful VMX features (HPET; CPU and memory hotplug) that
are added by VMware Workstation 14.1.1's Change Hardware Compatibility
wizard. Correspondingly, enable CPU/memory hotplug in the OVF; omit
HPET because there's no obvious way to enable it.
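The corresponding .vmx settings might look like this (the key names
follow VMware's configuration format; whether these exact lines are
what this change adds is an assumption):

    hpet0.present = "TRUE"
    vcpu.hotadd = "TRUE"
    mem.hotadd = "TRUE"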
This just sets the code file size to the var file size, so it gets
zero-padding without having to pipe commands together.
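One way to do this in shell is coreutils truncate (file names are
placeholders):

    # extend the code file with zeros up to the size of the vars file
    truncate --reference=ovmf_vars.fd ovmf_code.fd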
From: David Michael <david.michael@coreos.com>
[Rebased]
Signed-off-by: Geoff Levand <geoff@infradead.org>
This reverts the vagrant image back to using oem-vagrant because we
don't want to break the existing images. It moves the new,
Ignition-powered virtualbox flavor of vagrant into a new image.
This changes the oem-package for vagrant to vagrant-virtualbox,
which uses ignition instead of cloud-config and sets the oem id
to "virtualbox" so that ignition can handle the machine correctly.
Allow "coreos-install -o vmware_raw" to install Container Linux with
the vmware OEM.
Use base DISK_LAYOUT to reduce the minimum disk size.
Fixes coreos/bugs#359.
Two new image types have been added:
1. parallels - this produces VM images with extension pvm.tgz that can be loaded directly into Parallels Desktop
2. vagrant_parallels - this produces a Vagrant box that works with the Parallels Vagrant provider (http://parallels.github.io/vagrant-parallels/)
Just like vmdk and others we rely on qemu-img to convert raw images. Support for Parallels disk images was added to qemu-img in version 2.4.
I also removed the box files from the actual image since they are not needed in /usr/share/oem.
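The conversion step is analogous to the vmdk one (a sketch; file names
are placeholders):

    qemu-img convert -f raw -O parallels coreos_production_image.bin disk.hdd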
Signed-off-by: Bassam Tabbara <bassam.tabbara@quantum.com>
The ACI root is created by reusing the create_prod_image function
to install a base meta-package. It then runs a script to customize
the file structure as required by agent software (if necessary),
writes a manifest file from a supplied template, and then packages
it all into a tar file.
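The final packaging step might look roughly like this (per the appc
spec an ACI is a tar of a manifest plus a rootfs directory; the paths
are placeholders):

    tar -C "${BUILD_DIR}" -czf "${name}.aci" manifest rootfs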
This resolves two issues:
- Large dependencies are *never* built during image_to_vm;
build_packages must now handle that.
- Since build_packages can't reasonably do the oem-* packages (they all
conflict with each other) we do want to build them from the ebuild.
This is now enforced so an old binpkg is never used. This resolves
confusing issues people have always had when editing oem
ebuilds but getting a stale build instead.
Most vm images have an expanded root partition to make them practical to
use as-is. Some deployments may not want such a large root, putting most
storage on other volumes.
This variable was semi-deprecated ages ago so `version.txt` could follow
a similar variable naming pattern to `os-release`. Finally drop usage of
it here in favor of `$COREOS_VERSION`.
This reverts commit 39bb800f16.
This change disabled a number of features so it isn't suitable for the
generic VMware templates. We need to re-trace our steps to list exactly
what tools/systems weren't accepting the linux26 type.
Add qemu_uefi_secure target for building Secure Boot images. These are
identical to qemu_uefi images with the exception that the test keys have
been installed into the flash image, enabling Secure Boot by default. In
addition, sign the grub binary with the test keys during build when
producing unofficial images.
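One common way to sign an EFI binary with test keys is sbsign from
sbsigntools; whether the build uses exactly this tool and these file
names is an assumption:

    sbsign --key DB.key --cert DB.crt \
        --output grubx64.efi.signed grubx64.efi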
To aid testing things under Xen it helps to have a machine locally that
actually runs Xen! This isn't a particularly great setup but it works
well enough to simplify my own testing. Must be used with a developer
image and packages built with `USE=vm-testing` set to include the Xen
userspace tools.
Version 4 is too low. Some VMware products even crash trying to
upgrade it to a greater version (VMware Fusion 6 Pro). Having at
least 7 will allow us to use some modern features in most VMware
products, such as enabling vmxnet3 virtual network adapters or adding
much more memory and more CPU cores to virtual machines.
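The hardware version itself is a single .vmx setting (the key name
follows VMware's configuration format):

    virtualHW.version = "7"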
This sets the IMG_FORCE_OEM_PACKAGE variable to the supplied string. If a
':' is present, what follows it gets put in the IMG_FORCE_OEM_USE variable
and what precedes it stays in IMG_FORCE_OEM_PACKAGE.
_get_vm_opt() has been modified to generally support forced overrides such
as this one; simply set variables named IMG_FORCE_$opt.
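A sketch of the split in shell (the surrounding parsing code and the
variable holding the flag value are assumptions):

    if [[ "${value}" == *:* ]]; then
        IMG_FORCE_OEM_USE="${value#*:}"
        IMG_FORCE_OEM_PACKAGE="${value%%:*}"
    else
        IMG_FORCE_OEM_PACKAGE="${value}"
    fi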
Now you can do things like:
for fmt in cloudstack \
           digitalocean \
           ec2-compat:ec2 \
           ec2-compat:openstack \
           ec2-compat:brightbox \
           exoscale \
           gce \
           hyperv \
           rackspace \
           rackspace-onmetal; do
    ./image_to_vm.sh --format=qemu --oem_pkg=$fmt
    ../build/images/amd64-usr/latest/coreos_developer_qemu.sh -curses
done
rather than having to modify build_library/vm_image_util.sh to test oem
builds in qemu.
We have long since stopped installing anything to the /boot directory of
the root filesystem. Mount the ESP partition to /boot for consistency
with the discoverable partition spec.
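An fstab-style illustration of the result (the partition label is the
one used for the ESP in the stock disk layout; whether the mount comes
from fstab, a mount unit, or gpt-auto is not specified here):

    /dev/disk/by-partlabel/EFI-SYSTEM  /boot  vfat  defaults  0  2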