The special Brightbox image uses the OpenStack userdata in Ignition but
lacks Afterburn support. Using the OpenStack image directly actually
works and also enables Afterburn, so we can drop the special image.
Don't build a special image for Brightbox but recommend using the
OpenStack images directly. A symlink is added to keep downloads working
for user scripts that hardcode the old file name.
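For illustration, a minimal sketch of such a symlink (the exact file
names are assumptions, not taken from this change):

    # Old Brightbox name keeps resolving to the OpenStack image
    ln -s flatcar_production_openstack_image.img.bz2 \
        flatcar_production_brightbox_image.img.bz2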
For Brightbox we can use the OpenStack image, but the import only works
with unpacked images. Since we enabled internal qcow2 compression, the
external .gz or .bz2 compression doesn't provide any benefit and only
makes the import more complicated.
In addition, provide the OpenStack image without external compression.
The other files are kept for now, but we could also delete them if we
announce this in advance.
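As a sketch of the internal compression that makes the external layer
redundant (file names are assumptions):

    # -c writes a compressed qcow2, so no external .gz/.bz2 is needed
    qemu-img convert -c -O qcow2 \
        flatcar_production_image.bin \
        flatcar_production_openstack_image.img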
The official release variable is used to decide whether a build ID gets
appended to FLATCAR_VERSION (or VERSION in os-release). It was set for
the image job but not for the vms job, so the build_sysext script got
the build ID appended to its FLATCAR_VERSION, causing a mismatch with
the version from the image job.
Set the official release variable in the vms job as well.
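A sketch of the gating logic, assuming the variable is COREOS_OFFICIAL
as used elsewhere in the scripts:

    # Only non-official builds get the build ID appended
    if [ "${COREOS_OFFICIAL:-0}" -ne 1 ]; then
        FLATCAR_VERSION="${FLATCAR_VERSION}+${FLATCAR_BUILD_ID}"
    fi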
Getting the contents of the directory on the buildcache involves
running rsync with an ssh invocation to log in as the bincache user.
This won't work locally unless the user gets hold of the SSH key that
allows logging in to the buildcache as the bincache user.
Replace it with downloading the two files that are actually needed for
building VMs: the image file and the version file. This just uses curl
and is accessible to everyone.
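A rough sketch of the fetch, where the bincache URL layout is an
assumption:

    BASE_URL="https://bincache.flatcar-linux.net/images/${ARCH}/${FLATCAR_VERSION}"
    curl -fsSLO "${BASE_URL}/flatcar_production_image.bin.bz2"
    curl -fsSLO "${BASE_URL}/version.txt"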
For embargoed releases it is useful to apply patches locally and build
with them before they are public. This allows pushing the same patches
to the repo during the Flatcar release at the embargo lift. The result
is the same (as long as the scripts patches did not change parts of the
setup logic that ran before they got applied); we can just build
earlier and thus do the Flatcar release directly at the embargo lift
instead of having to delay the build until the patches are in the
repos.
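A hypothetical example of applying such patches before the build (the
patch directory and layout are made up for illustration):

    git -C scripts am /secure/embargo/scripts/*.patch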
The new build pipeline already compresses images but uploads both the
compressed and the uncompressed files because the whole build folder
gets uploaded.
Add a new flag "--only_store_compressed" to the image generation which
deletes the uncompressed file after compression is done. Uncompressed
images are still supported when requested via the
"--image_compression_formats" flag.
Closes https://github.com/flatcar-linux/Flatcar/issues/793
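A hypothetical invocation combining the new flag with the existing
format selection (arguments other than the two flags are assumptions):

    ./build_image --image_compression_formats=bz2 \
        --only_store_compressed prod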
The image job builds an image container that is multiple GBs big and
takes more than 10 minutes to load in the vms job. The vms job can also
do its work by running in the packages container from the packages job,
provided it first fetches the built image from bincache and the image
job copies it there.
Skip generating the image container and instead use the packages
container for VM image building, by first copying the image folder to
bincache and then retrieving it from there. While reworking this, we
also address the issue that the VMs container used the same name for
both architectures, causing a race when both ran in parallel on the
same worker.
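A sketch of an architecture-qualified name that avoids the race (the
naming scheme is an assumption):

    # One container name per architecture instead of a shared one
    vms_container="flatcar-vms-${arch}-${vernum}"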
It uses the SIGNER environment variable to decide whether the
signatures should be created. It expects the key of the SIGNER to exist
in GPGHOME, which is what gpg_setup.sh already sets up.
In some places we need to recursively change the owner of the directory
that contains the artifacts to be signed, otherwise we wouldn't be able
to create the new signature files there. This is because some of the
artifacts are either created inside the SDK container (so the created
files belong to root outside the container) or are created with `sudo`.
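A sketch of the signing step; everything beyond SIGNER and GPGHOME is
an assumption:

    if [ -n "${SIGNER:-}" ]; then
        # Re-own root-created artifacts so the signature files can be
        # written next to them
        sudo chown -R "$(id -u):$(id -g)" "${artifact_dir}"
        for f in "${artifact_dir}"/*; do
            gpg --homedir "${GPGHOME}" --local-user "${SIGNER}" \
                --armor --detach-sign "${f}"
        done
    fi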
The functions source other files that define global variables, which
would spill into the caller's shell unnecessarily. We will also add
functionality that uses traps in follow-up commits, so it's good to
limit the scope of the traps too.
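A sketch of the intended scoping: defining a function body with ( )
instead of { } runs it in a subshell, so sourced globals and traps do
not leak into the caller (names are hypothetical):

    vm_build() (
        source ci-automation/ci_automation_common.sh
        tmpdir="$(mktemp -d)"
        trap 'rm -rf "${tmpdir}"' EXIT
        # ... actual build steps ...
    )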
The kola test scripts are named after the platforms. The image naming
is also quite difficult to know and remember, e.g., whether "ami" or
"ami_vmdk" is needed for AWS tests, and whether it's "vmware" or
"vmware_ova".
To address these problems, the vms build stage now accepts the platform
names as format input, and for each platform it automatically generates
the image types needed to run the tests.
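A sketch of the resulting mapping (the exact table is an assumption):

    case "${platform}" in
        aws)           formats="ami_vmdk" ;;
        vmware)        formats="vmware_ova" ;;
        equinix_metal) formats="packet" ;;
        *)             formats="${platform}" ;;
    esac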
This is required to keep "packet" in the SDK lingo while the user can
use the "equinix_metal" term.
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
Co-authored-by: Krzesimir Nowak <knowak@microsoft.com>
Add suggestions by @pothos from code review
- use `cp --reflink=auto`
- spelling error fixes
Co-authored-by: Kai Lüke <pothos@users.noreply.github.com>
ci-automation builds on the SDK container and simplifies CI automation
build tasks (SDK bootstrap, SDK container, packages, image, VMs).
See ci-automation/README.md for a brief introduction.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>