Instead of depending on the default value of build_image's base_sysext
parameter, create a file that explicitly lists which base sysexts will
be built for each architecture. The file can be sourced by other
scripts that need this kind of information. Currently, image.sh and
image_changes.sh use this file.
`build_image` depends on access to the torcx manifest and the "content
addressable nature" of the directory. We currently rely on the torcx output
root structure being preserved in the container image.
While we're moving the torcx output root out of the container image, preserve
its contents so that they can be restored from bincache.
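A rough sketch of the idea, with an illustrative helper name and path (not
the actual code):

    # Publish the torcx output root to bincache before it is dropped from
    # the container image so a later step can restore it.
    torcx_output_root="__build__/torcx"   # example path only
    copy_to_buildcache "torcx/${FLATCAR_VERSION}" "${torcx_output_root}"/*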
For embargoed releases it is useful to apply patches locally to build
with them before they are public. This allows pushing the same patches
to the repo during the Flatcar release when the embargo lifts. The
result is the same (as long as the script patches do not change parts
of the setup logic that runs before they are applied); we can just
build earlier and thus do the Flatcar release right at the embargo lift
instead of having to delay the build until the patches are in the
repos.
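For example, the local patches could be applied roughly like this before the
build starts (the patch directory is just an example location):

    # Apply local, not-yet-public patches on top of the scripts checkout.
    # ../embargoed-patches is an example path, not a fixed convention.
    for p in ../embargoed-patches/*.patch; do
        git am "${p}"
    done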
To make reviewing the image changes and the changelog easier, and to
allow iterating on them without rebuilding the image when fixes are
needed, move this logic to its own file that a new job can call.
The image comparison was done against the old release in the channel
we release to instead of the previous release with the same major
version. This means that when a channel transition happens we see a
large diff instead of the diff against the previous release. While not
bad for finding problems, this is normally not needed. However, we want
two changelogs generated: one against the old release in the channel we
release to and, when a transition happens, one against the previous
release with the same major version. There was no changelog printing
yet; this is added now.
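A sketch of how the two comparison targets could be picked (the helper
functions and the transition check here are hypothetical):

    # Always compare against the last release in the target channel; on a
    # channel transition also compare against the last release that shares
    # the new image's major version.
    channel_ref="$(get_last_release_in_channel "${channel}")"
    generate_changelog "${channel_ref}" "${version}"
    if [ "${channel_ref%%.*}" != "${version%%.*}" ]; then
        major_ref="$(get_last_release_for_major "${version%%.*}")"
        generate_changelog "${major_ref}" "${version}"
    fi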
The new build pipeline already compresses images but uploads both the
compressed and uncompressed files because the whole build folder gets
uploaded.
Add a new flag "--only_store_compressed" to the image generation which
deletes the uncompressed file after compression is done. Uncompressed
images are still supported if specified in the flag
"image_compression_formats".
Closes https://github.com/flatcar-linux/Flatcar/issues/793
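Conceptually the flag behaves roughly like this (a simplified sketch; the
compress_disk_image helper and the "none" format value are illustrative):

    # Produce each requested format; keep the uncompressed original only if
    # it was explicitly requested or --only_store_compressed is not set.
    keep_uncompressed="${FLAGS_FALSE}"
    for format in ${FLAGS_image_compression_formats//,/ }; do
        if [ "${format}" = "none" ]; then
            keep_uncompressed="${FLAGS_TRUE}"
        else
            compress_disk_image "${image}" "${format}"
        fi
    done
    if [ "${FLAGS_only_store_compressed}" -eq "${FLAGS_TRUE}" ] && \
       [ "${keep_uncompressed}" -eq "${FLAGS_FALSE}" ]; then
        rm -f "${image}"
    fi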
The original pipeline has package-diff commands to print out image
differences compared to the last release. This is used for the release
Go/No-Go QA checks.
Add the same logic to the new pipeline.
The image job builds an image container that is multiple GB in size
and takes more than 10 minutes to load in the vms job. The vms job can
also do its work by running in the packages container from the packages
job, provided it first fetches the built image from bincache and the
image job copies it there.
Skip generating the image container and instead use the packages
container for VM image building by copying the image folder first to
bincache and then retrieving it from there. While reworking this we
also address the issue that the VMs container had used the same name
for both architectures, causing a race when both run in parallel on
the same worker.
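In rough terms, the handover could look like this (helper, variable, and
container names are illustrative):

    # Image job: publish the image directory to bincache.
    copy_to_buildcache "images/${arch}/${version}" "${image_dir}"/*
    # VMs job: fetch it back and run inside the packages container, using an
    # architecture-specific container name to avoid the naming race.
    copy_from_buildcache "images/${arch}/${version}" "${local_image_dir}"
    vms_container_name="flatcar-packages-${arch}-${version}"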
The functions are sourcing other files that define global variables,
so they will spill into the caller's shell unnecessarily. We will also
add some functionality that uses traps in follow-up commits, so it's
good to limit the scope of traps too.
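For example, defining such a function with a subshell body keeps sourced
globals and traps from leaking into the caller (a minimal sketch with an
illustrative inner helper):

    image_build() (
        # Parentheses instead of braces: the body runs in a subshell, so
        # variables from sourced files and the EXIT trap stay local.
        source ci-automation/image.sh
        trap 'echo "cleaning up"' EXIT
        _image_build_impl "${@}"   # hypothetical inner implementation
    )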
The image needs to be set into official mode through a helper script
(see jenkins/images.sh) and the COREOS_OFFICIAL variable needs to be
set for prod_image_util.sh/build_image_util.sh/grub_install.sh.
Until now, the new pipeline produced only "developer" images.
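For example, the helper could invoke the build roughly like this (a sketch,
not the exact call):

    # COREOS_OFFICIAL=1 switches prod_image_util.sh / build_image_util.sh /
    # grub_install.sh into official mode; the channel becomes the group.
    COREOS_OFFICIAL=1 ./build_image --group "${channel}" prod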
Based on a git tag like "alpha-1234.0.0", set the channel (group) for
the image and also use this logic when finding the channel in the QEMU
update test.
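A minimal sketch of deriving both parts from such a tag:

    # "alpha-1234.0.0" -> channel "alpha", version "1234.0.0"
    tag="alpha-1234.0.0"
    channel="${tag%%-*}"
    version="${tag#*-}"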
ci-automation builds on the SDK container and simplifies CI automation
build tasks (SDK bootstrap, SDK container, packages, image, VMs).
See ci-automation/README.md for a brief introduction.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>