`build_image` depends on access to the torcx manifest and on the
"content addressable nature" of the directory. We currently rely on the
torcx output root structure being preserved in the container image.
While we're moving the torcx output root out of the container image, preserve
its contents so that they can be restored from bincache.
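As a rough sketch of what that preservation could look like; the helper
names and the torcx root path below are assumptions, not the actual
names used by the scripts:

    torcx_root="build/torcx_output_root"                    # hypothetical path
    copy_to_buildcache "torcx/${vernum}" "${torcx_root}"/*  # hypothetical helper
    # ...later, restore the preserved contents before build_image runs:
    copy_from_buildcache "torcx/${vernum}" "${torcx_root}"  # hypothetical helper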
For embargoed releases it is useful to apply patches locally and build
with them before they are public. This allows us to push the same
patches to the repo during the Flatcar release when the embargo lifts.
The result is the same (as long as the scripts patches did not change
parts of the setup logic that ran before they got applied), but we can
build earlier and thus do the Flatcar release directly at the embargo
lift instead of having to delay the build until the patches are in the
repos.
A torcx manifest may contain paths and URLs as locations of
packages. There are two kinds of packages: vendored and
extra. Vendored packages normally have two locations, a path to the
directory inside the image where the package lives (which is why it is
called vendored) and a URL to the package on some remote
server. Extra packages only have a URL. But the URLs are only added
when we tell the build_torcx_store script to upload the packages at
the same time, which is what the old build pipeline was doing. With
the new pipeline, the upload happens as a separate step, so the
upload is disabled when invoking build_torcx_store, and the packages
do not get URLs set. This change went unnoticed, because the kola
test checking the generated torcx manifest only verified that there
is at least one location, either a path or a URL, and all the new
releases have no extra packages, only vendored ones.
When backporting the new pipeline to the old LTS, the kola tests
started to fail, because the old LTS had one extra package; this is
how I noticed the problem.
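For illustration, the locations of each package can be listed with
jq; this sketch assumes the usual torcx manifest layout
(value.packages[].versions[].locations):

    jq -r '.value.packages[] | .name as $n | .versions[] |
           "\($n) \(.version): \([.locations[]? | keys[]] | join(","))"' \
       torcx_manifest.json

With the bug present, an extra package prints an empty location list,
while vendored packages still show their path.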
We were running the run_sdk_container script, passing the value of a
variable named version to the script through the -v flag. But the
variable is defined nowhere. This worked under Jenkins, because the
Jenkins job has a version parameter that gets exported into the
environment under the same name. Running the script manually outside
Jenkins revealed the bug.
The script should have been using the vernum variable. The difference
between this variable and the version variable is that "version" is
of the form <channel>-<version>-<build_id>, whereas "vernum" comes
without the channel part. Fortunately, run_sdk_container strips the
channel part before using the value, so it makes no difference
whether we pass main-3333.0.0-some-id or just 3333.0.0-some-id.
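The stripping amounts to removing the prefix up to the first dash; a
minimal shell sketch of the equivalent logic (variable contents are
illustrative):

    version="main-3333.0.0-some-id"
    vernum="${version#*-}"   # drops the channel part -> "3333.0.0-some-id"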
When the build system runs the packages jobs for both architectures in
parallel and has to create a new tag, tagging fails due to a race
between the two jobs.
Move the git tagging to its own script that is run from a new top-level
job that starts the packages jobs for both architectures.
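A minimal sketch of how the top-level job can tag race-free before
fanning out the per-arch jobs (the ${tag} variable is an assumption,
and this is illustrative rather than the exact script):

    # Create the tag only once, before the packages jobs for both arches start.
    if ! git rev-parse -q --verify "refs/tags/${tag}" >/dev/null; then
        git tag -a "${tag}" -m "${tag}"
        git push origin "${tag}"
    fi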
It uses the SIGNER environment variable to decide whether the
signatures should be created or not. It expects the key of the SIGNER
to exist in GNUPGHOME, which is what gpg_setup.sh is already setting
up.
In some places we need to recursively change the owner of the
directory that contains the artifacts to be signed, otherwise we
would not be able to create the signature files there. This is
because some of the artifacts are either created inside the SDK
container (so the created files belong to root outside the container)
or are created with `sudo`.
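A hedged sketch of that flow; the artifacts_dir variable and the file
layout are assumptions:

    if [[ -n "${SIGNER:-}" ]]; then
        # Reclaim ownership so signature files can be created next to the artifacts.
        sudo chown -R "$(id -u):$(id -g)" "${artifacts_dir}"
        for f in "${artifacts_dir}"/*; do
            gpg --batch --local-user "${SIGNER}" --armor --detach-sign "${f}"
        done
    fi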
The functions source other files that define global variables, so
these would spill into the caller's shell unnecessarily. We will also
add some functionality that uses traps in follow-up commits, so it is
good to limit the scope of the traps too.
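One bash idiom that contains both the sourced globals and the traps is
to give a function a subshell body; a sketch under that assumption
(the function and file names are illustrative):

    sign_artifacts() (   # "( ... )" instead of "{ ... }": body runs in a subshell
        source ci-automation/ci_automation_common.sh   # its globals stay local
        tmpdir="$(mktemp -d)"
        trap 'rm -rf "${tmpdir}"' EXIT   # the trap is scoped to the subshell
        # ... actual work ...
    )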
When a nightly build was started that pushes the version file to the
branch, it did so only at the end of the build, causing the push to
fail if something else got merged in the meantime.
Push the version file early by generating it the same way it would be
generated by the run_sdk_container/bootstrap_sdk_container scripts.
In the case of the SDK, the version file gets the same version for the
OS and the SDK. Add some explanations about the version formats. Note
that the scripts will still rewrite the file, but this should be a
no-op.
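For illustration, pre-generating the version file could look like
this; the path and variable names are assumptions modeled on the
Flatcar version file format:

    printf '%s\n' \
        "FLATCAR_VERSION=${vernum}+${build_id}" \
        "FLATCAR_VERSION_ID=${vernum}" \
        "FLATCAR_BUILD_ID=${build_id}" \
        "FLATCAR_SDK_VERSION=${vernum}" \
        > sdk_container/.repo/manifests/version.txt

Here FLATCAR_SDK_VERSION reuses ${vernum} because, for an SDK build,
the OS and the SDK get the same version.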
The pipeline created two tags if an SDK was built, one for the SDK and
one for the OS build (which was either a free-standing tag or local
state equivalent to the existing tag of the same name). The nightlies
created update commits on the main branch even if no change was made,
while on the release branches we lacked these commits.
Create the release tag already in the nightly SDK bootstrap and reuse
it for the nightly OS build. Instead of relying on local state, check
out the existing tags explicitly. Extend the nightly update commit
logic to cover release branches and to detect whether we can skip
building because no changes were made.
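Detecting whether the build can be skipped boils down to checking for
an empty update commit; a minimal sketch:

    git add -A
    if git diff --cached --quiet; then
        echo "No changes since the last release tag; skipping the build." >&2
        exit 0
    fi
    git commit -m "Nightly update"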
This change adds copying of test results to the build cache server,
and adds the corresponding deletion to the garbage collector.
The patch also fixes an issue with torcx publishing (the manifest
publishing had the arch hard-coded).
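For illustration, the fix amounts to deriving the destination from the
job's architecture instead of using a literal; the helper and variable
names here are assumptions:

    # Before, the manifest destination contained a hard-coded arch string.
    copy_to_buildcache "torcx/manifests/${arch}/${vernum}" torcx_manifest.json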
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
This change updates the package build script to publish the torcx
manifest file to the build cache so it can be used by tests.
It also updates the generic test script to use the SDK container instead
of the packages container image, and to download and use the torcx
manifest from the build cache.
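A rough sketch of the download step in the test script; the build
cache URL layout and the server variable are assumptions:

    curl -fsSL -o torcx_manifest.json \
        "https://${BUILDCACHE_SERVER}/torcx/manifests/${arch}/${vernum}/torcx_manifest.json"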
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
ci-automation builds on the SDK container and simplifies CI automation
build tasks (SDK bootstrap, SDK container, packages, image, VMs).
See ci-automation/README.md for a brief introduction.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>