- updated GitHub Actions for runc, containerd, and docker to no
longer handle nonexistent ebuilds in app-torcx/
- removed spurious package_run_dependencies from build_image_util.sh
- build_sysext: generate pkginfo before the mangle script runs;
  use zstd for compression; add a CLI flag to select the compression
- ci_automation_common.sh: remove spurious `/` from match string
- coreos, board-packages, bootengine: bump ebuild revisions
- kernel commonconfig: add squashfs zstd support
Signed-off-by: Thilo Fromm <thilofromm@microsoft.com>
We push a commit with the nightly SDK tag to the main branch if the
SDK was built from the main branch, which is what happens when we
build the nightly intermediate SDK. The final nightly SDK, however,
is not built from the main branch but from the nightly intermediate
SDK tag. Both point to the exact same commit; the difference is in
what `git rev-parse --abbrev-ref HEAD` returns for each of them. When
the main branch is checked out, the command returns "main". When the
nightly intermediate SDK tag is checked out, it returns "HEAD". So
when the final nightly SDK is being built, the command returns a
string different from "main" and thus decides not to push the commit
with the final nightly SDK tag to the main branch. Rework the logic
to assume that if `git rev-parse HEAD` and `git rev-parse
origin/main` return the same commit hash (and it's the nightly build
and all that) then the commit should be pushed.
We use "origin/main" instead of just "main" in case the main branch
was not checked out before for some reason (this may come up when
testing with different names for the main branch).
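A minimal sketch of the reworked check (the surrounding nightly-build
conditions are elided):

    # `git rev-parse --abbrev-ref HEAD` prints "HEAD" when a tag is
    # checked out, so compare commit hashes instead of branch names.
    if [ "$(git rev-parse HEAD)" = "$(git rev-parse origin/main)" ]; then
        git push origin "HEAD:refs/heads/main"
    fi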
Since https://github.com/flatcar/scripts/pull/950 was merged, the
tarball files `flatcar-{packages,sdk}-*.tar.zst` have been created
with mode 0600 instead of 0644. As a result, the files with mode 0600
were uploaded to bincache, but afterwards `copy-to-origin.sh`, which
in turn runs rsync from bincache to the origin server, could not read
the tarballs.
To fix that, it is necessary to chmod the files from 0600 to 0644 to
make them readable by rsync during the release process.
All of that happens because zstd sets the mode of the output file to
0600 for temporary files to avoid a race condition.
See also https://github.com/facebook/zstd/pull/1644 and
https://github.com/facebook/zstd/pull/3432.
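The shape of the fix, as a minimal sketch (the file name is a
placeholder):

    # zstd leaves its output with mode 0600 (see the PRs above),
    # so loosen the permissions before uploading to bincache.
    zstd -T0 -o flatcar-sdk-amd64.tar.zst flatcar-sdk-amd64.tar
    chmod 0644 flatcar-sdk-amd64.tar.zst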
We currently use gzip together with pigz (parallel gzip) for importing
container images, and this is a lengthy operation (it takes multiple
minutes). By moving to zstd we gain on all fronts: zstd produces
smaller files, and is faster to compress and decompress than pigz
while using fewer resources.
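For illustration, the import pipeline before and after (image names
are placeholders):

    # before: gzip, decompressed in parallel with pigz
    pigz -d -c flatcar-sdk.tar.gz | docker load
    # after: zstd decompresses the same stream faster with less CPU
    zstd -d -c flatcar-sdk.tar.zst | docker load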
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
For embargoed releases it is useful to apply patches locally to build
with them before they are public. This allows us to push the same
patches to the repo during the Flatcar release at the embargo lift.
The result is the same (as long as the scripts patches did not change
parts of the setup logic that ran before they got applied); we can
just build earlier and thus do the Flatcar release directly at the
embargo lift instead of having to hold back the build until the
patches are in the repos.
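What applying the patches could look like, as a minimal sketch (the
patch directory is an assumption, not part of the actual change):

    # Apply the embargoed patches on top of the checked-out scripts
    # repo before starting the build; the very same patches are
    # pushed to the repo once the embargo lifts.
    git am /path/to/embargoed-patches/*.patch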
The container image was only created if it didn't exist locally. This
would result in fixes not reaching a downstream job that is scheduled
to a different Jenkins worker node holding a stale copy.
For the build automation we will now always download the latest
container tarball, based on comparing the image ID from a new
artifact, and for registry images we pull the container image to make
sure that we don't use a stale copy when we rebuild.
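A minimal sketch of the image ID comparison (the artifact name and
variables are assumptions):

    # The new artifact carries the image ID of the published tarball;
    # reload the tarball only when the local copy differs.
    remote_id="$(cat container-image-id.txt)"
    local_id="$(docker image inspect --format '{{.Id}}' "${image}" 2>/dev/null || true)"
    if [ "${local_id}" != "${remote_id}" ]; then
        docker load < "${image_tarball}"   # (decompression elided)
    fi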
When there is no SDK container image in the registry, the fallback
looks at bincache, but bincache isn't backed up and may be cleaned of
old releases. While this won't be the regular case, the container
image registry may be unavailable (or renamed, as happened now), or
people may want to rerun the image job, which relies on the packages
container.
I found a duplicate function and verified that it's the only one via

    comm -12 <(sort ci-automation/ci_automation_common.sh) <(sort sdk_lib/sdk_container_common.sh) | grep function

I'm not sure if this is due to a case where we only import one of the
files but can't import the other, hence I'm not deleting it now.
This failed when used from `( secret_to_file ... VAR ; cat $VAR )`
because `( )` starts a new subshell with its own PID, and the
/proc/PID/fd/X path returned by secret_to_file was then using the
wrong PID.
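A minimal demonstration of the underlying bash behavior (not the
actual helper):

    # In a subshell, $$ still expands to the parent shell's PID while
    # $BASHPID is the subshell's own PID, so a /proc/<PID>/fd/... path
    # built in one process is wrong when read from the other.
    echo "top level: \$\$=$$ BASHPID=$BASHPID"     # both the same
    ( echo "subshell:  \$\$=$$ BASHPID=$BASHPID" ) # BASHPID differs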
I made a mistake and wrote a version like main-3363-0.0-stuff (note
the dash instead of a dot after the first number). Surprisingly, the
build chugged along just fine almost until the end of the image job;
it only detected the invalid version string when the job wanted to
create a version.txt file:
ERROR build_image: script called: build_image '--board=amd64-usr' '--group=developer' '--output_root=/home/sdk/build/images' '--only_store_compressed' '--torcx_root=/home/sdk/build/torcx' 'prodtar' 'container'
ERROR build_image: Backtrace: (most recent call is last)
ERROR build_image: file build_image, line 196, called: split_ver '3363' 'SPLIT'
ERROR build_image: file common.sh, line 192, called: die 'Invalid version string '3363''
ERROR build_image:
ERROR build_image: Error was:
ERROR build_image: Invalid version string '3363'
Let's have a stricter version check at the beginning of the build
process, so that the process fails sooner rather than later.
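A minimal sketch of such an early check (the exact accepted pattern
is an assumption):

    # Fail fast on malformed versions like "main-3363-0.0-stuff";
    # the MAJOR.MINOR.PATCH core must be dot-separated.
    if ! [[ "${version}" =~ ^[a-zA-Z0-9-]+-[0-9]+\.[0-9]+\.[0-9]+(-.+)?$ ]]; then
        echo "Invalid version string '${version}'" >&2
        exit 1
    fi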
The image job builds an image container that is multiple GBs big and
takes >10 mins to be loaded in the vms job. The vms job can also do
its work by running from the packages container of the packages job,
provided it first fetches the built image from bincache and the image
job copies it there.
Skip generating the image container and instead use the packages
container for VM image building, by copying the image folder to
bincache first and then retrieving it from there. While reworking
this we also address the issue that the VMs container had used the
same name for both architectures, causing a race when both run in
parallel on the same worker.
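A rough sketch of the reworked flow (the helper name, URL variable,
and container names are assumptions):

    # image job: publish the image folder to bincache
    copy_to_bincache "${image_dir}" "images/${arch}/${version}/"
    # vms job: fetch the image back and run from the packages container,
    # using an arch-specific container name to avoid the race
    curl -fsSL -O "${BINCACHE_URL}/images/${arch}/${version}/flatcar_production_image.bin.bz2"
    docker run --name "flatcar-vms-${arch}" "${packages_container}" ...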
It uses the SIGNER environment variable to decide whether the
signatures should be created or not. It expects the key of the SIGNER
to exist in GPGHOME, and that's what gpg_setup.sh is already doing.
In some places we need to recursively change the owner of the
directory that contains the artifacts to be signed, otherwise we
won't be able to create new files with signatures there. This is
because some of the artifacts are either created inside the SDK
container (so the created files belong to root outside the container)
or are created with `sudo`.
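A minimal sketch of the signing step (variable names are
assumptions):

    if [ -n "${SIGNER:-}" ]; then
        # Artifacts may belong to root (created inside the SDK
        # container or via sudo), so take ownership first.
        sudo chown -R "$(id -u):$(id -g)" "${artifacts_dir}"
        gpg --batch --local-user "${SIGNER}" --armor --detach-sign "${artifacts_dir}/${artifact}"
    fi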
The pipeline created two tags if an SDK was built: one for the SDK
and one for the OS build (which was a free-standing tag or local
state that was equivalent to the existing tag of the same name). The
nightlies created update commits on the main branch even if no change
was done, and on the release branches we lacked these commits.
Create the release tag in the nightly SDK bootstrap already and reuse
it for the nightly OS build. Instead of relying on local state, check
out the existing tags explicitly. Extend the nightly update commit
logic to cover release branches and detect whether we can skip
building because no changes were done.
The nightly SDK image is not pushed to a registry but has to be
downloaded from the build server as a tarball.
Fall back to the tarball import for a better user experience.
To reuse the CI logic, it had to support the "docker" env variable.
The pigz container is not always needed if the user has pigz
available.
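A minimal sketch of the fallback (variable names and the download URL
are assumptions):

    docker="${docker:-docker}"  # the CI logic selects the runtime via this variable
    if ! ${docker} image inspect "${sdk_image}" >/dev/null 2>&1; then
        # use pigz when available, plain gzip otherwise (no pigz container needed)
        decomp=gzip; command -v pigz >/dev/null && decomp=pigz
        curl -fsSL "${BUILD_SERVER_URL}/${sdk_tarball}" | ${decomp} -d -c | ${docker} load
    fi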
This change has sdk_bootstrap update the origin branch when run from
the main branch, updating the SDK and OS version in 'main' for each
SDK bootstrap build.
Release / maintenance branches have the SDK version set in the
versionfile at release time, but main was never updated.
Updating the versionfile in main when a new SDK is built ensures that
dev branches based on main will also use the correct SDK version
(e.g. in subsequent CI builds).
- Git author configuration moved to the tagging function and was put
under a condition so as to not pollute people's workspaces.
- curl is now less verbose since it was spamming the logs with TLS
debug information.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
For test builds the commit that updates the submodules can be free-
standing, but for releases we need to push it to the branch and also
sign the tag.
Add optional arguments that are used by the tag-release script in
flatcar-build-scripts.
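The release path, as a minimal sketch (variable names are
assumptions):

    # For releases: sign the tag and push both the submodule-update
    # commit and the tag; test builds skip signing and pushing.
    git tag -s "${tag}" -m "${tag}"
    git push origin "HEAD:refs/heads/${branch}" "refs/tags/${tag}"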
ci-automation builds on the SDK container and simplifies CI automation
build tasks (SDK bootstrap, SDK container, packages, image, VMs).
See ci-automation/README.md for a brief introduction.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>