Otherwise, it was failing because we check for unbound variables:
```
/bin/bash: line 1: PORTAGE_BINHOST: unbound variable
```
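A minimal sketch of the fix, assuming the script runs with `set -u`:
give the variable an explicit (empty) default before it is referenced.
```shell
# Under `set -u` (nounset), referencing an unset PORTAGE_BINHOST aborts
# the script, so assign an empty default first.
: "${PORTAGE_BINHOST:=}"
```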
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
Otherwise, the variable is empty and causes errors later. The default
value is `gs://flatcar-jenkins`, not `GS_DEVEL_ROOT`, because in the
previous behavior `DOWNLOAD_ROOT` was hardcoded with:
```shell
DOWNLOAD_ROOT_SDK=https://storage.googleapis.com/flatcar-jenkins/sdk
```
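A minimal sketch of the defaulting (the exact variable name here is an
assumption for illustration):
```shell
# Fall back to the Jenkins bucket when no value is provided, matching the
# previously hardcoded DOWNLOAD_ROOT rather than GS_DEVEL_ROOT.
: "${DOWNLOAD_ROOT:=gs://flatcar-jenkins}"
```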
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
`$verify_key` actually holds `--verify-key=verify.asc`, so `systemd-nspawn`
fails because it does not expect a `--verify-key` option.
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
The catalyst build uses the current SDK version as its seed, but it will
only reuse the cached tarball if a DIGESTS file exists and is correct.
Prefetch this file to prevent the build from trying to access Google
Storage anonymously.
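A hedged sketch of such a prefetch; the bucket layout and the cache
directory variable are assumptions:
```shell
# Fetch the DIGESTS file with authenticated gsutil and place it next to
# the cached seed tarball, so catalyst can verify the tarball without
# touching Google Storage anonymously. SEED_CACHE_DIR is hypothetical.
gsutil cp \
  "gs://flatcar-jenkins/sdk/amd64/${SDK_VERSION}/flatcar-sdk-amd64-${SDK_VERSION}.tar.bz2.DIGESTS" \
  "${SEED_CACHE_DIR}/"
```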
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
This is because we need to pass Google credentials to update_chroot, and
'cork update' doesn't support that.
Add --sdk-url-path to sdk.sh for the new cork default.
In this commit we make sure to use the GCS bucket for dev container tests
by providing the required credentials and the associated fetch command.
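A hedged sketch of what this looks like; the key-file variable and the
bucket path are assumptions:
```shell
# Activate the service account so the fetch command can read the dev
# container artifacts from GCS. Variable name and path are placeholders.
gcloud auth activate-service-account --key-file="${GOOGLE_APPLICATION_CREDENTIALS}"
gsutil ls "gs://flatcar-jenkins/developer/"
```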
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
The cl.basic and cl.internet tests are different tests, which wasn't
clear before. Also, the grep process returns an exit code of 1 if it
doesn't find a match, causing the job to be canceled. The list of tests
is space-separated and should not be quoted, but on the other hand we do
have to handle a literal *.
Look for the right test, handle the grep exit code, and disable globbing
in the subshell to preserve a literal *.
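A minimal sketch of both fixes, with hypothetical variable names:
```shell
# Putting grep in an `if` condition keeps its exit status 1 (no match)
# from aborting the script under `set -e`. Inside the subshell, `set -f`
# disables globbing so a literal '*' in the unquoted, space-separated
# test list is preserved instead of expanding against files.
if grep -qw 'cl\.internet' <<< "${TESTS}"; then
  (
    set -f
    kola run ${TESTS}
  )
fi
```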
The Linux 5.10 stable kernel introduced a regression that we didn't
catch because we only run kola on one hardware type in Equinix Metal.
Validate that a simple network test works on various instance types of
the current hardware generation.
The bash RETURN trap has some weird semantics that seem to trip us up
after updating bash to 5.1. We tried to use it inside functions to clean
up some stuff after a function returns. This can be emulated with an
EXIT trap within a subshell. Fortunately, none of the users of the
RETURN trap set global variables; modifications of such variables are
local to the subshell and are lost when the subshell exits.
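A minimal sketch of the pattern; the function name and cleanup are
placeholders:
```shell
# Emulate a RETURN trap: the function body runs in a subshell (note the
# parentheses instead of braces), so the EXIT trap fires when the
# function returns. Variable assignments inside are lost with the
# subshell, which is fine since none of the former users set globals.
build_in_tmpdir() (
  tmpdir=$(mktemp -d)
  trap 'rm -rf "${tmpdir}"' EXIT
  # ... do work in "${tmpdir}" ...
)
```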
Included is a Dockerfile that installs the system dependencies of kola in
a debian:11 image. For the test script, the control flow is:
qemu_uefi.sh
qemu_uefi_arm64.sh
  └─ (docker)
      └─ qemu_common.sh
qemu_common.sh uses the 'NATIVE_ARM64' variable passed by the Jenkins job to control the behavior.
The differences are:
* use git directly to fetch (and verify) the manifest
* set up some symlinks so that /var/tmp is on the same BTRFS partition as $PWD/tmp (see the sketch below)
* set up symlinks so that we don't have to fix up the installation of mantle into the chroot
* run things directly instead of in a chroot through cork
The whole script is executed as root, because kola requires root privileges
anyway, and making kvm and sudo work with an arbitrary host user inside the
container would require a custom entrypoint to set up groups.
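A hedged sketch of the symlink setup mentioned above; the exact paths
are assumptions:
```shell
# Make /var/tmp live on the same BTRFS partition as $PWD/tmp so large
# temporary artifacts land on the big volume. Paths are placeholders.
mkdir -p "${PWD}/tmp/var_tmp"
rm -rf /var/tmp
ln -sfn "${PWD}/tmp/var_tmp" /var/tmp
```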
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
This requires passing the --azure-hyper-v-generation=V2 argument to kola.
The VHD image is the same as for Azure Gen1 VMs; the azure_gen2 specifier
is only for Jenkins usage.
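A hedged sketch of the invocation; everything except
--azure-hyper-v-generation=V2 is a placeholder (a real run needs image
and credential flags as well):
```shell
# Run a test against an Azure Gen2 VM; the test name is only an example.
kola run \
  --platform=azure \
  --azure-hyper-v-generation=V2 \
  cl.basic
```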
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The newly enabled update test performs an update from the built image
to itself. This is useful to test that the update mechanism didn't
break, but it doesn't tell us whether the built image will be accepted
as an update from the previous official release.
Introduce an additional kola run that starts from the previous official
release and tests updating to the built image. Since that test does two
updates, it also covers the case of updating from the built image to the
built image. Thus, we can skip the test in the normal run.
This new kola run is done first to keep the qemu-latest symlink valid
for the main test suite.
The kola update tests need a dev-key-signed update payload. This was
lacking and caused the update tests to be skipped.
Generate the test update payload for both dev builds and release builds
and run the kola tests for both. The test update payload has a special
name so it isn't confused with the real update payload for releases, and
we keep the previous behavior of signing releases. Therefore, the
generate_update function wasn't used; instead, the extract_update
function was extended to generate the additional test payload.
The logic of the inline bash scripts of each job was sometimes separated
into the flatcar-scripts/jenkins/*.sh helpers but was mostly part of the
Groovy file. This coupling had its advantages but also downsides when
special cases needed to be added for different release versions. Other
issues were that the inline scripts needed the backslash character to be
escaped twice and that Jenkins was not good at terminating the child
processes when stopping a job. Having inline bash scripts in Groovy also
mandated the use of Jenkins to build and release Flatcar Container
Linux, which hinders test builds on other CI platforms.
Move the inline bash scripts fully to the files in
flatcar-scripts/jenkins/ and create new ones for jobs that didn't have a
script there yet. Also invoke them through a systemd-run wrapper script,
which ensures that all child processes are terminated and also sets up
/opt/bin as an additional path for the static lbzcat binary.
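A hedged sketch of such a wrapper; the exact flag selection is an
assumption:
```shell
# Run the job script in a transient systemd unit: stopping the Jenkins
# job tears down the whole cgroup, killing all child processes. PATH is
# extended so the static lbzcat in /opt/bin is found.
exec systemd-run --wait --pipe --collect \
  --setenv=PATH="/opt/bin:${PATH}" \
  "$@"
```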
A workaround for bash 4 was needed: use a temporary file instead of the
<(cmd) bash feature, which caused a strange syntax error. Otherwise, the
bash commands are moved as they are.
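An illustration of the workaround; the command names are placeholders:
```shell
# bash 4 choked on `consumer <(producer)`, so write the output to a
# temporary file that gets cleaned up on exit instead.
tmpfile=$(mktemp)
trap 'rm -f "${tmpfile}"' EXIT
producer > "${tmpfile}"
consumer "${tmpfile}"
```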
Setting the invalid CCACHE_ variables resulted in strange failures in
projects depending on newer meson versions like 0.55.3. For example, the
systemd build fails with errors like the following:
```
* ACCESS DENIED: utimes: /mnt/host/source/ccache
* ACCESS DENIED: utimes: /mnt/host/source/ccache
F: utimes
S: deny
P: /mnt/host/source/ccache
A: /mnt/host/source/ccache
R: /mnt/host/source/ccache
C: ccache cc /build/amd64-usr/var/tmp/portage/sys-apps/systemd-246/work/systemd-246-abi_x86_64.amd64/meson-private/sanitycheckc.c -o /build/amd64-usr/var/tmp/portage/sys-apps/systemd-246/work/systemd-246-abi_x86_64.amd64/meson-private/sanitycheckc.exe -O1 -pipe -pipe -D_FILE_OFFSET_BITS=64
```
We should not set up ccache at all, as it has already been disabled in
the coreos-overlay repo.
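A minimal sketch of the cleanup:
```shell
# Unset every variable whose name starts with CCACHE_ so nothing in the
# build environment routes compilations through ccache.
unset "${!CCACHE_@}"
```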
Before, we were relying on the toolchains job to build and upload
packages that were part of the SDK. With this change, all packages that
should be part of the SDK are built and uploaded by the SDK job. The
toolchains job only builds toolchain packages specific to the release.
This change includes several adjustments done to both the SDK and the
toolchains jobs to make this work:
* Make the SDK job build all cross toolchains, including Rust
* Stop building Rust in the toolchains job and use the one in the SDK
instead.
* In toolchain_util.sh: detect when the symlink folder for crossdev
  packages is missing and run crossdev to create it during the
  update_chroot setup (see the sketch after this list).
* Make it possible to build the SDK starting from stage 4 instead of
stage 1, to make the SDK building faster for PR branches / nightlies
(full build should still be done for releases / weeklies).
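A hedged sketch of the crossdev detection mentioned above; the overlay
path and the target are assumptions:
```shell
# If the crossdev overlay that holds the package symlinks is missing,
# let crossdev recreate it without building anything. Path and target
# are placeholders for illustration.
if [[ ! -d "${CROSSDEV_OVERLAY:-/usr/local/portage/crossdev}" ]]; then
  crossdev --init-target -t aarch64-cros-linux-gnu
fi
```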
The dev build SDKs are not in $FLATCAR_DEV_BUILDS/sdk but are published
under $FLATCAR_DEV_BUILDS/developer/sdk.
Add an environment variable to specify where the SDK is to be found
but default to $FLATCAR_DEV_BUILDS/sdk if it is not specified.
From Jenkins this variable is exported as DOWNLOAD_ROOT_SDK.
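A minimal sketch of the defaulting:
```shell
# Use the Jenkins-provided location if set, otherwise the standard path.
DOWNLOAD_ROOT_SDK="${DOWNLOAD_ROOT_SDK:-${FLATCAR_DEV_BUILDS}/sdk}"
```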
When an ebuild file changed but neither its version nor its cros-workon
commit did, the binary package would be used. Also, some dependencies
are not encoded in a way that triggers a rebuild of depending packages.
Allow excluding some binary packages so that we can be sure that they
are rebuilt.
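A hedged sketch using portage's exclusion flag; the package atom is a
placeholder:
```shell
# Force the excluded atom to be rebuilt from source even though a binary
# package exists, while still using binary packages for everything else.
emerge --getbinpkg --usepkg \
  --usepkg-exclude "coreos-base/misc-files" \
  --update --deep @world
```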
More space could be saved by removing things that get overwritten on
the next job run, but they are still used after this script runs (e.g.,
for fingerprinting). Drop the cleanup from these scripts and move it
all to the post-build pipeline stage.
We are going to restore the split-script setup from the old Jenkins
server. This ensures that each version's release process is actually
running with scripts from the correct release branch. It also allows
branching the VM format lists.
Note that the scripts added here only cover the currently active
jobs in the main build pipeline. There is no reason to add other
jobs, since they are mostly just running a single command using a
mantle binary from its master branch.
The scripts in this repository pick up after Jenkins has set up an
environment with all parameters and credentials defined, and after an
SDK has been prepared and validated.