When setting up portage configuration, we set up a symlink in
${ROOT}/etc/portage/patches that points to the coreos/user-patches
directory inside the coreos-overlay. That way, we can add our custom
patches to coreos-overlay without having to move packages from
portage-stable to coreos-overlay just to apply an extra patch.
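A minimal sketch of the link, assuming the usual SDK checkout path for
the coreos-overlay:
```
# Illustrative only: point the board's patches directory at user-patches.
ln -sfT /mnt/host/source/src/third_party/coreos-overlay/coreos/user-patches \
    "${ROOT}/etc/portage/patches"
```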
Flatcar has net-fs/samba 4.15.4-r3, greater than 4.13.8, so it is
not necessary to keep GLSA 202105-22 in GLSA_ALLOWLIST.
Allow 202209-12 for now, as the update to GRUB 2.06 is still in progress.
We need to fix another symlink created by gcc-config, so extend the
function that was already doing this for some other links and rename
it, since it's no longer only about liblto.
Some packages are currently missing from the /usr/share/SLSA directory
compared to flatcar_production_image_packages.txt. For torcx packages,
extract the reports from the torcx bundle when adding it to the rootfs.
For initramfs packages, as a substitute we enumerate build dependencies
of coreos-kernel (image_packages_implicit()). At this time these are
bootengine and intel-microcode.
Prod images need libstdc++.so and other libraries produced by the
sys-devel/gcc build, but because we don't want all of gcc in the image,
the binpkg is manually unpacked instead of installed with emerge. Make
sure to preserve SLSA metadata when unpacking as well.
`./setup_board --nousepkg --nogetbinpkg` currently fails with a
circular dependency due to pulling in the whole systemd-cryptsetup-udev
dependency chain. This is due to several issues:
* `emerge --root=$ROOT --emptytree` considers ROOT=/ to also be empty,
so it pulls in all host packages. This must not always have been the
case. So we need to pipe the dependency package list through
`egrep $ROOT` to keep only the packages that would get installed into
the desired ROOT
* if SYSROOT=/ and not SYSROOT=ROOT, then virtual/os-headers is missing
from $ROOT package list
* the final filter expression previously looked like this:
(=sys-devel/gcc|sys-devel/binutils-0.9), which also matches
sys-devel/gcc-config and sys-devel/binutils-config, which are
necessary dependencies. Rework the match expression to not filter
those out (see the sketch after this list).
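A minimal sketch of the resulting filter pipeline; the SYSROOT handling,
the ${BOARD_PKGS[@]} placeholder and the exact regular expressions are
illustrative, not the literal script code:
```
# Illustrative only: list the dependency tree, keep packages installed
# into ${ROOT}, and drop gcc/binutils without dropping the -config packages.
SYSROOT="${ROOT}" emerge --root="${ROOT}" --emptytree --pretend --quiet "${BOARD_PKGS[@]}" \
    | egrep "${ROOT}" \
    | egrep -v '(sys-devel/gcc-[0-9]|sys-devel/binutils-[0-9])'
```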
This made no difference back when lib was a symlink to lib64, but now
that they are separate, the libraries belong in /usr/lib64. This mostly
doesn't show up because ldconfig configures the ld.so cache to include
both locations, but when updating from an older release ld.so.cache is
out of date. Unfortunately ld.so.cache does not get updated until after
multipathd starts, which causes multipathd to dump core. This may also
affect other packages that need access to libgcc early.
See also: https://github.com/flatcar-linux/Flatcar/issues/809
Since v0.51.0 syft supports parsing the Gentoo package database. This
is a first go at integrating that into our image build
process. This doesn't yet include packages inside torcx packages, or the
kernel, or initramfs-only packages.
There is some cruft left over after GRUB hash generation. After the
contents are zipped into an archive, they don't need to be around any
more.
Try to remove the rootfs directory after unmounting the
image. disk_util can recreate it again if there is a need for it.
Remove the build directory used for generating ACI images - it's not
needed after successful installation.
The new build pipeline already compresses images but uploaded both the
compressed and uncompressed files because the whole build folder gets
uploaded.
Add a new flag "--only_store_compressed" to the image generation which
deletes the uncompressed file after compression is done. Uncompressed
images are still supported if specified in the flag
"image_compression_formats".
Closes https://github.com/flatcar-linux/Flatcar/issues/793
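An illustrative invocation combining the flags described above (script
name and defaults assumed, not necessarily the exact CLI):
```
./image_to_vm.sh --format qemu --board=amd64-usr \
    --image_compression_formats=gz --only_store_compressed
```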
Make qemu_template.sh work on macOS.
Line 14: the nproc command is only available on systems with GNU coreutils installed. The getconf _NPROCESSORS_ONLN alternative works on a wider range of UNIX systems.
Line 114: the mktemp syntax used only works with the GNU implementation.
Line 159: added hvf (macOS) and tcg (no acceleration) options as fallbacks. With this, qemu-system-x86_64 will try to use kvm, fall back to hvf when that fails, and switch to the tcg accelerator when hvf also fails.
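A rough sketch of the portability tweaks, assuming the accelerator
fallback is passed through -machine (the real script may wire things up
differently):
```
ncpus="$(getconf _NPROCESSORS_ONLN)"     # portable replacement for nproc
tmpdir="$(mktemp -d /tmp/qemu-XXXXXX)"   # mktemp form that works beyond GNU
qemu-system-x86_64 -machine q35,accel=kvm:hvf:tcg -smp "${ncpus}" -m 1024 -nographic
```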
This was already being done for gcc but not for binutils. Binutils is
also slotted and when we run the sdk (stage3/stage4) job in the CI, the
seed sdk already contains crossdev packages that we may want to update.
The os-release file was not only accessible through /usr/lib/ but
also through /usr/lib64 because "lib" was just a symlink.
Now that we split them up into two directories, add a compatibility
symlink in case /usr/lib64 was used to access os-release. A check
is added so that this also works without the split, which is useful if
the split is not done for the SDK at the same time.
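A minimal sketch of what such a compatibility link could look like
(paths assumed from the description above):
```
# Only create the link if /usr/lib64/os-release does not already resolve,
# i.e. when lib and lib64 are actually separate directories.
if [[ ! -e "${ROOT}/usr/lib64/os-release" ]]; then
    ln -s ../lib/os-release "${ROOT}/usr/lib64/os-release"
fi
```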
The standard location is /usr/lib/modules but on Flatcar "lib" was a
symlink to "lib64". Now this is going to be split up into separate
directories but with compatibility symlinks.
Add the new location to the ignore list.
Sysext images have a compatibility matching mechanism that searches for
the matching OS version or custom sysext level setting. On Flatcar
there is just the OS version set in /etc/os-release until now which
means that sysext images can't easily be used together with autoupdates
that change the OS version.
Define a sysext level for Flatcar so that users can refer to it instead
of the OS version when they have images that don't rely on a particular
Flatcar version.
Here is an example of the now possible metadata:
/etc/extensions/NAME/usr/lib64/extension-release.d/extension-release.NAME
ID=flatcar
SYSEXT_LEVEL=1.0
and a symlink /etc/extensions/NAME/usr/lib → /etc/extensions/NAME/usr/lib64
to work around the problem that using lib/ as path destroys Flatcar's
lib → lib64 symlink.
In the future the matching logic will hopefully become more flexible,
because right now it is just a string comparison. Also, the architecture
is not matched for now - we should work with upstream to improve this.
Closes: https://github.com/flatcar-linux/Flatcar/issues/643
Rename sztd to zst and amend the changelog. The zstd binary generates a
compressed file with the .zst extension by default.
Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>
This change adds a new flag called --image_compression_format which
allows us to output the final VM image as one of the supported formats:
bz2 (default), gz, zip or none
If the compression format is "none" or "", the image will not be compressed.
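For example, a hypothetical invocation using the new flag:
```
./image_to_vm.sh --format gce --board=amd64-usr --image_compression_format=gz
```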
Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>
Disable ebuild-locks for the emerge command that creates the image.
Ebuild-locks protect unsandboxed ebuild phases from running
concurrently, but also slow things down greatly when a lot of
concurrency would otherwise be possible. The image build phase merges a
large number of binary packages, and I am not aware of us having any
phases that risk concurrently modifying shared files.
I have been testing this for the last months and have not seen any
failures. The time savings are significant: this cuts image build time
from 20m to 10m for me.
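A minimal sketch, assuming the locks are disabled through portage's
FEATURES mechanism; the emerge arguments and package atom are
illustrative:
```
# Disable ebuild-locks only for the image-creating emerge invocation.
FEATURES="-ebuild-locks" emerge --root="${ROOT}" --usepkgonly --jobs="${NUM_JOBS}" \
    coreos-base/coreos
```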
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
Package users nowadays get created through systemd-sysusers files.
Gentoo uses the acct-user/acct-group packages to allocate stable IDs
for these users. Since they get created at runtime, we have the problem
that they end up in /etc/passwd at boot time. That would be fine if
they followed the acct-user allocations, but a package could also use
its own sysusers files, leading to dynamic ID allocation which we can't
control and may result in ugly user ID mismatches that are hard to
resolve again. Normally we intend to ship all system users under
/usr/share/baselayout/passwd so that /etc/passwd is really left to the
user's own entries.
Generate the /etc/passwd sysuser entries at image build time and move
these entries over to /usr/share/baselayout/passwd so that all
system users reside in this database. We should still ensure that we
have acct-user packages for all system users, or at least hardcoded
user IDs; therefore, add a check for that.
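A rough sketch of the image-build step; systemd-sysusers' --root option
is real, but the grep-based merge is only an illustration of moving the
entries over:
```
# Generate the system users at build time instead of first boot.
systemd-sysusers --root="${ROOT}"
# Append entries that are not yet present in the baselayout database:
grep -vxF -f "${ROOT}/usr/share/baselayout/passwd" "${ROOT}/etc/passwd" \
    >> "${ROOT}/usr/share/baselayout/passwd"
```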
201908-24: polkit 0.120-r2, so not affected
201909-01: perl 5.34.0, so not affected
202003-26: python 3.9.8, so not affected
202005-09: python 3.9.8, so not affected
202006-03: perl 5.34.0, so not affected
202008-01: python 3.9.8, so not affected
202101-18: python 3.9.8, so not affected
202104-04: python 3.9.8, so not affected
202105-34: bash 5.1_p8, so not affected
202107-31: polkit 0.120-r2, so not affected
202107-48: systemd 250.3, so not affected
Azure requires disks to be fixed-size VHD files when uploading to blob storage
in order to create image/gallery objects from them. This is documented here[1].
To prevent mistakes from happening, create disks in that format directly so that
any Azure-compatible tool can upload them, though azcopy is recommended because
it handles their sparseness best.
This has not been an issue for us so far because kola uses code from an older
utility that transparently handled the dynamic-to-fixed-size conversion for VHD
files (azure-vhd-utils). But people working with these things for the first
time fall into this trap.
[1]: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/create-upload-generic#resizing-vhds.
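A hedged sketch of producing a fixed-size VHD directly with qemu-img;
these are standard qemu-img options, not necessarily what the build
scripts use, and the file names are illustrative:
```
# Convert a raw image to a fixed-size VHD (vpc format) suitable for Azure.
qemu-img convert -f raw -O vpc -o subformat=fixed,force_size \
    flatcar_production_azure_image.bin flatcar_production_azure_image.vhd
```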
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
source_on_disk() so far relied on the 'sourcePackage' field, which contains the
primary dependency of a torcx package (app-torcx/docker ->
app-emulation/docker). Now the 'metaPackage' field (app-torcx/docker) is used,
which lets us look at RDEPEND and figure out all packages that are indirectly
installed when installing a torcx package. torcx_dependencies() does just that,
so move its definition to torcx_manifest.sh.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The torcx_manifest.json file currently has a 'sourcePackage' field which is
extracted from the first runtime dependency of the torcx package ebuild. This
is a convention, and causes sourcePackage to hold 'app-emulation/docker' for
the 'app-torcx/docker' package. This does not carry enough information to be
able to figure out what other packages are part of the torcx package.
Store an additional field, 'metaPackage', in the manifest which contains the
name of the torcx package. With the right ebuild it is then possible to figure
out what other packages are part of a given torcx package. This can then be
used to add that information to the image packages list.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The default image group is already encoded in
/usr/share/flatcar/update.conf but it was written to
/etc/flatcar/update.conf as well. This can cause problems when the user
switches channels by forcing an update to a specific release from the
different channel (e.g., through the flatcar-update tool) as it leaves
the file under /etc/flatcar/update.conf out of sync with the new
channel version in /usr/share/flatcar/update.conf.
Since we don't really need to write a specific channel to /etc on new
images as we can rely on the value from /usr, we now leave any possible
overwriting of the value in /etc entirely to the user.
Since the update of dev-python/certifi, running the command
`./image_to_vm.sh --format gce --board=amd64-usr` fails due to a
dangling symlink. This symlink is located in
/usr/lib64/python3.9/site-packages and is not supposed to be installed
in the first place because of this INSTALL_MASK entry in
coreos-overlay/profiles/coreos/targets/generic/oem-aci/make.defaults:
INSTALL_MASK="${INSTALL_MASK}
/usr/*/python3*
"
There is an open upstream bug that INSTALL_MASK doesn't work correctly on
symlinks (https://bugs.gentoo.org/678462).
The best we can do at this time is to ignore the dangling symlink.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
This change adds a job to the CI automation for publishing binary
packages to the build cache server.
Also, setup_board is updated to use the buildcache package cache if a
nightly build version is detected.
Signed-off-by: flatcar-ci <infra+ci@flatcar-linux.org>
Stop relying on GitHub redirects; they are a mixed blessing, and using
them broke emerge-gitclone inside the dev-container in a silent way. The
script could not find the desired revision of portage-stable or
coreos-overlay, because it tried to pull from the kinvolk instead of the
flatcar-linux GitHub org. The redirects seem to hinder fetching a
specific commit, so the script pulled something else (HEAD or main?).
Flatcar is in the NIST CPE dictionary. Let's programmatically build the
`CPE_NAME` in the build process so that images can be scanned.
`CPE_NAME` is part of `/etc/os-release` with the following manual entry:
```
CPE_NAME=
A CPE name for the operating system, in URI binding syntax, following the Common Platform Enumeration Specification[2] as proposed by the NIST.
This field is optional. Example: "CPE_NAME="cpe:/o:fedoraproject:fedora:17""
...
[^2]: Common Platform Enumeration Specification
http://scap.nist.gov/specifications/cpe/
```
This indicates that the current version of CPE is 2.3.
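An illustrative way such an entry could be assembled at build time; the
vendor/product strings and the ${FLATCAR_VERSION} variable are
assumptions, not necessarily what the build scripts emit:
```
# Hypothetical: append a CPE 2.3 name for the current build to os-release.
echo "CPE_NAME=\"cpe:2.3:o:flatcar-linux:flatcar_linux:${FLATCAR_VERSION}:*:*:*:*:*:*:*\"" \
    >> "${ROOT}/usr/lib/os-release"
```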
Closes: https://github.com/flatcar-linux/Flatcar/issues/536
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
This updates the default settings in build scripts to use
https://mirror.release.flatcar-linux.net/
instead of the google storage bucket if no binhost or FLATCAR_DEV_BUILDS
is specified.
Defaults are updated for
* update_chroot (runs at SDK initialisation time)
* setup_board (creates /boards/[ARCH]/ chroots)
* the development container
* set_version
The documentation says it always returns zero, which is not true -
portageq could return a non-zero return value and that would be the
return value of the function. Fix the function to actually follow the
documentation - apparently the function should just return an empty
string in case of failure (like package not found).
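A minimal sketch of the documented behavior; the wrapper name and the
portageq sub-command are illustrative:
```
# Always exit zero; print the version or an empty string on failure.
get_installed_version() {
    local ver
    ver="$(portageq best_version "${ROOT:-/}" "$1" 2>/dev/null)" || ver=""
    echo "${ver}"
}
```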
It has some weird semantics that seem to trip us up after updating
bash to 5.1. We tried to use it inside functions to clean up some
stuff after a function returns. This can be emulated with an EXIT trap
within a subshell. Fortunately all the users of the RETURN trap were
not setting any global variables - modifications of such variables are
local to the subshell and are lost when the subshell exits.
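A sketch of the replacement pattern, with illustrative names: the
function body runs in a subshell and cleanup hangs off an EXIT trap.
```
process_in_tmpdir() (
    local tmpdir
    tmpdir="$(mktemp -d)"
    # Cleanup fires when the subshell exits, emulating the old RETURN trap.
    trap 'rm -rf "${tmpdir}"' EXIT
    touch "${tmpdir}/scratch"   # any work that needs the temporary directory
)
```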
Now that GLSA metadata was updated as of 2021-09-03, we need to
add the following entries to the GLSA allow list, to avoid build
failures caused by `glsa-check -t all`.
202006-03: perl 5.26.2, only SDK, allowlist
202008-01: python 2.7.15 & 3.6.5, only SDK, allowlist
202101-18: python 2.7.15 & 3.6.5, only SDK, allowlist
202104-04: python 2.7.15 & 3.6.5, only SDK, allowlist
202105-22: samba 4.12.9, not affected, samba has no ldap flag, no smbd.
202105-34: bash 4.3, non-trivial to update
202107-31: polkit 0.113, in-progress
202107-48: systemd 247.9, backported the fixes to v247.9.
201904-13: git 2.26.3, so not affected
201909-08: dbus 1.12.20, so not affected
201911-01: openssh 8.6, so not affected
202003-12: sudo 1.9.5, so not affected
202003-20: systemd 246+, so not affected
202003-24: file 5.39, so not affected
202003-30: git 2.26.3, so not affected
202003-31: gdb 9.2, so not affected
202003-52: samba 4.12.9, so not affected
202004-10: openssl 1.1.1l, so not affected
202004-13: git 2.26.3, so not affected
202005-02: qemu 5.2, so not affected
disk_util sometimes bails out during build with an ASCII conversion
error:
Traceback (most recent call last):
File "/mnt/host/source/src/scripts/build_library/disk_util", line 1114, in <module>
main(sys.argv)
File "/mnt/host/source/src/scripts/build_library/disk_util", line 1110, in main
options.func(options)
File "/mnt/host/source/src/scripts/build_library/disk_util", line 779, in Verity
Tune2fsReadWrite(options, part, disable_rw=True)
File "/mnt/host/source/src/scripts/build_library/disk_util", line 716, in Tune2fsReadWrite
image.write(chr(flag_value))
UnicodeEncodeError: 'ascii' codec can't encode character '\xff' in position 0: ordinal not in range(128)
Curiously, the error does not reproduce every time (though the code
leading to the error is straightforward).
This change converts the integer to be written to a byte array (of size
1) instead of using chr(). Also, the file to be written is explicitly
opened in binary mode.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
This file used to be imported by scripts from coreos-base/cros-devutils,
which we have dropped. Now it is imported only from some other script
in the build library, so move it there. This leaves lib as a directory
where we keep the shflags library.
This is some fallout from converting scripts from python2 to
python3. Output received from the functions in the subprocess module is
now returned as byte arrays, but we operate on it as if it were text. So
decode the byte arrays to strings. Otherwise we either get some
junk values passed to the command line utilities (for example:
`b'/dev/loop2'` instead of `/dev/loop2`), or exceptions are thrown,
because a function expected a string.
This is just a conversion done by 2to3 with manual updates of
shebangs to mention python3 explicitly. The fixups for byte array vs
string issues will follow.
To be able to support arm64 native SDK without cross builds, we should
make generate_au_zip support both architectures, amd64 and arm64.
Without doing that, `build_image` fails with `ERROR : Required
WHITE_LIST items ld-linux-x86-64.so.2 not found!!!`, because the
script recognizes only amd64 libs in WHITE_LIST.
We should first determine the architecture in build_image, before
running generate_au_zip, and pass the architecture, either amd64 or
arm64. Also add allow_list and ld_linux parameters to necessary
functions.
The vmlinuz kernel image gets installed to /usr/boot/ but isn't usable
for dm-verity until it gets copied over to /boot/flatcar/ and the hash
gets embedded at a particular offset. The file in /usr/boot/ uses space
while serving no real purpose as long as dm-verity is used.
Delete the vmlinuz file under /usr/boot/ to free up space. When
generating the ISO image we use the vmlinuz file from /boot/flatcar/
which also has the advantage that we only distribute a single vmlinuz
file with one particular checksum.
For consistency with code further down in the file: aarch64 cross
compilation only applies when CBUILD is x86; for native aarch64 builds
Rust is guaranteed to have aarch64 rustlibs.
The defaults already give more space than the ext4 defaults but it's
recommended to use the mixed mode for filesystems smaller than 1-5 GB.
Another aspect is the duplication of metadata: while it is currently
off, it is actually tied to the underlying block device and could
change as soon as the block device type changes.
Select the mixed mode that uses a merged area for data and metadata
blocks. Also ensure that no metadata duplication gets enabled
automatically.
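A hedged sketch of the corresponding mkfs invocation; the device path is
illustrative and the real disk_util wiring may differ:
```
# Mixed data/metadata area, single (non-duplicated) profiles for both.
mkfs.btrfs --mixed --metadata single --data single /dev/loop0p6
```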
The compression feature of btrfs allows us to store more in the
size-limited /usr and OEM partitions. The size should of course still
be monitored to not bloat the image but more headroom helps to try
things out quickly without hitting the hard limit which fails the
build.
Use btrfs for the OEM partition but with zlib compression because
the outdated GRUB version doesn't support zstd yet.
New subvolumes currently can't be used for the OEM partition as default
subvolumes because GRUB tries to read the grub.cfg from the top
subvolume (at least with our old version). (We could however use
subvolumes for the /usr partition when switching to btrfs if that
makes any sense.)
The limited /usr and OEM partition size is a challenge when adding new
packages or updating a package. Since the disk layout can't be changed
for compatibility reasons when updating an existing instance, we can't
simply try out something without ensuring first that enough space is
there by removing something else. This situation can be relaxed by
leveraging btrfs compression. There was some support for btrfs but it
was a bit outdated and didn't allow configuring compression or setting
read-only flags.
Fix the btrfs support, allow marking the default subvolume as
read-only, and add a compression variable that allows selecting a
compression algorithm. Instead of enabling compression by setting the
mount option, we can set the filesystem attribute, which has the
benefit that compression is still used with the default mount options
for this (top) directory and its contents. For the ext2 /usr partition
a hack existed to force read-only mode by modifying some bytes, and
checking these bytes could also have been used to know whether
read-only mounting is needed to prevent corruption of dm-verity data;
instead, we check directly whether dm-verity is active for this
partition and mount it read-only (and with the norecovery option to
really prevent any write attempt).
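A minimal sketch of setting the compression attribute on the mounted
(top) directory, as described above; the mount point is illustrative:
```
# Enable zlib compression as a filesystem property instead of a mount option.
btrfs property set /mnt/oem compression zlib
```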
The rust ebuild has some magic to detect cross-toolchains present on the
system and enable building additional cross targets. The code to trigger
the rebuild of rust is part of install_cross_rust, and checks whether
the cross directories exist in the rust installation. If they don't,
then rust is removed and rebuilt to allow for the auto-detection to
happen.
Right now there are two issues with the code. Firstly, the path that is
checked is wrong, which leads to rust always being removed and rebuilt.
The path checked is /usr/lib/rust-*/rustlib but /usr/lib/rustlib is
where the files are installed.
The second issue is that it checks for aarch64 dirs when CHOST is
aarch64-cros-linux-gnu. However, on an aarch64 host the aarch64 dirs
will already exist from building the sdk itself. The rust ebuild is not
ready to handle aarch64 hosts yet and blows up. The correct behavior is
to combine the check for CHOST with a check for the right CBUILD.
On an aarch64 host we should presumably check for the x86 CHOST and rust
dirs, but that can be added later, because it needs more work.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The arm64 profiles don't specify SYMLINK_LIB=yes, which makes sense
since arm64 systems don't support multilib in the way that we are used
to from x86. What this means is that build artifacts are installed into
separate lib and lib64 directories. The root overlay installed in stage4
needs to check for SYMLINK_LIB before trying to create a symlink,
otherwise it fails to be applied because it collides with the directory
in the rootfs.
This uncovered a second minor issue - the rust toolchain bootstrap
scripts checked for /usr/lib64/rust*, but the ebuild installs to
/usr/lib/rust.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The previously used uuid 4f68bce3-e8cd-4db1-96e7-fbcaf984b709 is the
type GUID for x86_64 root partitions, which resulted in the dev
container not working with systemd-nspawn on aarch64. systemd-nspawn
fails with:
No suitable root partition found in image
Change the partition uuid to the architecture agnostic one documented
in the man page:
A GUID partition table (GPT) with a single partition of type 0fc63daf-8483-4772-8e79-3d69d8477de4.
This makes systemd-nspawn happy on aarch64.
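An illustrative sgdisk call that sets this generic type on the container
image; the image name and partition number are assumptions:
```
# Set partition 1 to the generic Linux filesystem partition type.
sgdisk --typecode=1:0fc63daf-8483-4772-8e79-3d69d8477de4 flatcar_developer_container.bin
```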
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The kola update tests need a dev-key-signed update payload. This was
lacking and caused the update tests to be skipped.
Generate the test update payload for both dev builds and release builds
and run the kola tests for both. The test update payload has a special
name to not confuse it with the real update payload for releases, and
we keep the previous behavior of signing releases. Therefore, the
generate_update function wasn't used; instead, the extract_update
function was extended to generate the additional test payload.
This change removes 8-year-old code from the toolchains build which
tried to update SDK libraries for unknown reasons, breaking the
toolchains build in the glibc-2.33 update.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
This change to stage 4 of the SDK bootstrap process will keep a
snapshot of coreos-overlay in the SDK tarball. This snapshot can be
used in future SDK bootstraps' stage1 to ensure a clean stage 1 output
without any package updates.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
This change updates the stage1 SDK bootstrap build to use local
("known good") package ebuilds only, preventing updated package ebuilds
from applying in stage 1. This fixes SDK build breakage we observed
when upgrading core libraries like readline.
The change also removes the seed update from stage 1, as it should not
be needed anymore now that we postpone any package updates to stage 2.
The following package ebuild repos are used for stage 1:
- for portage-stable, we simply copy /var/gentoo/repos/gentoo
from the SDK root.
- coreos-overlay is more complicated since ebuilds are missing from
the SDK. So we grok the version the SDK was built with from
/mnt/host/source/.repo/manifests/default.xml
and then we create a local stage 1 clone of
https://github.com/kinvolk/coreos-overlay.git
in which we then check out the revision noted in the default manifest.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
The `--jobs` parameter that some scripts defined was not used anywhere
in jenkins or mantle. So the value of the parameter always ended up
being equal to `${NUM_JOBS}` set by `common.sh`. Also, even if the
`--jobs` parameter was used for some script, that script usually
didn't forward the jobs value to other scripts, so the other scripts
ended up using `${NUM_JOBS}` again. Also, the `${FLAGS_jobs}` variable
was used by some functions in the build library, and those functions
were sometimes invoked by scripts that didn't define the
`${FLAGS_jobs}` variable. It is tedious to track which script should
actually define the parameter, and where it should be forwarded.
Just get rid of this half-working pretense. If you want to affect how
many jobs `emerge` uses, export the `NUM_JOBS` environment variable
before calling any script.
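For example (script name and board are illustrative):
```
NUM_JOBS=16 ./build_packages --board=amd64-usr
```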
For `EMERGE_FLAGS` and `REBUILD_FLAGS` we unconditionally set the
`--jobs` flag to `${NUM_JOBS}` because they are passed to
`emerge`. On the other hand we drop the `--jobs` parameter from the
`UPDATE_ARGS` variable, because this variable is passed to `setup_board`
or `update_chroot`, which don't have this flag any more.
The script needs to be ported, because it is importing portage code
which became python3 only.
The porting I did is likely a lousy job, but at least it stopped
failing with some p(yt)hony errors.
I have no idea how this thing worked before - the repos were never in
/usr/portage nor in /usr/local/portage… But the newer version of
portage seems to be pretty picky about the validity of the repo
locations, so fix them.
When a license file is newly added, portage may not yet have it in the
shared folder and the license inclusion step fails.
Fall back to the source repositories and look for the license file
there, too. Print a warning if not found instead of failing to build.
The build_image script invokes the create_dev_container function and
passes `FLAGS_group` as a parameter. Use that parameter to generate the
binhost URL instead of using DEFAULT_GROUP, which always stays
"developer".
Fixes: kinvolk/Flatcar#298
Signed-off-by: Sayan Chowdhury <sayan@kinvolk.io>
The kernel source tree started to have a broken link
`tools/testing/selftests/powerpc/copyloops/memcpy_mcsafe_64.S`,
especially in the case of kernel 5.8.18, like:
```
broken link: /usr/src/linux-5.8.18-coreos/tools/testing/selftests/powerpc/copyloops/memcpy_mcsafe_64.S
ERROR build_packages: test_image_content: Failed symlink check
```
Ignore the symlink when checking broken symlinks.
Setting the invalid CCACHE_ variables resulted in strange failures
in projects depending on newer versions of meson, like 0.55.3. For
example, the systemd build fails with errors like the following:
```
* ACCESS DENIED: utimes: /mnt/host/source/ccache
* ACCESS DENIED: utimes: /mnt/host/source/ccache
F: utimes
S: deny
P: /mnt/host/source/ccache
A: /mnt/host/source/ccache
R: /mnt/host/source/ccache
C: ccache cc /build/amd64-usr/var/tmp/portage/sys-apps/systemd-246/work/systemd-246-abi_x86_64.amd64/meson-private/sanitycheckc.c -o /build/amd64-usr/var/tmp/portage/sys-apps/systemd-246/work/systemd-246-abi_x86_64.amd64/meson-private/sanitycheckc.exe -O1 -pipe -pipe -D_FILE_OFFSET_BITS=64
```
We should not set up ccache at all, as it has already been disabled in
the coreos-overlay repo.
The SDK now includes a Rust version with the aarch64 cross-compilation
libraries and the toolchain job doesn't build it anymore. Yet it was
still recompiled because the path had changed.
Remove the adjustment of the download URL and any automatic building
of Rust. Just issue a warning so that any problem can be spotted easily.
This change does not affect the SDK bootstrapping (full or just stage4)
but affects ./build_packages and the toolchains job. For the toolchains
job the crossdev setup is missing anyway, so rebuilding wouldn't help,
only downloading would; yet since in stage4 there are no binary package
URLs at all, it's best to remove this step, and if it is needed later,
the warning will help.
The default update seed command only specifies gcc, which leads to
an error because »The short ebuild name "gcc" is ambiguous«.
Choose the standard package name instead of the cross compiler packages
which are only known to emerge because we build them as part of an SDK
release now.
The VM hardware and OS type versions were outdated and resulted in
features not being available by default.
Choose a newer ESXi host version (requires 6.5) and set the guest
OS type to Linux 3.x 64 bit.
Before, we were relying on the toolchains job to build and upload
packages that were part of the SDK. With this change, all packages that
should be part of the SDK are built and uploaded by the SDK job. The
toolchains job only builds toolchain packages specific for the release.
This change includes several adjustments done to both the SDK and the
toolchains jobs to make this work:
* Make the SDK job build all cross toolchains, including Rust
* Stop building Rust in the toolchains job and use the one in the SDK
instead.
* In toolchain_util.sh: detect when the symlink folder for crossdev
packages is missing and run crossdev to create it during
update_chroot setup.
* Make it possible to build the SDK starting from stage 4 instead of
stage 1, to make the SDK building faster for PR branches / nightlies
(full build should still be done for releases / weeklies).
For some unicode characters in ca-certificates file names, "rev"
complains about an "invalid or incomplete multibyte or wide character"
and gives no output.
Filter out any unexpected characters for "rev" and replace them with "?"
so that "ls some?name" will still resolve the original name.
Toolchain utils have installed only `dev-lang/rust`. This could result
in a version mismatch between `virtual/rust` and `dev-lang/rust`, because
`dev-lang/rust` does not automatically pull in `virtual/rust`.
So install `virtual/rust` instead of `dev-lang/rust`.
Install `virtual/rust` to avoid version conflicts that happen in case of
rust versions in the SDK being different from those in the new ebuilds.
`/usr/share/catalyst/targets/stage1/stage1-chroot.sh` installs gcc and
its dependencies, including `dev-lang/rust`, while `virtual/rust` does
not get updated. That results in version conflicts between
`virtual/rust` and `dev-lang/rust`. To avoid such an issue, we should
update also `virtual/rust` when building stage1. Since `virtual/rust`
automatically pulls in `dev-lang/rust`, we do not need to explicitly
specify `dev-lang/rust` here.
The license JSON file only included the package names but not
any other metadata. Also, since the file was not on the image itself,
it had to be downloaded.
Add more metadata to the license JSON and store it on the image.
systemd and sudo are already fixed. Git was fixed by updating to 2.23.2,
not 2.24.1. Samba is 2 years old and customized, thus difficult to update.
file, Python, and gdb are only in the SDK.
If a user or old software created the flag file at the old CoreOS location,
nothing would happen.
Check the old location, too, so that Ignition is rerun.
The dev build SDKs are not in $FLATCAR_DEV_BUILDS/sdk but published under
$FLATCAR_DEV_BUILDS/developer/sdk.
Add an environment variable to specify where the SDK is to be found
but default to $FLATCAR_DEV_BUILDS/sdk if it is not specified.
From Jenkins this variable is exported as DOWNLOAD_ROOT_SDK.
Two Flatcar versions were used in /etc/portage/make.conf both in the SDK
and in the boards.
Use only a single version by default to get the expected results and not
something else when using binary packages.
The Rust crossdev package was never uploaded to /sdk/ and always
had to be compiled again.
Upload it in a separate toolchain-arm64 directory because /Packages in /crossdev/
doesn't refer to the Rust package and its use flags.
The BINHOST was still configured to be the CoreOS CL upstream location
which does not work for independent Flatcar CL releases. This broke
binary package installation in the development container.
Use the correct BINHOST to fix installation of binary packages in the
development container.
The configuration variables for the Ignition configuration also serve as
data source for coreos-cloudinit config data (which includes plain scripts).
Document them properly and also call out that the networking variables only
work if coreos-cloudinit data is used.
For some use cases, too few networking variables were available. Add secondary
routing variables for the main network interface and add a second interface.
There was a logical mistake in Ignition that caused ignition.config.*
only to work when it was part of the ovfenv. Thus they were added but
the old CoreOS variables marked deprecated and kept. With both as OVF
variables each of them worked but directly specifying ignition.config.*
as guest variable still didn't because of the logical mistake.
Now there is a fix and both work well when specified directly as guest
variable (https://github.com/flatcar-linux/ignition/pull/11).
Delete the old CoreOS OVF variables because they just clutter the UI
and only the Ignition variables should be used in the UI.
Write out an iPXE script file for Packet.
The script uses relative URLs to refer to
the other PXE files and thus can be copied
along with the files to any server.
This is useful because it saves the creation
of an iPXE script for a release/channel on a
third-party service. For CI testing it is
also helpful because the script does not only
end up on the release server but also already
on the Google buckets, referring to unpublished
PXE payloads.
For the Ignition variables to be usable they need to be
specified in the OVF.
Call out that the CoreOS variables are deprecated to
reduce confusion when both are displayed besides each other.
Create a tar ball with the contents of the / and /usr partitions
to be used as follows with systemd-nspawn (via machinectl):
machinectl import-tar flatcar-container.tar.gz flatcar-container
machinectl start flatcar-container
machinectl shell flatcar-container
or with docker by converting it to an OCI image:
docker import -c "CMD /bin/bash" flatcar-container.tar.gz flatcar-container
Since the new "prodtar" command relies on the results of the "prod" command,
it bundles it so that "prod prodtar" and "prodtar" are the same.