* pass additional ldflags so that `syft version` prints the package
version (see the sketch below).
* keyword stable for amd64 and arm64, to reduce differences between the
two (also covered in the sketch below).
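A hedged sketch of what the two changes amount to in the ebuild (the Go
version-variable path is an assumption):
```bash
# Stable on both architectures:
KEYWORDS="amd64 arm64"

src_compile() {
    # Inject the package version so that `syft version` reports it.
    go build -ldflags "-X github.com/anchore/syft/internal/version.version=${PV}" ./cmd/syft
}
```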
This pulls in
https://github.com/flatcar-linux/bootengine/pull/47
which creates the grub.cfg file, if it does not exist, when the Ignition
kargs directive is used. This prevents an error when Ignition tries to read
the current settings from the file.
When the GnuPG keyserver is set to `keys.openpgp.org`, `gpg --recv-keys`
occasionally fails with the following error:
```
gpg: key E52F0DB391453C45: no user ID
```
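For reference, the failure can be reproduced with (key ID taken from the
error above):
```bash
gpg --keyserver keys.openpgp.org --recv-keys E52F0DB391453C45
```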
We need to make GnuPG accept keys even without UIDs.
Original patches come from
f292beac11/debian/patches/import-merge-without-userid .
See also https://dev.gnupg.org/T4393 .
Based on commit ff9200d8d3fce1feaa1eaa751a0dd2a50acbaae0 .
As gdb 11 or newer requires the gmp libraries as a dependency, a cross build
of gdb 11.2 started to fail when its configure scripts try to detect whether
gmp exists. The failure occurs mainly because the build still passes
`-L/usr/lib64` in LDFLAGS. For example, the host toolchain outside of the
sysroot may have amd64 libraries, while the target inside of the sysroot
should have arm64 libraries. However, the configure scripts of gdb 11.2
still look for the libraries outside of the sysroot, in /usr/lib64, although
they should be found inside of the sysroot, e.g. in /build/arm64/usr/lib64.
To fix the cross-build issues, pass both --with-sysroot and --libdir, set
correctly from ${ESYSROOT}.
As a side note, for some reason, the upstream gdb configure scripts are not
able to make correct use of the gmp-specific options like --with-gmp or
--with-gmp-lib; passing those options has no effect.
Also, configure must get both --with-sysroot and --libdir to make the
build work.
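A minimal sketch of the resulting configure call, assuming the ebuild's
econf helper (the surrounding code is an assumption):
```bash
# Point both the sysroot and the library search path at the board root so
# gmp is detected inside the sysroot instead of /usr/lib64.
econf \
    --with-sysroot="${ESYSROOT}" \
    --libdir="${ESYSROOT}/usr/lib64"
```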
To fix build issues that happen in adcli 0.9 with glibc 2.34, we should
sync adcli with upstream Gentoo, where the build issue is already fixed.
As Gentoo has the ebuild under the category `app-crypt`, we simply move
adcli from coreos-overlay to portage-stable, place it in the app-crypt
category, and update the version to 0.9.1-r2.
`docker.service` has a dependency on `containerd.service`:
```
$ systemctl list-dependencies docker.service
docker.service
containerd.service
...
```
If `docker.service` is not started (explicitly or via socket activation),
`containerd.service` won't start.
To ensure a seamless transition to kubernetes-1.24, let's enable
`containerd.service` by default.
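A hedged sketch of one way to do that from the ebuild, using the
systemd.eclass helper (phase placement is an assumption):
```bash
# Create the multi-user.target wants symlink so containerd starts on boot,
# independently of docker.service being socket-activated.
systemd_enable_service multi-user.target containerd.service
```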
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
We add `sys-apps/ignition` as a `coreos-base/coreos` dependency to make
`/usr/libexec/ignition-rmcfg` available on the _real_ root.
We want `/usr/bin/ignition` to be in the chroot until it has been copied
to the initramfs, but we don't want it on the actual root.
With `PKG_INSTALL_MASK`, we prevent `/usr/bin/ignition` from being added
to the image during `./build_image` - at that point, the initramfs is already
created and `sys-apps/ignition` is a binary package.
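A minimal sketch of the mask (where exactly it is set is an assumption):
```bash
# Keep the binary out of the final image; the initramfs already has its copy.
PKG_INSTALL_MASK+=" /usr/bin/ignition"
```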
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
This helper removes the config from VMware and VirtualBox and should not be
used directly by the user.
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
This change adds multiple tools to ARM64 which were formerly only
present in the X86-64 image.
Added for ARM64:
net-fs/cifs-utils
sys-auth/realmd
app-admin/adcli
app-crypt/go-tspi
This leaves only the xenserver-pv-version and xenstore packages
exclusively on X86-64.
The change unmasks the amd64 and arm64 keywords for sys-libs/liburing-2.1-r2
and the arm64 keyword for dev-libs/ding-libs-0.6.1-r1, overriding Gentoo
upstream defaults in portage-stable.
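Roughly, the profile entries amount to the following (file name and keyword
form are assumptions):
```bash
# package.accept_keywords
=sys-libs/liburing-2.1-r2 ~amd64 ~arm64
=dev-libs/ding-libs-0.6.1-r1 ~arm64
```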
Partially fixes https://github.com/flatcar-linux/Flatcar/issues/689.
Fixes https://github.com/flatcar-linux/Flatcar/issues/690.
Disabling it per-package is a no-op since we disable berkdb globally
through the make.defaults file.
Also drop redundant enabling of berkdb in sys-libs/gdbm in target
profile, because we already do it in the base profile.
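For context, the global setting in make.defaults looks roughly like this
(a sketch, not the literal file contents):
```bash
USE="${USE} -berkdb"
```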
Python 3.10 seems to be picked up for some reason during the SDK build,
instead of Python 3.9.9:
```
emerge: there are no ebuilds to satisfy "dev-lang/python-exec[python_targets_python3_10(-)]".
(dependency required by "dev-lang/python-3.10.2_p1::portage-stable" [ebuild])
(dependency required by "sec-policy/selinux-base-2.20200818-r2::coreos" [ebuild])
(dependency required by "sec-policy/selinux-base-policy-2.20200818-r2::coreos" [ebuild])
(dependency required by "sec-policy/selinux-unconfined-2.20200818-r2::portage-stable" [ebuild])
```
Fix build issues with Rust 1.61.0 when applying
gentoo-musl-target-specs.patch.
```
error[E0308]: mismatched types
 --> compiler/rustc_target/src/spec/aarch64_gentoo_linux_musl.rs:6:24
  |
6 |     base.llvm_target = "aarch64-gentoo-linux-musl".to_string();
  |     ----------------   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected enum `Cow`, found struct `std::string::String`
  |     |
  |     expected due to the type of this binding
  |
  = note: expected enum `Cow<'static, str>`
             found struct `std::string::String`
```
Replace `to_string` with `into`, i.e. `base.llvm_target = "aarch64-gentoo-linux-musl".into();`.
Based on Gentoo commit 445f23597c942b087145b869ac588fc1c1eac759.
In the `init.sh` of the OEM GCE container, we have the following
section:
```bash
wait -n "${daemon_pids[@]}" || :
kill "${daemon_pids[@]}" || :
test -n "$stopping" || exit 1
exec /usr/bin/google_metadata_script_runner --script-type shutdown
```
The `shutdown` script was not executed because the container was receiving a
`SIGKILL`, so the started processes were not properly terminated.
According to the `systemd-nspawn` manual:
```
If --boot is not used and this option is not specified,
the container's processes are terminated abruptly via SIGKILL.
```
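A hedged sketch of the corresponding fix: pass an explicit kill signal so
the container's PID 1 receives a catchable signal (the exact signal choice
is an assumption):
```bash
systemd-nspawn --kill-signal=SIGTERM ...  # instead of the abrupt SIGKILL
```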
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
Add a symlink-usr USE flag for keeping a minimal set of terminfo
files in /usr/share/terminfo.
Also allow writes to /dev/ptmx; sandbox denials there sometimes cause
Jenkins builds to fail.
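A minimal sketch of the sandbox allowance in the ebuild (phase placement is
an assumption):
```bash
src_configure() {
    # The configure run opens ptys; allow writes to /dev/ptmx in the sandbox.
    addwrite /dev/ptmx
    default
}
```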
Based on 09951dc3db0f79294eb223a9154f372e24c1d99d.
- remove unnecessary files
- drop `pkg_postinst`
- create `/etc/ssl` with tmpfiles (see the sketch below)
- mark openssl as stable for arm64 and amd64
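A sketch of the tmpfiles.d entry (file name, mode, and ownership are
assumptions):
```bash
# /usr/lib/tmpfiles.d/openssl.conf
d /etc/ssl 0755 root root -
```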
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
We updated Python and the related eclasses some time ago, so I think this
ebuild should work fine now. It also needs updating because net-fs/samba
started to require a newer version of it.
- Add a minimal USE flag for only installing libraries
- Change the Perl run-time dep to build-time only
- Disable building libraries requiring Python
- Limit the size of bundled libraries
Since linux-firmware 20220509, intel/ice/ddp/ice-1.3.26.0.pkg was
updated to ice-1.3.28.0.pkg. As a result, the ice.pkg symlink also needs to
be updated so that it points to the correct version of the file.
Create a variable for the ICE DDP version for easier maintenance.
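A hedged ebuild sketch of that change (the variable name is an assumption):
```bash
ICE_DDP_VERSION="1.3.28.0"
# Point the stable name at the versioned firmware file.
dosym "ice-${ICE_DDP_VERSION}.pkg" /lib/firmware/intel/ice/ddp/ice.pkg
```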
Use Go 1.18 instead of 1.17 by default in all ebuilds.
Note, we still keep building app-emulation/docker{,-cli} with Go 1.17,
to be consistent with upstream Docker 20.10.x, which still builds with
Go 1.17. That should avoid potential unexpected regressions that
happened in the past.
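A hedged sketch of such a per-package pin, assuming the coreos-go eclass
convention used in coreos-overlay:
```bash
# app-emulation/docker ebuild: stay on Go 1.17 to match upstream 20.10.x.
COREOS_GO_VERSION="go1.17"
inherit coreos-go
```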
Update the default version of dev-lang/go to 1.18.2.
Keep go1.17 as well to build docker{,-cli} with Go 1.17.
Use EAPI=7 for all versions.
See also https://go.dev/doc/go1.18.
We should update the EAPI from 6 to 7, as old EAPIs are being deprecated in
general. To make the ebuild work with EAPI=7, replace
get_version_component_range with ver_cut, as get_version_component_range
does not work anymore with EAPI 7. As a result, the versionator eclass is no
longer needed.
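The replacement looks like this; `MY_PV` is a hypothetical variable for
illustration:
```bash
# EAPI 6, versionator.eclass:
#   MY_PV=$(get_version_component_range 1-2)
# EAPI 7 built-in helper:
MY_PV=$(ver_cut 1-2)
```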
There was a kernel regression on Xen HVM with regard to MSI interrupts that
affected certain AWS instances (m4 and similar). We reverted the patch that
broke networking, but in the meantime upstream found the actual cause and
provided a proper fix which is part of 5.15.38. Remove the obsolete patch.
Link: https://lore.kernel.org/all/20220504153056.686401990@linuxfoundation.org/
To be able to distinguish changelog entries from each other, we should
write a specific project name, e.g. coreos-overlay, instead of `PR`.
Changelog entries with a bare `PR` usually cause a lot of additional
rework when doing actual releases.
The GitHub Actions were defined for the LTS stream directly but we can
now follow the approach used for the other channels. This means that
in the future we could decide to create new Actions for 2022 by copying
the current one and modifying it when 2023 gets the new current LTS -
anyway some manual work would be required to set up Actions for both
old and new at the same time (we have no "previous" symlink on Origin).
We could retire the old LTS Actions immediately because the releases
don't occur on a fixed schedule but I think the automation is nice to
keep.
Use upstream Ignition (coreos/ignition) and apply our patches on top of
it.
This is already done the same way for coreos/afterburn.
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
The removal of the mantle ebuild file also meant that dnsmasq isn't
installed into the SDK anymore, yet we actually need it to run kola
QEMU tests in the SDK on the original CI pipeline. As long as the
original CI pipeline is kept, we have to keep kola's dependencies
like QEMU and dnsmasq around.
pahole is a build-time dependency of our kernel build, due to us setting
CONFIG_DEBUG_INFO_BTF. If pahole is missing, a `make modules_prepare` with our
kernel config results in symbols in the config changing. This will affect
people building kernel modules against coreos-sources in the developer
container, but not the SDK, because pahole is already in sdk-depends.
pahole is now an (explicit) BDEPEND of all the coreos-kernel/coreos-modules
packages, and we'll make it an RDEPEND of coreos-sources so that it is pulled
in whenever it might be necessary. Also add it to the coreos-dev package so
that it is included in the developer container by default; the uncompressed
size increase is <1MB.
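A sketch of the resulting dependency declarations (simplified; the real
ebuilds carry more entries):
```bash
# coreos-kernel / coreos-modules: needed at build time for BTF generation.
BDEPEND="dev-util/pahole"
# coreos-sources: pull pahole in wherever the sources are installed.
RDEPEND="dev-util/pahole"
```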
This is the fallback path that NVIDIA publishes for verifying that device
node creation was successful. It now handles multiple GPUs and creates the
nvidia-uvm node, with a dynamic major.
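A hedged sketch of the dynamic-major part of that fallback, following the
pattern NVIDIA documents (the exact script contents are an assumption):
```bash
# nvidia-uvm gets a dynamically assigned major; read it from /proc/devices
# and create the device node manually.
major=$(awk '$2 == "nvidia-uvm" {print $1}' /proc/devices)
[ -n "$major" ] && mknod -m 666 /dev/nvidia-uvm c "$major" 0
```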
The weird thing is that nvidia-smi and nvidia-modprobe also create some device
nodes and files under /dev, but this does not appear to be well documented. So
keep the static creation.
This involves putting libraries under /usr/lib64 and kernel modules under
/usr/lib/modules. This is an experiment at making the nvidia installation work
as a sysext as well, but there are still some issues around that. The major
issue was that `systemd-sysext refresh` would remove the OEM symlink and I
don't feel comfortable with `systemctl restart systemd-sysext` from within
another unit.
If anyone wants to try it, it's now a matter of:
```
ln -s /opt/nvidia/current /run/extensions/nvidia-driver
```
Bonus points for moving the nvidia binaries from /opt/bin to
/opt/nvidia/current/usr/bin.
Since we no longer need to run emerge in the developer container, we can as
well just treat the developer container more like a container image and use an
ephemeral overlay.
Currently the setup-nvidia script fails when re-executed. It should work in
cases when the driver is already built and just needs to be loaded, or when it
needs to be rebuilt for a new kernel (but driver version may not have changed).
To make this work, several changes were necessary:
* `./nvidia*.run -x -s` fails when already unpacked. Allow it so that we can
rebuild.
* there are several module dependencies for nvidia modules that are implicit,
related to i2c/ipmi. Probe those explicitly.
* `[ -f /dev/nvidia* ]` fails because those are character devices, so they
need a `[ -c ... ]` check (see the sketch after this list).
* `nvidia-modprobe` previously always failed, because it doesn't actually know
the location of the modules and can only call modprobe (modprobe looks into
/lib/modules/). We now explicitly probe the important modules, at that point
nvidia-modprobe just creates additional device nodes.
* `is_nvidia_installation_required` checks whether building and loading is needed.
Factor out the loading check so that we can reload the module after an update.
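A minimal sketch of the corrected check (the helper name is hypothetical):
```bash
nvidia_nodes_present() {
    for dev in /dev/nvidia*; do
        # -c matches character devices; the old -f test always failed here.
        [ -c "$dev" ] || return 1
    done
}
```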
Currently the script will reuse a developer container that was downloaded once,
without ensuring that the same version is used as the running image. This works
on the first boot, but wouldn't be correct after an OS update.
To resolve this, add a version number to the downloaded filename, and check
for the versioned dev container file. When the file is missing, we also clean
up all other dev container files via a glob remove.
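A hedged sketch of the scheme (file naming and the fetch step are
assumptions):
```bash
. /etc/os-release  # VERSION_ID of the running image
file="flatcar_developer_container-${VERSION_ID}.bin.bz2"
if [ ! -e "$file" ]; then
    # Remove stale copies of other versions via glob, then fetch the match.
    rm -f flatcar_developer_container-*.bin.bz2
    fetch_dev_container "$file"  # hypothetical download step
fi
```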
...by providing /etc/flatcar/nvidia-metadata. Newer driver packages do not
support some older Nvidia cards. An example is the Tesla K80 cards in
Standard_NC6 VMs on Azure, which are only supported up to the 470.x driver
version. To allow users to continue using those, give them a way to override
the driver version through /etc/flatcar/nvidia-metadata. For example, this
entry could be used to pin a specific driver version:
NVIDIA_DRIVER_VERSION=470.103.01
There are two ways to build the nvidia-driver - either against a full kernel
source tree in /usr/src/linux, or against a slim kernel-devel equivalent in
/lib/modules/*/build. The latter is provided by sys-kernel/coreos-modules
(see `install_build_source`). The interesting thing is that, in the absence
of --kernel-source-path, nvidia-installer will autodetect which
to use and already builds against /lib/modules/*/build on Flatcar right now. By
passing --kernel-name, we make that choice explicit and this allows us to skip
the emerge steps of the build.
Since this runs in the developer container, there is also no point in trying to
execute systemctl or depmod, so pass the flags to disable usage of those.
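A hedged sketch of the resulting invocation; only --kernel-name is confirmed
by the text above, and the flags that disable systemctl and depmod usage are
elided since their exact spelling isn't quoted here:
```bash
# Build explicitly against /lib/modules/$(uname -r)/build instead of letting
# the installer autodetect the kernel tree.
./nvidia-installer --silent --kernel-name="$(uname -r)"
```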
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>