Put them into targets/generic profile instead of duplicating them in
amd64/generic and arm64/generic profiles. There isn't anything
arch-specific in those USE flags.
These are not Flatcar specific modifications per se. We just bump the
version from 9.0.0099 to 9.0.0469 and drop a patch that was already
applied upstream.
If git show-ref returns an error, i.e. the branch already exists,
then we should not create a pull request, but simply return an error.
Otherwise, the Github Actions would always try to create pull
requests even when the branch still exists.
Now that upstream Docker 20.10.18 builds its source with Go 1.18
instead of 1.17, we should also remove the code that forced building
with 1.17 and simply build with 1.18.
Otherwise the build fails like:
```
vendor/archive/tar/common.go:541:32: undefined: any
vendor/archive/tar/strconv.go:204:15: undefined: strings.Cut
vendor/archive/tar/strconv.go:254:20: undefined: strings.Cut
vendor/archive/tar/strconv.go:276:13: undefined: strings.Cut
```
See also https://github.com/moby/moby/commit/3d4616f943b3.
This pulls in https://github.com/flatcar/mayday/pull/10
to update the package name after the github org move.
It also changes the homepage to use our repo instead of the archive.
The "flatcar-linux" github org was renamed to "flatcar". There are no
github redirects in this case, thus we have to fix the links.
Left to do are the patch files.
This Flatcar dependency now needs to be explicitly pulled into the OS
since this commit: 4a06200e9d
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
To make the Github Actions of LTS-2021 work with SDK containers,
checkout_branches needs to take an additional parameter,
CHECKOUT_SCRIPTS. It defaults to true and is set to false only for LTS-2021.
To be able to make each apply-patch script run with SDK containers,
we need to pass additional env variables like PACKAGES_CONTAINER or
SDK_NAME.
Note, in the case of LTS-2021, we also need to pass CHECKOUT_SCRIPTS=false
to make LTS-2021 run with the run_sdk_container script.
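For illustration, the variables end up being passed roughly like this; the
exact workflow wiring and image names below are assumptions, not the literal
change:
```sh
# Hypothetical invocation; CHECKOUT_SCRIPTS, PACKAGES_CONTAINER and SDK_NAME
# are the variables named above, everything else is illustrative.
env \
  CHECKOUT_SCRIPTS=false \
  PACKAGES_CONTAINER=ghcr.io/flatcar/flatcar-packages-amd64:latest \
  SDK_NAME=flatcar-sdk-all \
  ./checkout_branches
```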
Now that the Flatcar SDK no longer supports mantle's cork, we need to
migrate the Github Actions of coreos-overlay to the new container-SDK-based
approach.
Simply download a container image of the latest Flatcar release,
run the container, and generate the patches from there.
Note, since the Flatcar scripts repo of LTS-2021 still does not
have the necessary Container SDK scripts like run_sdk_container, we
need to skip checking out a specific base branch in case of
LTS-2021.
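Roughly, the flow looks like this (a sketch, not the literal workflow
definition; the patch-generation step is a placeholder):
```sh
# Sketch only: fetch the Flatcar scripts repo and work inside the SDK container.
git clone https://github.com/flatcar/scripts.git
cd scripts
./run_sdk_container -t        # starts/enters the SDK container image
# ...generate the patches from inside the container...
```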
Since rsync 3.2.4, IUSE_CPU_FLAGS_X86="sse2" no longer exists in
upstream ebuilds, so it is not necessary to disable the
`cpu_flags_x86_sse2` USE flag to avoid cross-toolchain build
failures.
- Fix config install paths, use systemd-tmpfiles (all configs should
be installed to /usr and tmpfiles should be used to create and fix
directory permissions instead of the ebuild's postinst.)
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
The m3.small.x86 instance type had no serial console output because
ttyS0 was used, as the GRUB CPU check didn't trigger. It seems that
most instances report i386 but this new one doesn't (maybe EFI is
used here?).
Extend the GRUB check to cover both i386 and x86_64 when setting up the
serial console. For arm64 this still shouldn't be needed and the
defaults worked so far.
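A minimal sketch of the extended check; the real grub.cfg wording, variable
names and serial settings differ:
```
# Sketch only; serial parameters are illustrative.
if [ "$grub_cpu" = i386 -o "$grub_cpu" = x86_64 ]; then
    serial --unit=0 --speed=115200
    terminal_input serial console
    terminal_output serial console
fi
```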
- Carry over our custom tmpfiles and securetty files
- Remove /etc files and install them to /usr, use tmpfiles
- Switch /etc/login.defs edits to /usr/share/shadow/login.defs
- Drop moving passwd out of /usr since we don't have split-usr
- Drop pkg_postinst
- Make BDEPEND independent from DEPEND (`BDEPEND` is a build-time
requirement, so it should not be included in the whole `DEPEND` list.
If it is, an installation of `sys-auth/sssd` causes other dependencies
to be installed not only in `/build`, but also under the SDK. That's
not what we want, so we need to exclude `BDEPEND` from the list.)
- Move runstatedir option from configure to make (Upstream sssd 2.3.1
does not support the `--runstatedir` option in its configure script,
so we need to remove the option to fix the `unrecognized option
--runstatedir` configure error. Instead we pass `runstatedir=` to the
emake commands.)
- Disable realm check for nsupdate (At the moment bind-tools does
not enable `gssapi`, so its `nsupdate` tool is not able to run the
`realm` command. As a result, the configure script of `sssd` fails
with a `syntax error` when running `echo realm | nsupdate`.
To avoid such issues, we need to disable the nsupdate check for
now. Once we can enable `gssapi` for the SDK correctly, we can
bring back the nsupdate check.)
- Add patch for CVE-2021-3621
- Set the conf dir path explicitly (Without passing the
--with-systemdconfdir flag, the configure script will query
pkg-config for the directory itself. In the cross-compilation
setup that we have, this results in the sysroot being prepended
to the path twice. systemd.eclass has a workaround for this issue,
but it does not provide an elegant getter for the system
configuration directory, thus we call `_systemd_get_dir`
ourselves.)
- Make it compatible with newer python versions.
- Fix samba version detection by exporting the CPP variable. For
some reason it was empty after the toolchain updates.
- take care of nscd.conf via tmpfiles, add files/nscd-conf.tmpfiles.
- don't run sanity checks in pkg_pretend to prevent gcc checks when
only the binary package is installed.
- comment out 'dostrip -x' to force the OS image binaries to be stripped
- remove everything glibc wants to put under /etc since we use
baselayout to provide that
## Description
When an EC2 instance boots up with a Flatcar image (even the latest), the kubelet fails.
The userdata defines (and should do so) that the `/etc/eks/bootstrap.sh` should run, which it does.
This seems to add an ExecStartPre to the kubelet.service:
`ExecStartPre=/usr/share/oem/eks/download-kubelet.sh`
Both the `bootstrap.sh` and the `download-kubelet.sh` are consistent with:
https://github.com/flatcar-linux/coreos-overlay/blob/main/coreos-base/flatcar-eks/files/bootstrap.sh
https://github.com/flatcar-linux/coreos-overlay/blob/main/coreos-base/flatcar-eks/files/download-kubelet.sh
The `download-kubelet.sh` fails with `Unsupported Kubernetes version` because the case statement on lines 24-50 (https://github.com/flatcar-linux/coreos-overlay/blob/main/coreos-base/flatcar-eks/files/download-kubelet.sh#L25) only has values for Kubernetes versions 1.15 to 1.21.
If I manually alter the file and add 1.22 (when I test this on a 1.22.9 Kubernetes deployment) and re-run the `bootstrap.sh`, it works fine as far as I can see: the node then joins the cluster, shows up as `Ready`, and pods start running on it.
The last PR I can see on this particular thing was done about a year ago: f0da7f8c9e
## Impact
No EKS support for kubernetes versions higher than 1.21
## Environment and steps to reproduce
1. **Set-up**: Create an EKS cluster with the latest flatcar AMI in the worker nodes
2. **Task**: SSH into the node (probably through a Bastion)
3. **Action(s)**: No actions needed
4. **Error**: kubelet.service fails because the download-kubelet.sh doesn't have download locations for kubernetes version above 1.21
## Expected behavior
Download locations for kubernetes versions 1.22 and 1.23 (EKS doesn't have support for 1.24 yet it seems) should be located inside the download-kubelet.sh
## Additional information
By running `aws s3 ls s3://amazon-eks/` you can list the available locations of the other versions, so it should result in this:
``` sh
case $CLUSTER_VERSION in
1.23)
S3_PATH="1.23.9/2022-07-27"
;;
1.22)
S3_PATH="1.22.12/2022-07-27"
;;
1.21)
S3_PATH="1.21.2/2021-07-05"
;;
1.20)
S3_PATH="1.20.4/2021-04-12"
;;
1.19)
S3_PATH="1.19.6/2021-01-05"
;;
1.18)
S3_PATH="1.18.9/2020-11-02"
;;
1.17)
S3_PATH="1.17.12/2020-11-02"
;;
1.16)
S3_PATH="1.16.15/2020-11-02"
;;
1.15)
S3_PATH="1.15.12/2020-11-02"
;;
*)
echo "Unsupported Kubernetes version"
exit 1
;;
esac
```
We fetch the latest release of calico from projectcalico/calico
releases instead of from the calico-version.yaml file in the tigera/operator
repo. This is because we download the Tigera Operator manifest from
the calico repository, so we can expect that when the release happens,
both calico and the operator agree on versions used (so we expect that
calico 3.24.0 is using operator version 1.28.0, and the operator
1.28.0 is using calico 3.24.0).
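A hedged sketch of the fetch logic described above, not the actual update
automation; the manifest path and jq usage are assumptions:
```sh
calico_version="$(curl -fsSL https://api.github.com/repos/projectcalico/calico/releases/latest | jq -r .tag_name)"
curl -fsSLO "https://raw.githubusercontent.com/projectcalico/calico/${calico_version}/manifests/tigera-operator.yaml"
```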
Update keywords to stable amd64 and arm64.
Note, fix-dos patch is not necessary any more, because 1.3.2-r1 from
upstream Gentoo already has the patch.
Based on commit f3150e4b458e8d8979a37a91e44a7e1d2334d2aa.
and refresh other patches. The changes in PCI irq masking on hyperv resulted in
the previous set of patches not building on arm64. Resolve this by taking
another 2 patches. Patch z0006 makes the non-compiling code x86 specific
(fixing the build failure on arm64) and patch z0007 fixes a subsequent "not
used function" error.
ORIG_HEAD is the previous HEAD, so it is not what we are after. HEAD
only contains the hash if we are in a detached head situation, otherwise
it will contain a ref and we need to resolve it. `git rev-parse HEAD`
should work as well but hits an issue with git's new `safe.directory`
setting, and I have not found a way to set this parameter for a single call.
Toolchain packages are built with catalyst, and the HEAD value needs
to be pre-resolved because we do not have access to the whole git
repository. So build_toolchains will need to inject the correct HEAD
file contents.
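A minimal sketch of the resolution logic, assuming a plain .git layout (not
the literal script):
```sh
# Resolve HEAD to a commit hash without `git rev-parse`.
head_content="$(cat .git/HEAD)"
if [[ "${head_content}" == ref:* ]]; then
    ref="${head_content#ref: }"
    # Look up the ref in the loose ref file, falling back to packed-refs.
    head_hash="$(cat ".git/${ref}" 2>/dev/null || \
                 awk -v r="${ref}" '$2 == r { print $1 }' .git/packed-refs)"
else
    head_hash="${head_content}"   # detached HEAD already contains the hash
fi
```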
If the uri points to a path within the repo then the format is
git+https://repo@ref#path. ORIG_HEAD is actually the previous HEAD, so read
and use that to extract the correct ref.
This change adds initial support for SLSA provenance report generation.
Reports are generated in package build post-install hooks after
compilation.
See https://slsa.dev/ for SLSA and https://slsa.dev/provenance/v0.2 for
the provenance report syntax.
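As a rough illustration of the mechanism, not the actual hook, a Portage
phase hook could emit a minimal statement like this (the real report carries
much more metadata):
```sh
# Minimal sketch, assuming a bashrc-style post_src_install hook.
post_src_install() {
    cat > "${T}/provenance.json" <<EOF
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "subject": [ { "name": "${CATEGORY}/${PF}" } ]
}
EOF
}
```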
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
Our ebuild modifies the systemd owned tmpfiles.d entry that creates the
/etc/resolv.conf symlink to point to resolv.conf instead of stub-resolv.conf.
The file that contains that entry changed from etc.conf.in to
systemd-resolve.conf, so update the ebuild to touch that file.
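For reference, the entry in question looks roughly like this after our
modification (exact upstream contents may differ):
```
L! /etc/resolv.conf - - - - ../run/systemd/resolve/resolv.conf
```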
* pass additional ldflags so that `syft version` prints the package
version.
* keyword stable for amd64 and arm64 (to reduce differences between the
two).
This pulls in
https://github.com/flatcar-linux/bootengine/pull/47
which creates the grub.cfg file if it does not exist when the Ignition
kargs directive is used, preventing an error when it tries to read the
current settings from it.
When the GnuPG keyserver is set to `keys.openpgp.org`, `gpg --recv-keys`
occasionally fails with the following error:
```
gpg: key E52F0DB391453C45: no user ID
```
We need to make GnuPG accept keys even without UIDs.
Original patches come from
f292beac11/debian/patches/import-merge-without-userid .
See also https://dev.gnupg.org/T4393 .
Based on commit ff9200d8d3fce1feaa1eaa751a0dd2a50acbaae0 .
As gdb 11 or newer requires the gmp libs as a dependency, a cross build of
gdb 11.2 started to fail when its configure scripts try to detect if
gmp exists. The failure occurs mainly because the build still passes
`-L/usr/lib64` to LDFLAGS. Let's say, for example, host toolchains
outside of sysroot have amd64 libs, while the target inside of
sysroot should have arm64 libs. However, the configure scripts of gdb 11.2
still try to find the libs outside of the sysroot, in /usr/lib64, although
they should be found inside the sysroot, e.g. /build/arm64/usr/lib64.
To fix the cross build issues, pass --with-sysroot as well as --libdir, both
set correctly with ${ESYSROOT}.
As a side note, for some reason, the upstream gdb configure scripts are not
able to correctly make use of the gmp-specific options like --with-gmp
or --with-gmp-lib; passing those options has no effect.
Also configure must have both --with-sysroot and --libdir, to make the
build work.
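A hedged sketch of the configure arguments, not the literal ebuild diff:
```sh
# Illustrative only; the surrounding ebuild code differs.
myconf+=(
    --with-sysroot="${ESYSROOT}"
    --libdir="${ESYSROOT}/usr/$(get_libdir)"
)
econf "${myconf[@]}"
```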
To fix build issues that happen in adcli 0.9 with glibc 2.34, we should
sync adcli with upstream Gentoo, where the build issue is already fixed.
As Gentoo has the ebuild under the category `app-crypt`, we simply move
adcli from coreos-overlay to portage-stable, move it to the
app-crypt category, and update the version to 0.9.1-r2.
`docker.service` has a dependency on `containerd.service`:
```
$ systemctl list-dependencies docker.service
docker.service
containerd.service
...
```
If `docker.service` is not started (explicitly or via socket activation),
`containerd.service` won't start.
To ensure a seamless transition to kubernetes-1.24, let's enable
`containerd.service` by default.
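One way to express the default enablement is a systemd preset entry; this is
a sketch, the preset file name is illustrative and the actual mechanism in
our build may differ:
```
# e.g. in /usr/lib/systemd/system-preset/20-flatcar.preset
enable containerd.service
```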
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
We add `sys-apps/ignition` as a `coreos-base/coreos` dependency to get
`/usr/libexec/ignition-rmcfg` available on the _real_ root.
Now we want `/usr/bin/ignition` to be in the chroot until it is copied
to the initramfs, but we don't want it on the actual root.
With `PKG_INSTALL_MASK`, we'll prevent `/usr/bin/ignition` from being added
to the image in `./build_image` - at that point the initramfs is already
created and `sys-apps/ignition` is a binary package.
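The mechanism boils down to something like the following; where exactly we
set the variable may differ:
```sh
PKG_INSTALL_MASK="${PKG_INSTALL_MASK} /usr/bin/ignition"
```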
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
This helper removes config from VMware and VirtualBox and should not be
used directly by the user.
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
This change adds multiple tools to ARM64 which were formerly only
present in the X86-64 image.
Added for ARM64:
net-fs/cifs-utils
sys-auth/realmd
app-admin/adcli
app-crypt/go-tspi
This leaves only the xenserver-pv-version and xenstore packages
exclusively on X86-64.
The change un-masks keywords amd64 and arm64 for sys-libs/liburing-2.1-r2
and keyword arm64 for dev-libs/ding-libs-0.6.1-r1, overwriting Gentoo
upstream defaults in portage-stable.
Partially fixes https://github.com/flatcar-linux/Flatcar/issues/689.
Fixes https://github.com/flatcar-linux/Flatcar/issues/690.
Disabling it per-package is a no-op since we disable berkdb globally
through the make.defaults file.
Also drop redundant enabling of berkdb in sys-libs/gdbm in target
profile, because we already do it in the base profile.
It seems to be picked up for some reason during the SDK build, instead
of python 3.9.9:
```
emerge: there are no ebuilds to satisfy "dev-lang/python-exec[python_targets_python3_10(-)]".
(dependency required by "dev-lang/python-3.10.2_p1::portage-stable" [ebuild])
(dependency required by "sec-policy/selinux-base-2.20200818-r2::coreos" [ebuild])
(dependency required by "sec-policy/selinux-base-policy-2.20200818-r2::coreos" [ebuild])
(dependency required by "sec-policy/selinux-unconfined-2.20200818-r2::portage-stable" [ebuild])
```
Fix build issues with Rust 1.61.0 when applying
gentoo-musl-target-specs.patch.
```
error[E0308]: mismatched types
 --> compiler/rustc_target/src/spec/aarch64_gentoo_linux_musl.rs:6:24
  |
6 |     base.llvm_target = "aarch64-gentoo-linux-musl".to_string();
  |     ----------------   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected enum `Cow`, found struct `std::string::String`
  |     |
  |     expected due to the type of this binding
  |
  = note: expected enum `Cow<'static, str>`
             found struct `std::string::String`
```
Replace `to_string` with `into`.
Based on Gentoo commit 445f23597c942b087145b869ac588fc1c1eac759.
In the `init.sh` of the OEM GCE container, we have the following
section:
```bash
wait -n "${daemon_pids[@]}" || :
kill "${daemon_pids[@]}" || :
test -n "$stopping" || exit 1
exec /usr/bin/google_metadata_script_runner --script-type shutdown
```
The `shutdown` script was not executed because the container was receiving a
`SIGKILL`, so the started processes were not properly terminated.
According to the `systemd-nspawn` manual:
```bash
If --boot is not used and this option is not specified
the container's processes are terminated abruptly via SIGKILL
```
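The kind of fix this implies is passing an explicit kill signal to
systemd-nspawn; a sketch only, the real OEM unit, its other options and the
paths here are made up:
```
ExecStart=/usr/bin/systemd-nspawn --kill-signal=SIGTERM \
    --directory=/var/lib/google-oem-container /init.sh
```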
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
Add a symlink-usr USE flag for keeping a minimal set of terminfo
files in /usr/share/terminfo.
Also allow writes to /dev/ptmx; sandbox violations there sometimes cause
Jenkins builds to fail.
Based on 09951dc3db0f79294eb223a9154f372e24c1d99d.
- remove unnecessary files
- drop `pkg_postinst`
- create `/etc/ssl` with tmpfiles
- mark openssl as stable for arm64 and amd64
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
We have updated python and the related eclasses some time ago, so I
think this ebuild should be working fine now. Also, it needs updating,
because net-fs/samba started to require a newer version of it.
- Add a minimal USE flag for only installing libraries
- Change the Perl run-time dep to build-time only
- Disable building libraries requiring Python
- Limit the size of bundled libraries
Since linux-firmware 20220509, intel/ice/ddp/ice-1.3.26.0.pkg was
updated to ice-1.3.28.0.pkg. As a result, the symlink ice.pkg also needs to
be updated so that it points to the correct version of the file.
Create a variable for the ICE DDP version for better maintenance.
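A hedged sketch of what that looks like in the ebuild; the variable name is
illustrative:
```sh
ICE_DDP_VERSION="1.3.28.0"
dosym "ice-${ICE_DDP_VERSION}.pkg" /lib/firmware/intel/ice/ddp/ice.pkg
```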
Use Go 1.18 instead of 1.17 by default in all ebuilds.
Note, we still keep building app-emulation/docker{,-cli} with Go 1.17,
to be consistent with upstream Docker 20.10.x, which still builds with
Go 1.17. That should avoid potential unexpected regressions that
happened in the past.
Update the default version of dev-lang/go to 1.18.2.
Keep go1.17 as well to build docker{,-cli} with Go 1.17.
Use EAPI=7 for all versions.
See also https://go.dev/doc/go1.18.
We should update EAPI from 6 to 7, to deprecate old EAPIs in general.
To make it work with EAPI=7, replace get_version_component_range with
ver_cut, as get_version_component_range does not work any more with EAPI
7. As a result, the versionator eclass is not needed any more.
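For example, where versionator was used before, the EAPI 7 built-in replaces
it like this (the variable name is illustrative):
```sh
# EAPI 6, versionator.eclass:
#   MY_PV=$(get_version_component_range 1-2)
# EAPI 7 built-in:
MY_PV=$(ver_cut 1-2)
```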
There was a kernel regression on Xen HVM with regard to MSI interrupts that
affected certain AWS instances (m4 and similar). We reverted the patch that
broke networking, but in the meantime upstream found the actual cause and
provided a proper fix which is part of 5.15.38. Remove the obsolete patch.
Link: https://lore.kernel.org/all/20220504153056.686401990@linuxfoundation.org/
To be able to distinguish changelog entries from each other, we should
write a specific project name, e.g. coreos-overlay, instead of `PR`.
Changelog entries with a simple `PR` usually cause a lot of additional
rework when doing actual releases.
The GitHub Actions were defined for the LTS stream directly but we can
now follow the approach used for the other channels. This means that
in the future we could decide to create new Actions for 2022 by copying
the current one and modifying it once 2023 becomes the new current LTS -
some manual work would be required anyway to set up Actions for both the
old and the new LTS at the same time (we have no "previous" symlink on Origin).
We could retire the old LTS Actions immediately because the releases
don't occur on a fixed schedule but I think the automation is nice to
keep.
Use upstream ignition (coreos/ignition) and apply our patches on top of
it.
This is currently done the same way for coreos/afterburn.
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
The removal of the mantle ebuild file also meant that dnsmasq isn't
installed into the SDK anymore, yet we actually need it to run kola
QEMU tests in the SDK on the original CI pipeline. As long as the
original CI pipeline is kept, we have to keep kola's dependencies
like QEMU and dnsmasq around.
pahole is a build-time dependency of our kernel build, due to us setting
CONFIG_DEBUG_INFO_BTF. If pahole is missing, a `make modules_prepare` with our
kernel config results in symbols in the config changing. This will affect
people building kernel modules against coreos-sources in the developer
container, but not the SDK because pahole is already in sdk-depends.
pahole is now an (explicit) BDEPEND of all the coreos-kernel/coreos-modules
packages, and we'll make it an RDEPEND of coreos-sources so that it is pulled
in whenever it might be necessary. Also add it to the coreos-dev package so
that it is included in developer container by default, uncompressed size
increase is <1MB.
This is the fallback path that nvidia publishes for verifying device node
creation was successful. It now handles multiple gpus and creating the
nvidia-uvm node, with a dynamic major.
The weird thing is that nvidia-smi and nvidia-modprobe also create some device
nodes and files under /dev, but this does not appear to be well documented. So
keep the static creation.
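The published fallback boils down to reading the dynamic major from
/proc/devices; a minimal sketch, not the literal script, which also handles
multiple GPUs and more device nodes:
```sh
major="$(awk '$2 == "nvidia-uvm" { print $1 }' /proc/devices)"
mknod -m 666 /dev/nvidia-uvm c "${major}" 0
```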
This involves putting libraries under /usr/lib64 and kernel modules under
/usr/lib/modules. This is an experiment at making the nvidia installation work
as a sysext as well, but there are still some issues around that. The major
issue was that `systemd-sysext refresh` would remove the OEM symlink and I
don't feel comfortable with `systemctl restart systemd-sysext` from within
another unit.
If anyone wants to try it, it's now a matter of:
ln -s /opt/nvidia/current /run/extensions/nvidia-driver
Bonus points for moving nvidia binaries from /opt/bin to
/opt/nvidia/current/usr/bin.
Since we no longer need to run emerge in the developer container, we can as
well just treat the developer container more like a container image and use an
ephemeral overlay.
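An ephemeral overlay here means roughly the following; the mount points are
made up:
```sh
mkdir -p /run/devcontainer/{upper,work,merged}
mount -t overlay overlay \
    -o lowerdir=/mnt/devcontainer,upperdir=/run/devcontainer/upper,workdir=/run/devcontainer/work \
    /run/devcontainer/merged
```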
Currently the setup-nvidia script fails when re-executed. It should work in
cases when the driver is already built and just needs to be loaded, or when it
needs to be rebuilt for a new kernel (but driver version may not have changed).
To make this work, several changes were necessary:
* `./nvidia*.run -x -s` fails when already unpacked. Allow it so that we can
rebuild
* there are several implicit module dependencies of the nvidia modules,
related to i2c/ipmi. Probe those explicitly (see the sketch after this list).
* `[ -f /dev/nvidia* ]` fails because those are character devices, so need a
`[ -c ...]` check.
* `nvidia-modprobe` previously always failed, because it doesn't actually know
the location of the modules and can only call modprobe (modprobe looks into
/lib/modules/). We now explicitly probe the important modules, at that point
nvidia-modprobe just creates additional device nodes.
* `is_nvidia_installation_required` checks whether building and loading is needed.
Factor out the loading check so that we can reload the module after an update.
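A rough sketch of the checks from the list above; module and device names are
illustrative, the real script differs:
```sh
for dev in /dev/nvidia0 /dev/nvidiactl; do
    [ -c "${dev}" ] || need_device_nodes=1   # character devices, so -c, not -f
done
for mod in i2c_core ipmi_msghandler ipmi_devintf; do
    modprobe "${mod}" || true                # implicit i2c/ipmi dependencies
done
modprobe nvidia
```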
Currently the script will reuse a developer container that was downloaded once,
without ensuring that the same version is used as the running image. This works
on the first boot, but wouldn't be correct after an OS update.
To resolve this, add a version number to the downloaded filename, and check for
the versioned dev container file. When the file is missing, we also clean up
all other dev container files via a glob remove.
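A hedged sketch of the versioning and cleanup; the file naming is a
placeholder and download_dev_container is a hypothetical helper:
```sh
dev_image="flatcar_developer_container-${FLATCAR_VERSION}.bin.bz2"
if [ ! -f "${dev_image}" ]; then
    rm -f flatcar_developer_container-*.bin.bz2   # drop stale versions
    download_dev_container "${dev_image}"         # hypothetical helper
fi
```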
...by providing /etc/flatcar/nvidia-metadata. Newer driver packages do not
support some older Nvidia cards. An example is the Tesla K80 cards in
Standard_NC6 VMs on Azure, which are only supported up to the 470.x driver
version. To allow users to continue using those, give them a way to override
the driver version through /etc/flatcar/nvidia-metadata. For example, this
entry could be used to pin a specific driver version:
NVIDIA_DRIVER_VERSION=470.103.01