For the main branch (and thus for nightly builds) the group in
`/usr/share/flatcar/update.conf` is not "main" but "developer". This
needs a small translation when turning it into the channel name.
Without that, we try to check out a nonexistent tag named
`developer-3363.0.0-…` instead of `main-3363.0.0-…`, which fails.
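A minimal sketch of that translation, assuming the group is read from the `GROUP=` line of `/usr/share/flatcar/update.conf` (variable names are illustrative, not the actual script's):
```sh
# Illustrative only: map the update.conf group to the branch/channel name
# used as the git tag prefix.
group=$(sed -n 's/^GROUP=//p' /usr/share/flatcar/update.conf)
channel="${group}"
if [ "${channel}" = "developer" ]; then
    channel="main"   # nightly builds report "developer" but are tagged "main-…"
fi
```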
In developer builds the version string consists of the version number
and a build ID joined by a plus sign. Git tags are formatted the same
way, but with a dash instead of the plus, so the plus needs to be
replaced to obtain a proper git tag.
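For illustration, the substitution boils down to something like this (the version value below is hypothetical):
```sh
# Illustrative only: developer builds use VERSION+BUILD_ID, git tags use
# VERSION-BUILD_ID, so swap the '+' for a '-'.
version="3456.0.0+nightly-20221006-2100"               # hypothetical value
tag="main-$(echo "${version}" | sed 's/+/-/')"
echo "${tag}"   # -> main-3456.0.0-nightly-20221006-2100
```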
Put them into the targets/generic profile instead of duplicating them
in the amd64/generic and arm64/generic profiles. There isn't anything
arch-specific in those USE flags.
These are not Flatcar-specific modifications per se. We just bump the
version from 9.0.0099 to 9.0.0469 and drop a patch that was already
applied upstream.
If git show-ref returns an error, i.e. the branch already exists,
then we should not create a pull request but simply return an error.
Otherwise the Github Actions workflow would always try to create pull
requests even when the branch still exists.
Now that upstream Docker 20.10.18 has started building the source
with Go 1.18 instead of 1.17, we should also remove the code that
forces building with 1.17 and simply build with 1.18.
Otherwise the build fails like:
```
vendor/archive/tar/common.go:541:32: undefined: any
vendor/archive/tar/strconv.go:204:15: undefined: strings.Cut
vendor/archive/tar/strconv.go:254:20: undefined: strings.Cut
vendor/archive/tar/strconv.go:276:13: undefined: strings.Cut
```
See also https://github.com/moby/moby/commit/3d4616f943b3.
This pulls in https://github.com/flatcar/mayday/pull/10
to update the package name after the github org move.
It also changes the homepage to use our repo instead of the archive.
The "flatcar-linux" github org was renamed to "flatcar". There are no
github redirects in this case, thus we have to fix the links.
Left to do are the patch files.
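One way such a bulk update could be done (the file glob is an assumption; the patch files mentioned above still need a separate pass):
```sh
# Illustrative only: rewrite the org name in all tracked ebuilds.
git grep -lF 'github.com/flatcar-linux/' -- '*.ebuild' \
  | xargs sed -i 's|github.com/flatcar-linux/|github.com/flatcar/|g'
```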
This Flatcar dependency now needs to be explicitly pulled into the OS
since this commit: 4a06200e9d
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
To make the Github Actions of LTS-2021 work with SDK containers,
checkout_branches needs to take an additional parameter,
CHECKOUT_SCRIPTS. It defaults to true and is set to false only for
LTS-2021.
To be able to make each apply-patch script run with SDK containers,
we need to pass additional env variables like PACKAGES_CONTAINER or
SDK_NAME.
Note that in the case of LTS-2021 we also need to pass
CHECKOUT_SCRIPTS=false to make LTS-2021 run with the
run_sdk_container script.
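A hedged sketch of such an invocation; the container image, SDK name, and script name are placeholders, not the actual workflow values:
```sh
# Illustrative only: pass the extra environment to an apply-patch script.
CHECKOUT_SCRIPTS=false \
PACKAGES_CONTAINER="flatcar-packages-amd64:latest" \
SDK_NAME="flatcar-sdk-amd64" \
    ./apply-patch.sh     # hypothetical script name
```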
Now that the Flatcar SDK does not support mantle's cork tool any more,
we need to migrate the Github Actions of coreos-overlay to the new
container-SDK-based approach: simply download a container image of the
latest Flatcar release, run the container, and generate patches from
there.
Note that since the Flatcar scripts repo of LTS-2021 still does not
have the necessary container SDK scripts like run_sdk_container, we
need to skip checking out a specific base branch in the case of
LTS-2021.
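A hedged sketch of that flow; the image name, tag, and patch-generation script are assumptions, not the literal workflow contents:
```sh
# Illustrative only: pull the SDK container and generate patches inside it.
docker pull ghcr.io/flatcar/flatcar-sdk-all:latest
docker run --rm -v "${PWD}:/work" -w /work \
    ghcr.io/flatcar/flatcar-sdk-all:latest \
    ./generate-patches.sh    # hypothetical script name
```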
Since rsync 3.2.4, IUSE_CPU_FLAGS_X86="sse2" no longer exists in the
upstream ebuilds, so it is not necessary to disable the
`cpu_flags_x86_sse2` USE flag to avoid cross-toolchain build
failures.
- Fix config install paths, use systemd-tmpfiles (all configs should
  be installed to /usr, and tmpfiles should be used to create and fix
  directory permissions instead of the ebuild's postinst; see the
  sketch below.)
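For illustration, a tmpfiles.d rule of the kind described above could look like this (the file name, path, and mode are assumptions, not the actual rsync configuration):
```sh
# Illustrative only: create runtime directories at boot via systemd-tmpfiles
# instead of fixing permissions in pkg_postinst.
cat > /usr/lib/tmpfiles.d/example.conf <<'EOF'
d /var/lib/example 0750 root root -
EOF
```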
Signed-off-by: Mathieu Tortuyaux <mtortuyaux@microsoft.com>
The m3.small.x86 instance type had no serial console output: ttyS0 was
used because the GRUB CPU check didn't trigger. It seems that most
instances report i386 but this new one does not (maybe EFI is used
here?).
Extend the GRUB check to cover both i386 and x86_64 when setting up the
serial console. For arm64 this still shouldn't be needed and the
defaults worked so far.
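A hedged sketch of the extended grub.cfg check; the serial unit, speed, and terminal commands are assumptions, not the exact values used:
```
# Illustrative only: enable the serial console for both i386 and x86_64.
if [ "${grub_cpu}" = "i386" -o "${grub_cpu}" = "x86_64" ]; then
    serial --unit=0 --speed=115200
    terminal_input --append serial
    terminal_output --append serial
fi
```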
- Carry over our custom tmpfiles and securetty files
- Remove /etc files and install them to /usr, use tmpfiles
- Switch /etc/login.defs edits to /usr/share/shadow/login.defs
- Drop moving passwd out of /usr since we don't have split-usr
- Drop pkg_postinst
- Make BDEPEND independent from DEPEND (`BDEPEND` is a build-time
  requirement, so it should not be included in the whole `DEPEND`
  list. If it were, an installation of `sys-auth/sssd` would cause
  other dependencies to be installed not only in `/build`, but also
  under the SDK. That's not what we want, so we need to exclude
  `BDEPEND` from the list.)
- Move the runstatedir option from configure to make (Now that
  upstream sssd 2.3.1 does not support the `--runstatedir` option in
  its configure script, we need to remove the option to avoid
  configure errors like `unrecognized option --runstatedir`. Instead
  we pass `runstatedir=` to the emake commands; see the sketch after
  this list.)
- Disable the realm check for nsupdate (At the moment bind-tools does
  not enable `gssapi`, so its `nsupdate` tool is not able to run the
  `realm` command. As a result, sssd's configure script fails when
  running `echo realm | nsupdate` with errors like `syntax error`.
  To avoid such issues, we need to disable the nsupdate check for
  now. Once we can enable `gssapi` for the SDK correctly, we can
  bring back the nsupdate check.)
- Add patch for CVE-2021-3621
- Set the conf dir path explicitly (Without passing the
  --with-systemdconfdir flag, the configure script will query
  pkg-config for the directory itself. In the cross-compilation
  setup that we have, this results in a path with the sysroot
  prepended twice. systemd.eclass has a workaround for this issue,
  but it does not provide an elegant getter for the system
  configuration directory, so we call `_systemd_get_dir` ourselves.)
- Make it compatible with newer python versions.
- Fix samba version detection by exporting the CPP variable. For
some reason it was empty after the toolchain updates.
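A hedged ebuild sketch of the runstatedir move mentioned above (this is not the literal sssd ebuild; the configure arguments shown are placeholders):
```sh
# Illustrative only: drop --runstatedir from configure and hand the runtime
# state directory to make instead.
src_configure() {
    econf "${myconf[@]}"     # hypothetical flag array, without --runstatedir
}

src_compile() {
    emake runstatedir=/run
}
```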
- take care of nscd.conf via tmpfiles, add files/nscd-conf.tmpfiles.
- don't run sanity checks in pkg_pretend to prevent gcc checks when
only the binary package is installed.
- comment out 'dostrip -x' to force the OS image binaries to be stripped
- remove everything glibc wants to put under /etc since we use
baselayout to provide that
## Description
When an EC2 instance boots up with a Flatcar image (even the latest), the kubelet fails.
The userdata defines (and should do so) that the `/etc/eks/bootstrap.sh` should run, which it does.
This seems to add an ExecStartPre to kubelet.service:
`ExecStartPre=/usr/share/oem/eks/download-kubelet.sh`
Both the `bootstrap.sh` and the `download-kubelet.sh` are consistent with:
https://github.com/flatcar-linux/coreos-overlay/blob/main/coreos-base/flatcar-eks/files/bootstrap.sh
https://github.com/flatcar-linux/coreos-overlay/blob/main/coreos-base/flatcar-eks/files/download-kubelet.sh
The `download-kubelet.sh` fails with `Unsupported Kubernetes version` because the case statement on lines 24-50 (https://github.com/flatcar-linux/coreos-overlay/blob/main/coreos-base/flatcar-eks/files/download-kubelet.sh#L25) only has entries for Kubernetes versions 1.15 to 1.21.
If I manually alter the file to add 1.22 (I tested this with a 1.22.9 Kubernetes deployment) and re-run `bootstrap.sh`, it works fine as far as I can see: the node then joins the cluster, shows up as `Ready`, and pods start running on it.
The last PR I can see on this particular topic was done about a year ago: f0da7f8c9e
## Impact
No EKS support for kubernetes versions higher than 1.21
## Environment and steps to reproduce
1. **Set-up**: Create an EKS cluster with the latest flatcar AMI in the worker nodes
2. **Task**: SSH into the node (probably through a Bastion)
3. **Action(s)**: No actions needed
4. **Error**: kubelet.service fails because download-kubelet.sh doesn't have download locations for Kubernetes versions above 1.21
## Expected behavior
Download locations for Kubernetes versions 1.22 and 1.23 (EKS doesn't seem to support 1.24 yet) should be present inside download-kubelet.sh
## Additional information
By running `aws s3 ls s3://amazon-eks/` you can list the available locations for the other versions, so the case statement should end up like this:
``` sh
case $CLUSTER_VERSION in
  1.23)
    S3_PATH="1.23.9/2022-07-27/"
    ;;
  1.22)
    S3_PATH="1.22.12/2022-07-27/"
    ;;
  1.21)
    S3_PATH="1.21.2/2021-07-05"
    ;;
  1.20)
    S3_PATH="1.20.4/2021-04-12"
    ;;
  1.19)
    S3_PATH="1.19.6/2021-01-05"
    ;;
  1.18)
    S3_PATH="1.18.9/2020-11-02"
    ;;
  1.17)
    S3_PATH="1.17.12/2020-11-02"
    ;;
  1.16)
    S3_PATH="1.16.15/2020-11-02"
    ;;
  1.15)
    S3_PATH="1.15.12/2020-11-02"
    ;;
  *)
    echo "Unsupported Kubernetes version"
    exit 1
    ;;
esac
```
We fetch the latest calico release from the projectcalico/calico
releases instead of from the calico-version.yaml file in the
tigera/operator repo. This is because we download the Tigera Operator
manifest from the calico repository, so we can expect that when a
release happens, both calico and the operator agree on the versions
used (e.g. calico 3.24.0 uses operator version 1.28.0, and operator
1.28.0 uses calico 3.24.0).
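A hedged sketch of how the latest release could be queried (the GitHub API call and jq filter are illustrative, not the actual automation code):
```sh
# Illustrative only: look up the newest calico release tag, for which the
# matching Tigera Operator manifest is then downloaded.
curl -fsSL https://api.github.com/repos/projectcalico/calico/releases/latest \
    | jq -r '.tag_name'
```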