The scripts that invoked `cros_workon` without specifying a path to
the script were not calling the internal `cros_workon` directly, but
rather a copy installed in `/usr/bin/cros_workon`.
`/usr/bin/cros_workon` comes from the `coreos-base/cros-devutil`
package and is a wrapper script that sources the `common.sh` file to
figure out the location of the `scripts` directory and finally invokes
the internal `cros_workon`. The curious thing is that the sourced
`common.sh` comes from the `/usr/lib/crosutils` directory, whose
contents come from the `dev-util/crosutils` package. That `common.sh`
is different from the one in the scripts directory, but fortunately
the part that detects the path to the `scripts` directory is the same.
I'm not sure where exactly the copy of `common.sh` in
`/usr/lib/crosutils` comes from - likely from somewhere in
`https://chromium.googlesource.com/chromiumos/platform/crosutils`.
Just cut out the middle layers and call the internal copy of
`cros_workon` directly.
Apparently `coreos-devel/sdk-extras` was originally meant to work as a
meta package that pulls in all the optional packages in the SDK at
once. It has been unmaintained for 2-3 years, so an attempt at `emerge
coreos-devel/sdk-extras` will give you a huge list of conflicts to
resolve. It is difficult to resurrect sdk-extras at the moment.
Delete `coreos-devel/sdk-extras` completely. Doing so lets us delete
more than 20 other packages from the source tree.
Now that coreos-devel/sdk-extras is gone, delete the now-unnecessary
configs in the profiles for app-portage/repoman, dev-go/glide,
dev-go/godep, dev-python/awscli, dev-python/botocore and
dev-python/s3transfer.
This version has officially documented support for python3, so it fits
our plan of removing python2 in favor of python3. When the switch
actually happens, we will need to update the ebuild to mention the
correct path to the python modules. The path contains the python
version, which is a hindrance; it would be nice to have it hidden
behind some variable.
There is also a version 2.4.0.2, but it's marked as a prerelease on
GitHub, so we decided to package 2.3.1.1 instead.
This is some fallout from converting the scripts from python2 to
python3. Output received from the functions in the subprocess module
is now returned as bytes, but we operate on it as if it were text. So
decode the bytes to strings. Otherwise we either get junk values
passed to the command-line utilities (for example, `b'/dev/loop2'`
instead of `/dev/loop2`), or exceptions are thrown because a function
expected a string.
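A minimal sketch of the kind of fix, assuming a small helper around
subprocess (the function name and the losetup example are
illustrative, not taken from the actual scripts):

```python
import subprocess

# Under python3, check_output() returns bytes unless text mode is
# requested; decode before handing the value to other tools, otherwise
# str() turns it into "b'/dev/loop2'".
def get_loop_device(image_path):
    # 'losetup --show -f FILE' attaches FILE and prints the loop device.
    out = subprocess.check_output(['losetup', '--show', '-f', image_path])
    return out.decode('utf-8').strip()
```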
This is just a conversion done by 2to3, with manual updates of the
shebangs to mention python3 explicitly. The fixups for the bytes vs
string issues will follow.
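For illustration, the typical shape of such a conversion (not a diff
from the actual scripts):

```python
#!/usr/bin/env python3
# The shebang above is the manual part; 2to3 itself rewrites constructs
# such as the print statement and dict.iteritems():
opts = {'board': 'amd64-usr', 'noclean': True}
for key, value in opts.items():      # was: opts.iteritems()
    print('%s=%s' % (key, value))    # was: print '%s=%s' % (key, value)
```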
The script is potentially useful, but it seems to be unused anyway. We
can bring it back later if there's a need. Note that bringing it back
will require updating it to python3 first, which is why I'm dropping
it now - it's one less python2 script to port.
Upstream has switched to go 1.16, but still doesn't use go modules. The ebuilds
needed fixing up after the automated PR was created.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
To be able to support an arm64 native SDK without cross builds, we
should make generate_au_zip support both architectures, amd64 and
arm64. Without doing that, `build_image` fails with `ERROR : Required
WHITE_LIST items ld-linux-x86-64.so.2 not found!!!`, because the
script recognizes only amd64 libs in WHITE_LIST.
We should first determine the architecture in build_image before
running generate_au_zip, and pass the architecture along, either amd64
or arm64. Also add allow_list and ld_linux parameters to the necessary
functions.
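As an illustration, the per-architecture selection could look roughly
like this; the names below are hypothetical and may not match the
actual generate_au_zip code:

```python
# Hypothetical sketch: pick the dynamic loader per architecture instead
# of hard-coding the amd64 value; the rest of the allow list is shared.
ARCH_LD_LINUX = {
    'amd64': 'ld-linux-x86-64.so.2',
    'arm64': 'ld-linux-aarch64.so.1',
}

COMMON_ALLOW_LIST = ['libc.so.6', 'libdl.so.2', 'libpthread.so.0']

def build_allow_list(arch):
    """Return (ld_linux, allow_list) for the requested architecture."""
    ld_linux = ARCH_LD_LINUX[arch]
    return ld_linux, COMMON_ALLOW_LIST + [ld_linux]
```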
Set PYTHON_COMPAT to python 3.6 and 3.7 to be suitable for the current
code base.
Add a custom patch to replace an error with a warning when running
autoconf for cross builds, because libkrb5 is not able to detect
cross-compilation.
Based on 64e33c9f826d8fd951fd58ba1ed70debaf65be8d.
The SystemdCgroup=true setting is incompatible with the kubelet
setting cgroupDriver: cgroupfs. So, to prevent kube clusters from
failing, we will freeze a node's config.toml during an update. For
that purpose,
we install a second configuration file that can then be selected using a
systemd drop-in unit.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
Now that Docker has been updated to 20.10, we can use cgroupv2, so
have systemd mount the unified cgroup hierarchy by default. Another
way of achieving the same would have been to pass
'systemd.unified_cgroup_hierarchy=1' on the kernel cmdline, but this
way the change propagates nicely to all OEM consumers.
Signed-off-by: Jeremi Piotrowski <jeremi.piotrowski@gmail.com>
The upstream docker repository location has changed to docker/docker.
Additionally, the cli component has been split out, which requires
fetching two hashes and updating two ebuilds. We also took the chance
to align the ebuild with Gentoo's, which means there is no longer a
live ebuild and no symlink.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
We are switching Flatcar to cgroupv2, which is supported by docker
20.10 and kubernetes 1.19. This requires setting the systemd cgroup
driver in the kubelet config.
Due to the unified cgroup hierarchy, kubernetes <1.19 will not work,
so remove all older versions.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>