source_on_disk() so far relied on the 'sourcePackage' field, which contains the
primary dependency of a torcx package (app-torcx/docker ->
app-emulation/docker). Now the 'metaPackage' field (app-torcx/docker) is used,
which lets us look at RDEPEND and figure out all packages that are indirectly
installed when installing a torcx package. torcx_dependencies() does just that,
so move its definition to torcx_manifest.sh.
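For illustration, the full set of packages pulled in by the meta package can
be listed with the usual Portage tooling, e.g. (not necessarily how
torcx_dependencies() is implemented):

  emerge --pretend --emptytree app-torcx/docker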
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The torcx_manifest.json file currently has a 'sourcePackage' field which is
extracted from the first runtime dependency of the torcx package ebuild. This
is a convention, and causes sourcePackage to hold 'app-emulation/docker' for
the 'app-torcx/docker' package. This does not carry enough information to be
able to figure out what other packages are part of the torcx package.
Store an additional field, 'metaPackage', in the manifest which contains the
name of the torcx package. With the right ebuild it is then possible to figure
out what other packages are part of a given torcx package. This can then be
used to add that information to the image packages list.
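For the docker package the two fields then read roughly as follows (manifest
fragment for illustration, surrounding JSON structure omitted):

  "sourcePackage": "app-emulation/docker",
  "metaPackage": "app-torcx/docker"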
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
Instead of looping over the package list, pass all the packages to a single
emerge call and specify the number of jobs. This lets emerge build and install
all of them in parallel, shaving some time off the torcx build.
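Roughly (illustrative; the exact flags, job count, and variable names used in
the build script may differ):

  emerge --jobs=4 --load-average=4 "${torcx_packages[@]}"

instead of one emerge invocation per package.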
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The entries added in changelog/security/ do not follow our existing
security section in the release notes:
https://www.flatcar.org/releases/#release-3033.2.0
Document the expected structure and add an example so that new entries use
the format we need for release note generation.
The default image group is already encoded in
/usr/share/flatcar/update.conf, but it was written to
/etc/flatcar/update.conf as well. This can cause problems when the user
switches channels by forcing an update to a specific release from a
different channel (e.g., through the flatcar-update tool), because it
leaves the file under /etc/flatcar/update.conf out of sync with the new
channel version in /usr/share/flatcar/update.conf.
Since we don't need to write a specific channel to /etc on new images, as
we can rely on the value from /usr, we now leave any overwriting of the
value in /etc entirely to the user.
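For reference, a minimal sketch of the two files (contents are illustrative;
the shipped group matches the image's channel):

  # /usr/share/flatcar/update.conf (shipped default)
  GROUP=stable

  # /etc/flatcar/update.conf (optional, user-managed override)
  GROUP=beta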
Since the update of dev-python/certifi, running the command
`./image_to_vm.sh --format gce --board=amd64-usr` fails due to a
dangling symlink. This symlink is located in
/usr/lib64/python3.9/site-packages and is not supposed to be installed
in the first place because of this INSTALL_MASK entry in
coreos-overlay/profiles/coreos/targets/generic/oem-aci/make.defaults:
INSTALL_MASK="${INSTALL_MASK}
/usr/*/python3*
"
There is an open upstream bug about INSTALL_MASK not handling symlinks
correctly (https://bugs.gentoo.org/678462).
The best we can do at this time is to ignore the dangling symlink.
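One possible way to do that is to drop broken links from the masked
directory before the check runs (sketch only, assuming a hypothetical
${oem_root} staging directory; the actual workaround may differ):

  find "${oem_root}/usr/lib64/python3.9/site-packages" -xtype l -delete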
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The original intention of the "binpkg" prefix in the CI binary package
cache URL was to separate packages from other build artifacts like
containers, images, and SDK tarballs. The motivation was to keep
developer content (binary packages) apart from CI automation artifacts
(everything else), since binary packages are not used by the CI.
This broke assumptions in scripts which use the binary host URL for
things other than packages - e.g. SDK tarballs or images. These
scripts would get a bincache URL with "binpkg/" prepended, while CI
automation would *not* use that prefix.
This change removes the use of "binpkg/" altogether since it would not
work as intended without more significant changes to build scripts.
garbage_collect.sh was using 'docker_vernum' where it should have been
using 'vernum' (as push_pkgs.sh does).
Also, make sure release directories are removed, not just packages.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
This change adds a job to the CI automation for publishing binary packages
to the build cache server.
Also, setup_board is updated to use the buildcache package cache if a
nightly build version is detected.
Signed-off-by: flatcar-ci <infra+ci@flatcar-linux.org>
In bce3bd9031, we added support for podman
for building and running the SDK container. The presence of podman is
auto-detected in sdk_container_common.sh. However, podman is preferred
over docker, requiring users to use *sudo* (which podman requires and
docker does not).
This change uses docker when present, podman otherwise. It also improves
podman detection - 'podman' uses argv[0] in its version string, so if
'docker' is a symlink to 'podman', the 'docker --version' output reads
'docker' rather than 'podman'. This broke the SDK container on hosts which
have a 'docker' symlink to 'podman', since 'podman' was then run w/o 'sudo'.
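A rough sketch of the intended behaviour (variable names are assumptions; the
real detection lives in sdk_container_common.sh):

  docker_bin="$(command -v docker || true)"
  if [ -n "${docker_bin}" ] && \
     [ "$(basename "$(readlink -f "${docker_bin}")")" != "podman" ]; then
      DOCKER_CMD="docker"       # real docker, no sudo needed
  else
      DOCKER_CMD="sudo podman"  # podman, possibly behind a 'docker' symlink
  fi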
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
At least with Podman it's not possible to call "container rm" on a
running container without the force flag.
Add the force flag which is also used elsewhere already.
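E.g. (illustrative):

  docker container rm -f "${container_name}"

which works the same way with Docker and Podman.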
sdk_entry.sh is expected to be called by the root user, so we set USER
root:root. Also, we add "root" entries to passwd and group since they do
not exist in the SDK tarball.
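A sketch of the added entries (illustrative; exact values may differ, and
${sdk_rootfs} is just a placeholder for the unpacked SDK root):

  echo 'root:x:0:0:root:/root:/bin/bash' >> "${sdk_rootfs}/etc/passwd"
  echo 'root:x:0:' >> "${sdk_rootfs}/etc/group"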
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
The creation of the target version file failed:
/home/sdk/sdk_entry.sh: line 32: /build/amd64-usr/etc/target-version.txt: Permission denied
Use root permissions to create the file.
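One way to do that (sketch; the actual change in sdk_entry.sh may differ, and
${BOARD} is just a placeholder here):

  echo "${target_version}" | \
      sudo tee "/build/${BOARD}/etc/target-version.txt" >/dev/null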
When the docker wrapper script for Podman is used, we need to
explicitly create a root user container with "sudo podman".
Podman also has its own bridge for root user containers which we need
to detect, and it requires explicitly specifying the Docker Hub
Caddy image.
Add a "$docker" variable that uses sudo podman as needed, and also
check which bridge interface to use. The filter had to be changed
because it didn't work with Podman. Use the Docker Hub Caddy image
explicitly.
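A very rough sketch of the result (variable names and the bridge detection
here are assumptions; the actual script differs in detail):

  if [ "${is_podman}" = "true" ]; then
      docker="sudo podman"
  else
      docker="docker"
  fi
  # pick whichever bridge the runtime created, e.g. docker0 or cni-podman0
  bridge="$(ip -o link show | \
      awk -F': ' '/docker0|cni-podman0/ {print $2; exit}')"
  # reference the Docker Hub Caddy image by its fully qualified name
  ${docker} run -d --name http-server docker.io/library/caddy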
This change ensures the binpkg host is updated if the board (OS) version
differs from the SDK version. This way, /build/[arch] uses the correct
binary package cache.
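Sketch of the idea (the file path and variable names are assumptions; the
real logic in the scripts differs):

  if [ "${board_version}" != "${sdk_version}" ]; then
      echo "PORTAGE_BINHOST=\"${board_binhost_url}\"" | \
          sudo tee "/build/${BOARD}/etc/portage/make.conf.binhost" >/dev/null
  fi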
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
For execution of the compiled binaries in /build/arm64-usr we rely on
qemu-user binfmt emulation and have to tell it where the root is with
QEMU_LD_PREFIX, because build systems don't chroot into /build/arm64-usr
themselves (this also only works by chance on amd64 because we have
similar glibc versions and so on). The env var setup was done in
/etc/profile.d/qemu-aarch64.sh but is no longer read since the
container does not run the shell as a login shell.
Add the login options to the bash and su calls when starting the
container.
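The profile snippet in question roughly amounts to (illustrative):

  # /etc/profile.d/qemu-aarch64.sh
  export QEMU_LD_PREFIX=/build/arm64-usr

and making the shells login shells means it gets sourced again, e.g.
(sketch, exact invocations in the scripts may differ):

  sudo su -l sdk -c "${cmd}"
  bash --login -c "${cmd}"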
For test builds, the commit that updates the submodules can be free-
standing, but for releases we need to push it to the branch and also
sign the tag.
Add optional arguments that are used by the tag-release script in
flatcar-build-scripts.
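For illustration, pushing and signing would roughly look like this (the exact
commands and options used by tag-release may differ):

  git push origin "HEAD:${branch}"
  git tag --sign "${tag}" --message "${tag}"
  git push origin "${tag}"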
In 9fba5789f9 we introduced
--torcx_output_root as an optional command line parameter
and had it default to "${DEFAULT_BUILD_ROOT}", inadvertently
diverging from the previous default, which was
"${DEFAULT_BUILD_ROOT}/torcx".
This change sets the correct default root "${DEFAULT_BUILD_ROOT}/torcx" to bring
build_packages back into alignment with build_image.
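A sketch of the corrected flag definition (shflags style as used by the build
scripts; the description text here is paraphrased):

  DEFINE_string torcx_output_root "${DEFAULT_BUILD_ROOT}/torcx" \
      "Directory in which to place torcx packages and manifests"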
run_sdk_container uses the sourcetree version to decide whether to
re-use existing containers or create new ones. However, containers were
not matched by exact name - instead, plain --filter name="..." was used,
leading to prefix matching. This change updates name="..." to use
regular expressions for exact matching.
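E.g. (illustrative; the exact expression in run_sdk_container may differ):

  docker container ls --all \
      --filter "name=^${container_name}$" --format '{{.Names}}'

so that only a container whose full name matches is reused.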
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>