The rust ebuild has some magic to detect cross-toolchains present on the
system and enable building additional cross targets. The code to trigger
the rebuild of rust is part of install_cross_rust, and checks whether
the cross directories exist in the rust installation. If they don't,
then rust is removed and rebuilt to allow for the auto-detection to
happen.
Right now there are two issues with the code. Firstly, the path that is
checked is wrong, which leads to rust always being removed and rebuilt.
The path checked is /usr/lib/rust-*/rustlib but /usr/lib/rustlib is
where the files are installed.
The second issue is that it checks for aarch64 dirs when CHOST is
aarch64-cros-linux-gnu. However, on an aarch64 host the aarch64 dirs
will already exist from building the sdk itself. The rust ebuild is not
ready to handle aarch64 hosts yet and blows up. The correct behavior is
to combine the check for CHOST with a check for the right CBUILD.
On an aarch64 host we should presumably check for the x86 CHOST and rust
dirs, but that can be added later, because it needs more work.
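A minimal sketch of the intended check (variable names, the rustlib target directory, and the emerge invocation are assumptions; the real ebuild logic may differ):
```
# Only remove and rebuild rust when an x86_64 SDK host lacks the aarch64
# cross libraries; on an aarch64 host the directory already exists.
if [[ "${CHOST}" == aarch64-cros-linux-gnu && "${CBUILD}" == x86_64-* ]] &&
   [[ ! -d /usr/lib/rustlib/aarch64-unknown-linux-gnu ]]; then
    emerge --unmerge dev-lang/rust
    emerge --oneshot dev-lang/rust
fi
```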
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The arm64 profiles don't specify SYMLINK_LIB=yes, which makes sense
since arm64 systems don't support multilib in the way that we are used
to from x86. What this means is that build artifacts are installed into
separate lib and lib64 directories. The root overlay installed in stage4
needs to check for SYMLINK_LIB before trying to create a symlink,
otherwise it fails to be applied because it collides with the directory
in the rootfs.
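A minimal sketch of the guarded symlink creation (ROOT_FS_DIR is a placeholder for the overlay target):
```
# Only turn lib into a lib64 symlink on profiles that use SYMLINK_LIB;
# on arm64 /usr/lib is a real directory and must be left alone.
if [[ "${SYMLINK_LIB}" == "yes" ]]; then
    ln -sfn lib64 "${ROOT_FS_DIR}/usr/lib"
fi
```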
This uncovered a second minor issue - the rust toolchain bootstrap
scripts checked for /usr/lib64/rust*, but the ebuild installs to
/usr/lib/rust.
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The previously used uuid 4f68bce3-e8cd-4db1-96e7-fbcaf984b709 is valid
for x86_64 root partitions, which resulted in the dev container not
working with systemd-nspawn on aarch64. systemd-nspawn fails with:
No suitable root partition found in image
Change the partition uuid to the architecture agnostic one documented
in the man page:
A GUID partition table (GPT) with a single partition of type 0fc63daf-8483-4772-8e79-3d69d8477de4.
This makes systemd-nspawn happy on aarch64.
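For illustration, the equivalent change on an existing image would look roughly like this (the image name is a placeholder):
```
# Set the single partition's type GUID to the generic "Linux filesystem data"
# type that systemd-nspawn accepts on any architecture.
sgdisk --typecode=1:0fc63daf-8483-4772-8e79-3d69d8477de4 developer_container.bin
```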
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
The kola update tests need a dev-key-signed update payload. This was
lacking and caused the update tests to be skipped.
Generate the test update payload for both dev builds and release builds
and run the kola tests for both. The test update payload has a special
name so it is not confused with the real update payload for releases,
and we keep the previous behavior of signing releases. Therefore, the
generate_update function wasn't used; instead the extract_update
function was extended to also generate the additional test payload.
This change removes 8-year-old code from the toolchains build which
tries to update SDK libraries for unknown reasons and breaks the
toolchains build in the glibc-2.33 update.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
This change to stage 4 of the SDK bootstrap process will keep a
snapshot of coreos-overlay in the SDK tarball. This snapshot can be
used in future SDK bootstraps' stage1 to ensure a clean stage 1 output
without any package updates.
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
This change updates the stage1 SDK bootstrap build to use local
("known good") package ebuilds only, preventing updated package ebuilds
from being applied in stage 1. This fixes SDK build breakage we observed when
upgrading core libraries like readline.
The change also removes the seed update from stage 1 as it should not
be needed anymore now that we postpone any package updates to stage 2.
The following package ebuild repos are used for stage 1:
- for portage-stable, we simply copy /var/gentoo/repos/gentoo
from the SDK root.
- coreos-overlay is more complicated since its ebuilds are missing from
the SDK. So we extract the version the SDK was built with from
/mnt/host/source/.repo/manifests/default.xml
and then create a local stage 1 clone of
https://github.com/kinvolk/coreos-overlay.git
in which we check out the revision noted in the default manifest
(roughly as sketched below).
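A rough sketch of the coreos-overlay preparation (the stage1_root path and the manifest parsing are simplified assumptions):
```
# Determine the coreos-overlay revision the current SDK was built from.
rev="$(sed -n 's/.*coreos-overlay.*revision="\([^"]*\)".*/\1/p' \
       /mnt/host/source/.repo/manifests/default.xml)"

# Clone a local copy for stage 1 and pin it to that revision.
git clone https://github.com/kinvolk/coreos-overlay.git "${stage1_root}/coreos-overlay"
git -C "${stage1_root}/coreos-overlay" checkout "${rev}"
```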
Signed-off-by: Thilo Fromm <thilo@kinvolk.io>
The `--jobs` parameter that some scripts defined was not used anywhere
in jenkins or mantle. So the value of the parameter always ended up
being equal to `${NUM_JOBS}` set by `common.sh`. Also, even if the
`--jobs` parameter was used for some script, that script usually
didn't forward the jobs value to other scripts, so the other scripts
ended up using `${NUM_JOBS}` again. Also, the `${FLAGS_jobs}` variable
was used by some functions in the build library, and those functions
were sometimes invoked by scripts that didn't define the
`${FLAGS_jobs}` variable. It is tedious to track which script should
actually define the parameter, and where it should be forwarded.
Just get rid of this half-working pretense. If you want to affect how
many jobs `emerge` uses, export the `NUM_JOBS` environment variable
before calling any script.
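For example:
```
# Limit emerge parallelism for everything this invocation runs.
export NUM_JOBS=4
./build_packages
```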
For `EMERGE_FLAGS` and `REBUILD_FLAGS` we unconditionally set the
`--jobs` flag to `${NUM_JOBS}` because they are passed to `emerge`.
On the other hand, we drop the `--jobs` parameter from the
`UPDATE_ARGS` variable, because this variable is passed to `setup_board`
or `update_chroot`, which no longer have this flag.
The script needs to be ported because it imports portage code
which became Python 3 only.
The porting I did is likely a lousy job, but at least it stopped
failing with some p(yt)hony errors.
I have no idea how this thing worked before - the repos were never in
/usr/portage nor in /usr/local/portage… But the newer version of
portage seems to be pretty picky about the validity of the repo locations,
so fix them.
When a license file is newly added, portage may not yet have it in the
shared folder and the license inclusion step fails.
Fall back to the source repositories and look for the license file
there, too. Print a warning if not found instead of failing to build.
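A sketch of the fallback (the directory list and variable names are illustrative):
```
# Look in the shared portage licenses folder first, then in the source repos;
# warn instead of failing if the license is nowhere to be found.
license_path=""
for dir in "${PORTAGE_LICENSES_DIR}" \
           /mnt/host/source/src/third_party/portage-stable/licenses \
           /mnt/host/source/src/third_party/coreos-overlay/licenses; do
    if [[ -f "${dir}/${license_name}" ]]; then
        license_path="${dir}/${license_name}"
        break
    fi
done
if [[ -z "${license_path}" ]]; then
    echo "Warning: license '${license_name}' not found, skipping" >&2
fi
```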
The build_image script invokes the create_dev_container function and
passes `FLAGS_group` as a parameter. Use that parameter to generate the
binhost URL instead of DEFAULT_GROUP, which always stays as developer.
Fixes: kinvolk/Flatcar#298
Signed-off-by: Sayan Chowdhury <sayan@kinvolk.io>
The kernel source tree started to include a broken link,
`tools/testing/selftests/powerpc/copyloops/memcpy_mcsafe_64.S`,
especially with kernel 5.8.18, for example:
```
broken link: /usr/src/linux-5.8.18-coreos/tools/testing/selftests/powerpc/copyloops/memcpy_mcsafe_64.S
ERROR build_packages: test_image_content: Failed symlink check
```
Ignore this symlink when checking for broken symlinks.
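A sketch of the adjusted check, assuming it is based on find:
```
# Report dangling symlinks, but skip the known-broken kernel selftest link.
find "${ROOT_FS_DIR}" -xtype l \
    ! -path '*/tools/testing/selftests/powerpc/copyloops/memcpy_mcsafe_64.S'
```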
Setting invalid CCACHE_ variables resulted in strange failures in
projects depending on newer meson versions like 0.55.3. For example,
the systemd build fails with errors like:
```
* ACCESS DENIED: utimes: /mnt/host/source/ccache
* ACCESS DENIED: utimes: /mnt/host/source/ccache
F: utimes
S: deny
P: /mnt/host/source/ccache
A: /mnt/host/source/ccache
R: /mnt/host/source/ccache
C: ccache cc /build/amd64-usr/var/tmp/portage/sys-apps/systemd-246/work/systemd-246-abi_x86_64.amd64/meson-private/sanitycheckc.c -o /build/amd64-usr/var/tmp/portage/sys-apps/systemd-246/work/systemd-246-abi_x86_64.amd64/meson-private/sanitycheckc.exe -O1 -pipe -pipe -D_FILE_OFFSET_BITS=64
```
We should not set up ccache at all, as it has already been disabled in
the coreos-overlay repo.
The SDK now includes a Rust version with the aarch64 cross-compilation
libraries and the toolchain job doesn't build it anymore. Yet it was
still recompiled because the path had changed.
Remove the adjustment of the download URL and any automatic building
of Rust. Just issue a warning so that any problem can be spotted easily.
This change does not affect SDK bootstrapping (full or just stage4)
but affects ./build_packages and the toolchains job. For the toolchains
job the crossdev setup is missing anyway, so rebuilding wouldn't help,
only downloading would; and since stage4 has no binary package URLs at
all, it's best to remove this step. If it is needed later, the warning
will help.
The default update seed command only specifies gcc, which leads to
an error because »The short ebuild name "gcc" is ambiguous«.
Choose the standard package name instead of the cross-compiler packages,
which are only known to emerge because we now build them as part of an
SDK release.
The VM hardware and OS type versions were outdated and resulted in
features not being available by default.
Choose a newer ESXi host version (requires 6.5) and set the guest
OS type to Linux 3.x 64 bit.
Before, we were relying on the toolchains job to build and upload
packages that were part of the SDK. With this change, all packages that
should be part of the SDK are built and uploaded by the SDK job. The
toolchains job only builds toolchain packages specific to the release.
This change includes several adjustments done to both the SDK and the
toolchains jobs to make this work:
* Make the SDK job build all cross toolchains, including Rust
* Stop building Rust in the toolchains job and use the one in the SDK
instead.
* In toolchain_util.sh: detect when the symlink folder for crossdev
packages is missing and run crossdev to create it during
update_chroot setup.
* Make it possible to build the SDK starting from stage 4 instead of
stage 1, to make the SDK building faster for PR branches / nightlies
(full build should still be done for releases / weeklies).
For some Unicode characters in ca-certificates file names, "rev" complains
about an "invalid or incomplete multibyte or wide character"
and gives no output.
Filter out any unexpected characters for "rev" and replace them with "?"
so that "ls some?name" will still resolve the original name.
Toolchain utils installed only `dev-lang/rust`. This could result
in a version mismatch between `virtual/rust` and `dev-lang/rust`, because
`dev-lang/rust` does not automatically pull in `virtual/rust`.
So install `virtual/rust` instead of `dev-lang/rust`.
Install `virtual/rust` to avoid version conflicts that happen when the
Rust version in the SDK differs from the one in the new ebuilds.
`/usr/share/catalyst/targets/stage1/stage1-chroot.sh` installs gcc and
its dependencies, including `dev-lang/rust`, while `virtual/rust` does
not get updated. That results in version conflicts between
`virtual/rust` and `dev-lang/rust`. To avoid this issue, we should
also update `virtual/rust` when building stage1. Since `virtual/rust`
automatically pulls in `dev-lang/rust`, we do not need to explicitly
specify `dev-lang/rust` here.
The license JSON file only included the package names but no
other metadata. Also, since the file was not on the image itself,
it had to be downloaded.
Add more metadata to the license JSON and store it on the image.
systemd and sudo are already fixed. Git was fixed by updating to 2.23.2,
not 2.24.1. Samba is 2 years old and customized, thus difficult to update.
file, Python, and gdb are only in the SDK.
If a user or old software created the flag file at the old CoreOS location,
nothing would happen.
Check the old location, too, so that Ignition is rerun.
The dev build SDKs are not in $FLATCAR_DEV_BUILDS/sdk but published under
$FLATCAR_DEV_BUILDS/developer/sdk.
Add an environment variable to specify where the SDK is to be found
but default to $FLATCAR_DEV_BUILDS/sdk if it is not specified.
From Jenkins this variable is exported as DOWNLOAD_ROOT_SDK.
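A minimal sketch of the default handling:
```
# Use the Jenkins-provided location if set, otherwise the standard SDK path.
DOWNLOAD_ROOT_SDK="${DOWNLOAD_ROOT_SDK:-${FLATCAR_DEV_BUILDS}/sdk}"
```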
Two Flatcar versions were used in /etc/portage/make.conf both in the SDK
and in the boards.
Use only a single version by default to get the expected results and not
something else when using binary packages.
The Rust crossdev package was never uploaded to /sdk/ and always
had to be compiled again.
Upload it in a separate toolchain-arm64 directory because /Packages in /crossdev/
doesn't refer to the Rust package and its use flags.
The BINHOST was still configured to be the CoreOS CL upstream location
which does not work for independent Flatcar CL releases. This broke
binary package installation in the development container.
Use the correct BINHOST to fix installation of binary packages in the
development container.
The configuration variables for the Ignition configuration also serve as
data source for coreos-cloudinit config data (which includes plain scripts).
Document them properly and also call out that the networking variables only
work if coreos-cloudinit data is used.
For some use cases, too few networking variables were available. Add secondary
routing variables for the main network interface and add a second interface.
There was a logical mistake in Ignition that caused ignition.config.*
to work only when it was part of the ovfenv. Thus they were added, but
the old CoreOS variables were marked deprecated and kept. With both as
OVF variables each of them worked, but directly specifying
ignition.config.* as a guest variable still didn't because of the
logical mistake.
Now there is a fix and both work well when specified directly as guest
variable (https://github.com/flatcar-linux/ignition/pull/11).
Delete the old CoreOS OVF variables because they just clutter the UI
and only the Ignition variables should be used in the UI.
Write out an iPXE script file for Packet.
The script uses relative URLs to refer to
the other PXE files and thus can be copied
along with the files to any server.
This is useful because it saves the creation
of an iPXE script for a release/channel on a
third-party service. For CI testing it is
also helpful because the script not only ends
up on the release server but also already lands
in the Google buckets, referring to unpublished
PXE payloads.
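A sketch of what the generated script could look like (file names and kernel arguments are illustrative):
```
cat > flatcar_production_packet.ipxe <<'EOF'
#!ipxe
# Relative URLs: the script works from wherever it is hosted together with
# the PXE kernel and initrd.
kernel flatcar_production_pxe.vmlinuz initrd=flatcar_production_pxe_image.cpio.gz flatcar.first_boot=1
initrd flatcar_production_pxe_image.cpio.gz
boot
EOF
```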
For the Ignition variables to be usable they need to be
specified in the OVF.
Call out that the CoreOS variables are deprecated to
reduce confusion when both are displayed beside each other.
Create a tar ball with the contents of the / and /usr partitions
to be used as follows with systemd-nspawn (via machinectl):
machinectl import-tar flatcar-container.tar.gz flatcar-container
machinectl start flatcar-container
machinectl shell flatcar-container
or with docker by converting it to an OCI image:
docker import -c "CMD /bin/bash" flatcar-container.tar.gz flatcar-container
Since the new "prodtar" command relies on the results of the "prod" command,
it bundles it so that "prod prodtar" and "prodtar" is the same.
When loop device partition nodes aren't cleaned up, building images
will fail with:
mkfs.vfat: Partitions or virtual mappings on device '/dev/loop0', not making filesystem (use -I to override)
Just add the flag unconditionally to work around it.
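With the flag, the call becomes roughly (the device variable is a placeholder):
```
# -I lets mkfs.vfat proceed even though stale partition mappings still exist
# on the loop device.
mkfs.vfat -I "${esp_loop_dev}"
```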
The Perl update will break SDK bootstrapping during seed update, so
disable it again. This can be reverted after bumping the SDK to a
version that includes the new Perl.
The custom sys-firmware/edk2 package has been replaced by Gentoo's
sys-firmware/edk2-ovmf package now that only amd64 is supported.
This partially reverts 1761d9d071.
Since EAPI=7 became supported, portage can no longer use different
ROOT and SYSROOT values. This adjusts the paths so that the first
phase builds cross-toolchains under /usr/${CHOST}, then the native
toolchains are built under /build/${BOARD} (as was being done
previously). Now that the cross-toolchain development files can't
be used when building the native toolchain, the headers and libs
are stupidly copied into the board root to be used and then
overwritten by the board packages as they are built. Since this is
all done in a chroot, these changes shouldn't affect the SDK host.
This will run on ESXi 6.0 and above, and all non-EOL versions of Fusion
and Workstation.
Also enable a few useful VMX features (HPET; CPU and memory hotplug) that
are added by VMware Workstation 14.1.1's Change Hardware Compatibility
wizard. Correspondingly, enable CPU/memory hotplug in the OVF; omit
HPET because there's no obvious way to enable it.
It's been deprecated since QEMU 0.12. Fixes warning on QEMU startup:
qemu-system-x86_64: -net nic,vlan=0,model=virtio: 'vlan' is deprecated. Please use 'netdev' instead.
This is the same story as the others: our images will fail the GLSA
checks as long as we build old Go versions. However, this one will
fail for any version less than 1.10.1 now.
Setting `$VM_CDROM` in the qemu script does not work as expected when
installing Container Linux from the given bootable CDROM image. That's
probably because qemu-system-x86_64 expects another boot option `-boot
order=d` to be able to boot from the given CDROM drive. Let's specify
a `-boot` as well as a `-drive` option for the given CDROM drive.
This is the same case as the previous one. Our Go 1.8 package has
the fix, but none of the older unsupported versions do. Since we
have multiple installed versions and this says anything less than
Go 1.9 is vulnerable, we have to whitelist it until all older
versions of Go are removed from the OS.
It provides no value when it works, and it's randomly causing
failures to build toolchains due to permissions problems after
certain releases. This also requires taking it out of FEATURES in
the portage profile (which is the SDK profile by default).
Test Jenkins runs of the SDK and toolchains jobs both took the same
amount of time as with ccache enabled.
Normally toolchains packages are prevented from upgrading. This
drops that restriction and explicitly removes old versions so that
conflicting tool profiles are not accidentally used.
This reverts commit 20975049b3.
This reverts commit 7f058d61a1.
Reverting because of bug 2284 [1] where grub will sometimes fail due to
memory corruption. This is _not_ the cause of the bug, and the bug can
even be reproduced with this reversion, but it seems to occur less when
not using fat32.
[1] https://github.com/coreos/bugs/issues/2284
mkfs.vfat was defaulting to FAT16 based on the size of the partition.
The UEFI spec (2.7 errata A, section 13.3) implies that only FAT32 is
necessarily supported on the ESP, and we've received a report of
hardware that doesn't recognize FAT16.
We handle Go differently than Gentoo, so our 1.8.4 package includes
the same security fixes. When all packages are built with Go 1.9,
the older Go packages shouldn't be installed anymore, so this line
can be dropped.
This omits the toolchain packages' version-pinning flag for the
binutils package while it is being upgraded. It also removes older
versions installed in parallel that cause unwanted rebuilds.
When stable has the upgraded version, this can be reverted.
This includes the source package of all torcx packages that are
installed on disk, including cases where multiple versions of the
same package are available.
This moves the default symlinking logic into build image as well.
This assumes that a torcx store is available locally with all images
referenced in the torcx manifest.
This is accomplished with a highly-indented double-for-loop, but I think
it's still decently readable.
Torcx is special in that it wishes to be uploaded under a prefixed
directory (torcx), typically wishes to be downloaded from there, but
ultimately wants to be downloaded from a location without that prefix.
In fact, I expect during a normal release process, it will be uploaded
with that prefix to the build bucket, copied without that prefix to the
final bucket (during pre-release), and then finally downloaded without
the prefix.
I think this set of variables ends up being the cleanest way to
represent this complexity.
This just sets the code file size to the var file size, so it gets
zero-padding without having to pipe commands together.
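A sketch of that step (file names are placeholders, assuming GNU truncate):
```
# Set the code file to the var file's size; when growing, the appended bytes
# are zeros, so no piping of dd/cat output is needed.
truncate --reference=vars.fd code.fd
```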
From: David Michael <david.michael@coreos.com>
[Rebased]
Signed-off-by: Geoff Levand <geoff@infradead.org>
This reverts the vagrant image back to using oem-vagrant because we
don't want to break the existing images. It moves the new,
Ignition-powered virtualbox flavor of vagrant into a new image.
This changes the oem-package for vagrant to vagrant-virtualbox,
which uses ignition instead of cloud-config and sets the oem id
to "virtualbox" so that ignition can handle the machine correctly.
This adds the option --torcx_store to specify the path to a
directory containing torcx images to be baked into the OS image. A
blank string can be given instead of a path to restore the previous
behavior and leave an empty vendor store.
The default value is the default path created by build_torcx_store,
which is used when build_packages updates torcx images. This means
that the current pattern "./build_packages && ./build_image prod"
should result in a fully updated OS image with all torcx images
available in the vendor store.
When the OVA template is not being used, the default dhcp value "yes" will
trigger cloud-init to generate a 00-.network file, which will intermittently
break network connectivity. Please see the details here:
https://github.com/coreos/bugs/issues/1802#issuecomment-297847614
When building dev images, the PORTAGE_BINHOST value during build
time is written to the image's make.conf. This breaks the default
binary package setup, since Jenkins is using gs:// URLs for signed
package verification and authenticated downloads, and the make.conf
doesn't inherit the GS_* variables to handle those schemes.
This should be reverted when signed packages are properly supported
by default in the dev images.
This changes the format from:
sys-apps/systemd-212-r8::coreos GPL-2 LGPL-2.1 MIT public-domain
to a JSON structure:
[
  {
    "project": "sys-apps/systemd-212-r8::coreos",
    "license": ["GPL-2", "LGPL-2.1", "MIT", "public-domain"]
  }
]
We don't have to worry about the changing format because the previous
format was never published. This is designed to match the
bill-of-materials [1] format so that it can be consumed by the site.
[1]: https://github.com/coreos/license-bill-of-materials
INFO build_oem_aci: Writing coreos_oem_gce_aci_stage_packages.txt
awk: cmd. line:1: fatal: cannot open file `/build/amd64-usr/var/db/pkg//DEPEND' for reading (No such file or directory)
INFO build_oem_aci: Writing coreos_oem_gce_aci_stage_licenses.txt
awk: cmd. line:1: fatal: cannot open file `/build/amd64-usr/var/db/pkg//DEPEND' for reading (No such file or directory)
We've always generated these license manifests (detailing which ebuilds
are covered by which license), but never published them. This adds these
manifests to the list of published files so that they are publicly
available.
os-release is requested in bug reports, and knowing which board
the problem occurred on is often helpful.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Detect first boot based on the existence of a coreos/first_boot file
in the EFI partition, and set "coreos.first_boot=detected" command line
argument when found. We use "detected" rather than "1" so the initramfs
knows that it should mount the ESP and delete the file. This lets us
defer clearing the first-boot flag until Ignition has run successfully,
without having to change the disk GUID after filesystems are mounted.
Continue detecting the first-boot disk GUID and adding the command-line
argument to randomize it, since we still want unique disk GUIDs
regardless of Ignition.
This is useful for emerges that are meant for incomplete rootfs's, such
as ACI building emerges. There are cases where the #! check is expected
to fail while doing those.
This reverts commit a7ffba9a9f.
The build_image script can build multiple formats. When our
releases and automated builds created developer containers and
production images from the same command, the verity flag would be
disabled while building the container and remain disabled when building
the production image. This resulted in no verity in any of our builds.
Allow "coreos-install -o vmware_raw" to install Container Linux with
the vmware OEM.
Use base DISK_LAYOUT to reduce the minimum disk size.
Fixes coreos/bugs#359.
They're not in the root fs, but they are in the initramfs. Handle this
by augmenting the package list with packages that are both
- build dependencies of coreos-kernel, and
- configured to cause rebuilds of coreos-kernel when their sub-slot
changes.
To clean things up and prepare for arm64 support, move
all the enable_rootfs_verification processing into one
location and add some comments.
Signed-off-by: Geoff Levand <geoff@infradead.org>
To make verity work, both enable_rootfs_verification and enable_verity
need to be set. With only one of them, verity just gets half enabled.
Remove the enable_verity flag and do the full verity setup when
enable_rootfs_verification is set.
Signed-off-by: Geoff Levand <geoff@infradead.org>
The disable_read_write variable was just a copy of FLAGS_enable_rootfs_verification,
so to make things less confusing just use FLAGS_enable_rootfs_verification.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Add a new grub variable, extra_options, whose contents are
added to the linux command line. Use extra_options to set
the ACPI options needed for arm64.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Two new image types have been added:
1. parallels - this produces VM images with extension pvm.tgz that can be loaded directly into Parallels Desktop
2. vagrant_parallels - this produces a Vagrant box that works with the Parallels Vagrant provider (http://parallels.github.io/vagrant-parallels/)
Just like vmdk and others we rely on qemu-img to convert raw images. Support for Parallels disk images was added to qemu-img in version 2.4.
I also removed the box files from the actual image since they are not needed in /usr/share/oem.
Signed-off-by: Bassam Tabbara <bassam.tabbara@quantum.com>
The ACI root is created by reusing the create_prod_image function
to install a base meta-package. It then runs a script to customize
the file structure as required by agent software (if necessary),
writes a manifest file from a supplied template, and then packages
it all into a tar file.
The Xen loader in GRUB never received support for our hacky scheme of
adding the verity hash to the kernel cmdline. Disable till that's fixed.
Partially reverts 2016567 and 533b1b9.
Consolidates two very similar flags into one and fixes an issue where
verity could get enabled in the GRUB config when rootfs verification was
turned off (e.g. on arm64, which cannot use verity yet).
Workaround for bootstrap_sdk on an Ubuntu host where /dev/shm is a
symlink to /run/shm. Since we mount the host's /dev (for losetup), this
interferes with building Python 2.7. The workaround is to disable
/dev/shm during Python builds. A longer-term fix would be to not mount
the host's /dev. Thanks to marineam for suggesting the fix on IRC.
If the gptprio.next command fails to give us something to boot, we
shouldn't try! In order to diagnose why the failure happened, halt
immediately so the user can see the error message.
Once we've built the packages, verify against the Gentoo Linux Security
Advisories to ensure that we're not shipping anything with known
vulnerabilities.
Instead of patching portage to support the `disabled` flag now we just
patch it to leave the `[gentoo]` section out of the default repos.conf.
Follow up to 585275b268
PROD_IMAGE is a flag that indicates a production image should be
built, and will be set for dev builds if the user specifies that
both dev and prod images should be built. build_image was
incorrectly using the PROD_IMAGE variable to conditionally do some
setup depending on the image type.
Add a new variable IMAGE_BUILD_TYPE that can be tested for the type
of image currently being built and replace the PROD_IMAGE usage.
Signed-off-by: Geoff Levand <geoff@infradead.org>
A bunch of packages install PAM configuration fragments in /etc. Rather than
modify them all to install into /usr/lib, just move the entire directory at
image build time.
We need to ship some PCR measurements alongside images in order to make it
easier for admins to provide an appropriate policy. Add some tooling to
generate the appropriate hashes during build, pack those into a zip file
and upload it.
profile is already set up to source /usr/share/baselayout/profile.env
but it never has because I forgot to add this line during the migration
to amd64-usr images. Sure took us a while to notice that one... :(
This resolves two issues:
- Large dependencies are *never* built during image_to_vm,
build_packages must now handle that.
- Since build_packages can't reasonably do the oem-* packages (they all
conflict with each other) we do want to build them from the ebuild.
This is now enforced so an old binpkg is never used. This resolves
confusing issues people have always had when editing oem
ebuilds but getting a stale build instead.
Allows build_image to be used without first running build_packages.
Note: setup_board --force is required before build_packages will work
properly after doing this since baselayout won't be installed otherwise.
- May be sourced early, so explicitly die if source fails.
- Add a function for getting the latest version of a package.
- Read PROVIDES metadata using portageq, enabling data to be read from
binary packages in addition to installed packages. The performance
hit is not an issue here, and this is needed to support empty build roots.
Most VM images have an expanded root partition to make them practical to
use as-is. Some deployments may not want such a large root, putting most
storage on other volumes.
This variable was semi-deprecated ages ago so `version.txt` could follow
a similar variable naming pattern to `os-release`. Finally drop usage of
it here in favor of `$COREOS_VERSION`.
The one-liner `[[ -z ${PIPESTATUS[*]#0} ]]` no longer works because the
expansion still includes spaces even if all the values are zero. Somehow
that didn't matter in bash 4.2, but it does matter in 4.3, to be consistent
with the general behavior of variables in [[ tests.
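An equivalent check that works on both bash versions could look like this (sketch only):
```
# Copy PIPESTATUS immediately (the next command resets it), then fail if any
# stage of the pipeline returned non-zero.
pipe_status=("${PIPESTATUS[@]}")
for s in "${pipe_status[@]}"; do
    [[ ${s} -eq 0 ]] || exit 1
done
```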
The console often contains very useful information in the event of a
hard crash; in such situations there's no way to unblank the console
via keypress because the kernel won't handle the interrupt.
Since CoreOS is a server/cluster operating system, there won't generally
be monitors connected benefitting from a blanked console. Disabling the
blanking altogether allows the frame buffer contents to always be
visible, even when the kernel can't handle keypresses.
coreos.first_boot=1 will no longer trigger disk-guid randomization, so
manual ignition triggers in diskless/pxe scenarios may succeed. Instead
we explicitly request the randomization when first_boot=1 was added by
grub finding the 00000000-0000-0000-0000-000000000001 disk-guid.
In order to boot properly we need `rootflags=rw mount.usrflags=ro` on
the command line. These have been built into the kernel directly, but for
arm64 builds the built-in options seem to be ignored.
Add the necessary variables in grub.cfg and populate the EFI
partition with arm64 efi executable and modules.
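A sketch of the grub.cfg addition (the variable name and file path are assumptions; the flags come from the text above):
```
# Ensure / is mounted read-write and /usr read-only even when the kernel's
# built-in defaults are ignored.
cat >> "${ESP_DIR}/boot/grub/grub.cfg" <<'EOF'
set linux_append="rootflags=rw mount.usrflags=ro"
EOF
```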
Signed-off-by: Andrej Rosano <andrej@inversepath.com>
Now that ccache is turned on by default in the profile, portage complains
a lot if ccache isn't actually installed, sleeping 5 seconds for each
error message. Since pkgcache is in use, ccache isn't going to make that
much of a difference but getting rid of those 5-second sleeps will. :)
Failing to explicitly set the selinux policy store to operate on may
result in semodule installing the policy in an incorrect location. Pass
it on the command line in order to avoid this.
ldconfig does not work for non-native arches. Create a new
build_image routine run_ldconfig that uses qemu user emulation
to run the board ldconfig on the board rootfs when the board and
SDK arches are different.
See: http://code.google.com/p/chromium/issues/detail?id=378377
Prior to calling run_ldconfig the board rootfs must have ldconfig
installed. To arrange this, move the call of run_ldconfig to after
the base package install.
Fixes build_image errors like these when building for arm64:
/sbin/ldconfig: /lib64/libXXX is for unknown machine 183.
Signed-off-by: Geoff Levand <geoff@infradead.org>