After the cloud-init package is installed, you will need to run the
"setup-cloud-init" command to prepare the OS for cloud-init use.

This command enables cloud-init's init.d services so that they are run on
future boots/reboots. It also enables eudev's init.d services, as udev is
used by cloud-init both for disk configuration and for persistent network
card naming.
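The enablement described above boils down to a handful of rc-update calls.
The following is an illustrative sketch only - the service names here are
assumptions based on the description above, so run setup-cloud-init itself
rather than relying on this list:

```shell
# Illustrative only - setup-cloud-init does this for you; service names assumed
rc-update add udev sysinit           # eudev: needed for disk configuration
rc-update add udev-trigger sysinit   # and persistent network card naming
rc-update add cloud-init-local boot  # cloud-init's own init.d services,
rc-update add cloud-init default     # run in stages on every boot/reboot
rc-update add cloud-config default
rc-update add cloud-final default
```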



Sudo & Doas
-----------

Cloud-init has always supported 'sudo' when adding users (via user-data).
As Alpine is now moving towards preferring 'doas' rather than 'sudo',
support for 'doas' has been added to the cc_users_groups module.

As a result, the Alpine cloud-init package no longer declares a dependency
on sudo - you must install either the 'doas' or 'sudo' package (or indeed
both), depending on which you wish to use, in order to be able to create
users that can run commands as a privileged user.
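For example, user-data can grant privileged access via either tool. This
snippet is illustrative only - the user names are placeholders and the
'sudo' rule string is the common cloud-init form:

```yaml
#cloud-config
users:
  - name: sue
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]   # needs the 'sudo' package installed
  - name: dan
    doas: ["permit nopass dan"]        # needs the 'doas' package installed
```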



Doas
----

The cloud-init support for Doas has not (yet) been upstreamed; it exists
only in the Alpine cloud-init package. This functionality performs some
basic sanity checks of per-user Doas rules: it verifies that the user
referred to in each rule matches the user the rule is defined for. If not,
an error appears during first boot and no Doas rules are added for that
user.

It is recommended that you set up a Doas rule: "permit persist :wheel" so
that all members of group 'wheel' can have root access. The default
/etc/doas.d/doas.conf file has such a rule but it is commented out. The
cloud-init global configuration file /etc/cloud/cloud.cfg defines the default
'alpine' user to be created upon first boot with a Doas rule giving root
access.

To set up Doas rules for additional users, both existing and new, add a
'doas' entry in the user-data defining one or more rules for each user,
for example:

  users:
    - default
    - name: tester
      doas: ["permit tester as root"]

When cloud-init runs on first boot it creates the file
/etc/doas.d/cloud-init.conf containing all the per-user rules specified in
user-data, as well as the rule for the default 'alpine' user.
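With the user-data above, the generated /etc/doas.d/cloud-init.conf would
contain something like the following. The exact rule for the default user
depends on /etc/cloud/cloud.cfg; the 'alpine' line shown is an assumption:

```text
permit nopass alpine
permit tester as root
```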



NTP
---

It is recommended that you enable an NTP client on the machine. Cloud-init
supports both Chrony (fully featured) and Busybox's NTP client (a minimal
implementation).

Chrony is the default NTP client in cloud-init for Alpine Linux.


To use Chrony as the NTP client:

  Install the chrony package and enable the chrony init.d service

    # apk add chrony
    # rc-update add chronyd default

  Specify an ntp section in your cloud-init User Data like so:

    ntp:
      pool:
        - 0.uk.pool.ntp.org
        - 1.uk.pool.ntp.org

  If you do not specify any pool or servers then the pools 0.pool.ntp.org
  through 3.pool.ntp.org will be used.

  The file /etc/cloud/templates/chrony.conf.alpine.tmpl is used by cloud-init
  as a template to create the configuration file /etc/chrony/chrony.conf.
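With the example pools above, the rendered /etc/chrony/chrony.conf would
contain lines along these lines (illustrative - the exact directives and
options come from the template):

```text
pool 0.uk.pool.ntp.org iburst
pool 1.uk.pool.ntp.org iburst
```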


To use Busybox as the NTP client:


  Edit the /etc/conf.d/ntpd file and change the line:

    NTPD_OPTS="-N -p pool.ntp.org"

  so that it is instead:

    NTPD_OPTS="-N"

  This changes the NTP client from using the hardcoded NTP server
  "pool.ntp.org" to instead use the /etc/ntp.conf file which will be
  generated by cloud-init upon first boot.

  Enable the ntpd init.d service:

    # rc-update add ntpd default

  Specify an ntp section in your cloud-init User Data like so:

    ntp:
      ntp_client: ntp
      servers:
        - 192.168.0.1
        - 192.168.0.2

  If you do not specify any servers then the pools 0.pool.ntp.org through
  3.pool.ntp.org will be used.

  The file /etc/cloud/templates/ntp.conf.alpine.tmpl is used by cloud-init
  as a template to create the configuration file /etc/ntp.conf.
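With the example servers above, the rendered /etc/ntp.conf would simply
list the servers (illustrative):

```text
server 192.168.0.1
server 192.168.0.2
```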



Network interface hotplugging
-----------------------------

Version 21.3 of cloud-init added some support for network interface
hotplugging, that is, the addition or removal of network interfaces on an
already-running machine.

A simple daemon (cloud-init-hotplugd) runs at boot time, listening for udev
hotplug events and triggering cloud-init to act on them.

This daemon, via its init.d script, is *not* at present enabled by the
setup-cloud-init script as hotplug is currently *only* supported by the
ConfigDrive, EC2, and OpenStack DataSources.

In order to make use of network hotplug you will need to do the following
*two* things:

- firstly, add the /etc/init.d/cloud-init-hotplugd script to the "default"
run-level, i.e.

	rc-update add cloud-init-hotplugd default

- secondly, enable hotplug for the relevant DataSource by adding the following
to either the /etc/cloud/cloud.cfg file or else to the supplied user-data:

	updates:
	  network:
	    when: ['boot','hotplug']
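When supplying this via user-data rather than /etc/cloud/cloud.cfg,
remember that user-data needs the #cloud-config header; a minimal complete
user-data would be:

```yaml
#cloud-config
updates:
  network:
    when: ['boot', 'hotplug']
```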



Known Issues
============


Unable to SSH in as user(s) as the account(s) is/are locked
-----------------------------------------------------------

Issue: by default cloud-init ensures that any user accounts it creates have
their password locked. When PAM is not enabled, the OpenSSH sshd daemon
*also* refuses key-based SSH logins for accounts with locked passwords.

Solution: install the openssh-server-pam package (rather than
openssh-server) and edit /etc/ssh/sshd_config to ensure that it defines
"UsePAM yes".

In the future this package may add "openssh-server-pam" as a dependency,
but as some individuals may wish to use cloud-init without any SSH daemon
installed, that decision has not yet been made.


Missing dependencies
--------------------

The cloud-init package declares dependencies for only the commonly used
cloud-init modules - if dependencies for all supported modules were defined
then the dependency list would be quite large.

As a result when building cloud-init based disk images you may need to
manually install some packages required by some cloud-init modules.

The following modules should work, in general, with the defined dependencies:

	cc_bootcmd
	cc_ca_certs
	cc_debug
	cc_disable_ec2_metadata
	cc_disk_setup
	cc_final_message
	cc_growpart
	cc_keys_to_console
	cc_locale
	cc_migrator
	cc_mounts
	cc_package_update_upgrade_install
	cc_phone_home
	cc_power_state_change
	cc_resizefs
	cc_resolv_conf
	cc_rsyslog
	cc_runcmd
	cc_scripts_per_boot
	cc_scripts_per_instance
	cc_scripts_per_once
	cc_scripts_user
	cc_scripts_vendor
	cc_seed_random
	cc_set_hostname
	cc_set_passwords
	cc_ssh
	cc_ssh_authkey_fingerprints
	cc_timezone
	cc_update_etc_hosts
	cc_update_hostname
	cc_users_groups
	cc_write_files

If you want to delete existing partitions using cc_disk_setup then you will
need to install the Alpine "wipefs" package.

If you want to create/resize filesystems using cc_disk_setup and/or
cc_resizefs then you will need to install the relevant package(s) containing
the appropriate tools:

	BTRFS:		btrfs-progs
	EXT2/3/4:	e2fsprogs-extra
	F2FS:		f2fs-tools
	LVM:		lvm2
	XFS:		xfsprogs and xfsprogs-extra
	ZFS:		zfs
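For example, user-data using cc_disk_setup to partition a disk and create
an ext4 filesystem (which requires e2fsprogs-extra per the table above; the
device name /dev/vdb is illustrative):

```yaml
#cloud-config
disk_setup:
  /dev/vdb:
    table_type: mbr
    layout: true
    overwrite: false
fs_setup:
  - label: data
    filesystem: ext4
    device: /dev/vdb
    partition: auto
```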


cc_ca_certs module
------------------

The remove-defaults option of the cloud-init cc_ca_certs module does not
currently work correctly. This option deletes certificates installed by the
Alpine ca-certificates package as expected. However, the certificates
provided by the ca-certificates-bundle package, which is always automatically
installed in an Alpine system as it is a dependency of a base package, are
not deleted.


Using ISO images for cloud-init configuration (i.e. with NoCloud/ConfigDrive)
-----------------------------------------------------------------------------

With the removal of the util-linux dependency from the Alpine cloud-init
package, the "mount" command provided by Busybox is used instead.

Cloud-init uses the mount command's "-t auto" option to mount a filesystem
containing cloud-init configuration data (detected by searching for a
filesystem with the label "cidata"). Busybox's mount behaves differently
from util-linux's when "-t auto" is used: if the kernel module for the
required filesystem is not already loaded, util-linux's mount will trigger
it to be loaded and so the mount will succeed, whereas Busybox's mount will
not normally trigger a kernel module load and so the mount will fail!

When this problem occurs the following will be displayed on the console
during boot:

  util.py[WARNING]: Failed to mount /dev/vdb when looking for data

If cloud-init debugging is enabled then the file /var/log/cloud-init.log will
also contain the following entries:

  subp.py[DEBUG]: Running command ['mount', '-o', 'ro', '-t', 'auto',
  '/dev/vdb', '/run/cloud-init/tmp/tmpAbCdEf'] with allowed return codes [0]
  (shell=False, capture=True)
  util.py[DEBUG]: Failed mount of '/dev/vdb' as 'auto': Unexpected error
  while running command.
  Command: ['mount', '-o', 'ro', '-t', 'auto', '/dev/vdb',
  '/run/cloud-init/tmp/tmpAbCdEf']
  Exit code: 255
  Reason: -
  Stdout:
  Stderr: mount: mounting /dev/vdb on /run/cloud-init/tmp/tmpAbCdEf failed:
  invalid argument

There are two possible solutions to this issue, either:

(1) Install the util-linux package into the Alpine image used with
cloud-init.

or:

(2) Create (or modify) the file /etc/filesystems and ensure it contains a
line with the name of the required kernel module for the relevant
filesystem, i.e. "iso9660". This ensures that Busybox's mount will trigger
the loading of this kernel module.
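For option (2), the edit can be made idempotent with a small helper. This
is a sketch: the function name is made up, and the default path follows the
/etc/filesystems convention described in mount(8):

```shell
# Append a filesystem/module name to the list Busybox mount consults,
# unless it is already present (idempotent).
add_filesystem() {
    fs_name="$1"
    fs_file="${2:-/etc/filesystems}"   # second argument mainly for testing
    touch "$fs_file"
    grep -qx "$fs_name" "$fs_file" || echo "$fs_name" >> "$fs_file"
}

# e.g.: add_filesystem iso9660
```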


CloudSigma and SmartOS data sources
-----------------------------------

If you are using either the CloudSigma or SmartOS/Joyent Cloud data sources
then you will need to install the Alpine py3-pyserial package. This was removed
as a cloud-init (hard) dependency as it is only used by these two uncommon
Data Sources.


MAAS data source
----------------

If you are using the MAAS data source then you will need to install the
Alpine py3-oauthlib package. This was removed as a cloud-init (hard)
dependency as it is only used by the MAAS Data Source.