A subsequent commit will use this to implement the help command.
Just like in the existing POSIX shell implementation, the standard
error needs to be redirected to standard output. For details see commit
5e63e9ec9b.
https://github.com/containers/toolbox/pull/318
Meson doesn't support Go [1], so this is implemented by a custom target
that invokes 'go build' to generate the binary.
Unfortunately, when using Go modules, 'go build' insists on being
invoked in the same source directory where the go.mod file lives,
while Meson insists on using a build directory separate from the
corresponding source directory. This is addressed by using a build
script that goes into the source directory and then invokes 'go build'.
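As an illustration, such a wrapper could be as simple as this (the
argument convention here is hypothetical, not the actual script):

    #!/bin/sh
    # $1: source directory containing go.mod; $2: output path
    cd "$1" || exit 1
    exec go build -o "$2"

Meson would then invoke it from the custom target with the source
directory and the desired output path as arguments.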
Currently, the Go code is only built when a Go implementation is found,
and even then, it's not installed. Non-technical end-users are supposed
to continue using the POSIX shell implementation until the Go version
is blessed as stable.
[1] https://github.com/mesonbuild/meson/issues/123
https://github.com/containers/toolbox/pull/318
This fixes the following build failure:
atomic_reactor.util - Package chkconfig available, but not installed.
atomic_reactor.util - No match for argument: chkconfig
atomic_reactor.util - Package dbus-daemon available, but not installed.
atomic_reactor.util - No match for argument: dbus-daemon
atomic_reactor.util - Package rpm-plugin-systemd-inhibit available, but not installed.
atomic_reactor.util - No match for argument: rpm-plugin-systemd-inhibit
...
...
...
atomic_reactor.util - ERROR - {'errorDetail': {'code': 143, 'message': "The command '/bin/sh -c dnf -y reinstall $(<missing-docs)' returned a non-zero code: 143"}, 'error': "The command '/bin/sh -c dnf -y reinstall $(<missing-docs)' returned a non-zero code: 143"}
Current Rawhide is actually version 33, so the appropriate image should
be pre-pulled. The tests were failing because an image for an older
version of Fedora was being pulled.
The tests introduced by commit b5cdc57ae3 have proven to be
rather unstable due to mistakes in their design. The tests were quite
chaotically structured, and because of that images were deleted and
pulled too often, causing several false positives [1, 2].
This changes the structure of the tests in a major way. The tests (or
rather, the commands they run) are now executed in a manner that better
simulates the way Toolbox is actually used: starting from a clean
state, then creating containers, using them, and finally deleting them.
This should reduce the strain on bandwidth and possibly even speed up
the tests themselves.
[1] https://github.com/containers/toolbox/pull/372
[2] https://github.com/containers/toolbox/pull/374
https://github.com/containers/toolbox/pull/375
This change adds a pre-run task to pull the fedora-toolbox images from
the registry to reduce the number of false positives caused by
'podman pull' failing to download them during the actual test.
Each section needs a separate playbook because they use different
versions of Fedora, and hence different default images.
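The pull itself amounts to a single command per image, along these
lines (image shown for Fedora 31, as an illustration):

$ podman pull registry.fedoraproject.org/f31/fedora-toolbox:31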
https://github.com/containers/toolbox/pull/375
This adds several .yaml files that specify jobs (those in folder
playbooks) and one that serves as the main config (.zuul.yaml).
Tests and builds are currently executed on every change in PRs (i.e.,
check and gating) and periodically (according to the documentation,
this pipeline should be run at least once a day).
There are 4 tests in total:
1. 'ninja test' - does the same thing that Travis did
2. Fedora 30 - runs the system tests with current Podman and Toolbox
in Fedora 30
3. Fedora 31 - the same but for Fedora 31
4. Fedora Rawhide - the same but for Fedora Rawhide
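For orientation, a Zuul configuration of this shape looks roughly like
this (job and playbook names are illustrative, and the pipeline names
depend on the Zuul deployment):

    - job:
        name: unit-test
        run: playbooks/unit-test.yaml

    - project:
        check:
          jobs:
            - unit-test
        gate:
          jobs:
            - unit-test
        periodic:
          jobs:
            - unit-test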
https://github.com/containers/toolbox/issues/68
These tests are written using BATS (Bash Automated Testing System). I
used a very helpful helpers.bash script from the libpod project (Thank
you!) that I tweaked slightly.
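For a flavor of what these tests look like, a minimal BATS test might
be (names and options here are illustrative, not the actual tests):

    #!/usr/bin/env bats

    load helpers

    @test "create the default toolbox container" {
      run toolbox -y create
      [ "$status" -eq 0 ]
    }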
https://github.com/containers/toolbox/issues/68
/usr/share/profile.d is the default location where toolbox.sh is
installed, even though, in practice, most (all?) distributions use
/etc/profile.d. It's reasonable to at least make the code work with the
default build values.
https://github.com/containers/toolbox/pull/362
/sys/fs/selinux is only present when SELinux is 'enforcing' or
'permissive'. When it's disabled, /sys/fs/selinux doesn't exist and
sysfs doesn't let you create it either. Therefore, the attempt to wipe
out the toolbox container's /sys/fs/selinux by bind mounting
/usr/share/empty over it fails, and in turn prevents the container from
starting up.
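Conceptually, the failing operation is equivalent to running this
inside the container (error message paraphrased):

$ sudo mount --bind /usr/share/empty /sys/fs/selinux
mount: /sys/fs/selinux: mount point does not exist.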
Fallout from f9cca5719d
https://github.com/containers/toolbox/issues/344
On Silverblue, /mnt is a symbolic link to /var/mnt. Matching the
presence of /var/mnt on the host inside the toolbox container would
make things less confusing for users.
https://github.com/containers/toolbox/issues/92
A subsequent commit will give access to /var/mnt from the host, if
it's present, by bind mounting /run/host/var/mnt at runtime. However,
it turns out that an attempt to non-recursively bind mount it will
error out if the host's /var/mnt already contains a mount point.
On the host:
$ sudo mkdir --parents /var/mnt/tmp
$ sudo mount -t tmpfs none /var/mnt/tmp
Inside the container:
$ sudo mkdir --parents /var/mnt
$ sudo mount --bind -o rslave /run/host/var/mnt /var/mnt
mount: /var/mnt: wrong fs type, bad option, bad superblock on /run/host/var/mnt, missing codepage or helper, or other error.
https://github.com/containers/toolbox/issues/92
This is the second time a Podman regression has caused a selinuxfs
instance to leak into the toolbox container's /sys/fs/selinux,
tricking various components into trying to use SELinux. It might be
better to work around this in Toolbox until the situation in Podman is
figured out.
Based on an idea from Colin Walters.
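Conceptually, the workaround amounts to masking the leaked selinuxfs
instance by bind mounting an empty directory over it, equivalent to:

$ mount --bind /usr/share/empty /sys/fs/selinux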
https://github.com/containers/libpod/issues/4452
Toolbox containers created prior to commit 8b84b5e460 didn't use
'toolbox init-container' as their entry points. This prevents them
from being configured at runtime through the entry points.
Being able to configure a toolbox container at runtime through the
entry point is very handy, as compared to doing it statically via
'podman create', because the configuration doesn't get permanently
baked into the container's definition. Instead, it's codified in
toolbox(1), which can be updated over time, and the container
reconfigured every time it's started.
A deprecation notice is the precursor to actually dropping support for
these old containers in the future.
Preliminary testing suggests that toolbox containers created prior to
commit 8b84b5e460 already don't start on cgroups v2 systems. So, this
is mainly targeted at cgroups v1 users, who are still able to work
with those old containers.
https://github.com/containers/toolbox/pull/336
Otherwise, it would lead to:
$ toolbox run
/usr/bin/toolbox: line 1287: shift: 4: shift count out of range
toolbox: command '' not found in container fedora-toolbox-31
Fallout from 2da4cc4634
https://github.com/containers/toolbox/pull/332
Currently, toolbox(1) offers a --verbose option that only shows debug
information from toolbox(1) itself and the error stream of internal
commands. There's no way to further increase the log level of the
internal commands. It's sometimes very useful to be able to get more
detailed logs from Podman.
This adds a new --very-verbose or -vv option that makes this possible.
This should have been implemented as '--verbose --verbose', which
could be conveniently shortened to '-vv'. This is what flatpak(1)
does. However, due to the lack of built-in command line parsing
facilities in POSIX shell, there's no support for multiple short
options expressed as a single argument, e.g., '-vy' doesn't expand to
'-v -y'.
Therefore, a separate --very-verbose or -vv option was added to make
things convenient for the user. It's expected that most people will
refer to this as -vv.
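To illustrate, the options are parsed by a case-based loop where '-vv'
has to be matched as its own literal string (a simplified sketch, not
the actual code):

    while [ "$#" -gt 0 ]; do
        case $1 in
            -v | --verbose )
                verbose=true
                ;;
            -vv | --very-verbose )
                verbose=true
                very_verbose=true
                ;;
        esac
        shift
    done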
If this option is used, every Podman command in the code is run with
'--log-level debug'. Use it wisely; Podman can be 'very verbose'.
https://github.com/containers/toolbox/pull/289
This makes the following work from inside a toolbox container:
$ logger "syslog: hello world"
$ python3 <<< "from systemd import journal; \
journal.send('journal: hello world')"
https://github.com/containers/toolbox/pull/327
It's now possible to use journalctl(1) to query the user's systemd
journal entries from the host. However, messages from other users and
the system aren't shown.
https://github.com/containers/toolbox/pull/327
The machine ID is necessary to query the host operating system's
systemd journal, and currently toolbox containers have an empty
/etc/machine-id file.
Unlike /etc/resolv.conf, the machine ID is supposed to stay constant
once the host is booted. Therefore, it is safe to bind mount
/etc/machine-id from the host, as opposed to using a symbolic link;
because there's no chance of the file getting atomically updated on
the host and diverging from the bind mount due to being allocated a
new inode. Incidentally, this is also what Flatpak does.
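In terms of 'podman create', the bind mount boils down to a single
flag, along these lines (shown for illustration; the read-only part is
an assumption):

    --volume /etc/machine-id:/etc/machine-id:ro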
A subsequent commit will use this to enable accessing the host's
systemd journal via journalctl(1) inside toolbox containers.
https://github.com/containers/toolbox/pull/327
For what it's worth, this does alter the mount propagation flags by
adding 'slave'.
Earlier with 'podman create --volume ...' it was:
$ findmnt -o OPTIONS,PROPAGATION /run/libvirt
OPTIONS                           PROPAGATION
rw,nosuid,nodev,seclabel,mode=755 private
Now with 'mount --bind ...' it is:
$ findmnt -o OPTIONS,PROPAGATION /run/libvirt
OPTIONS              PROPAGATION
ro,relatime,seclabel private,slave
This difference was ignored because it doesn't appear to cause any
real problem.
https://github.com/containers/toolbox/pull/327
For what it's worth, this does alter the mount propagation flags by
adding 'slave'.
Earlier with 'podman create --volume ...' it was:
$ findmnt -o OPTIONS,PROPAGATION /var/lib/flatpak
OPTIONS              PROPAGATION
ro,relatime,seclabel private
Now with 'mount --bind -o ro ...' it is:
$ findmnt -o OPTIONS,PROPAGATION /var/lib/flatpak
OPTIONS              PROPAGATION
ro,relatime,seclabel private,slave
This difference was ignored because it doesn't appear to cause any
real problem.
https://github.com/containers/toolbox/pull/327
Subsequent commits will use this to perform some of the bind mounts in
the toolbox container's entry point, instead of doing them as part of
'podman create ...'.
Anything that's specified during 'podman create ...' gets statically
baked into the container's configuration, and is either difficult or
impossible to change afterwards. This means that toolbox containers
created with older versions of Toolbox keep diverging from those
created with newer versions, making it complicated to keep older
containers working with a newer Toolbox.
In the case of bind mounts, a good chunk of the host's file hierarchy
is already bind mounted by 'podman create ...' under the toolbox
container's /run/host. Therefore, the more granular bind mounts like
$XDG_RUNTIME_DIR and /var/lib/flatpak can be performed by the
container's entry point at runtime using what's already inside
/run/host, reducing the footprint of the static configuration.
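At runtime, each such granular mount then becomes a plain bind mount
from under /run/host, along the lines of (illustrative):

$ mount --bind /run/host/var/lib/flatpak /var/lib/flatpak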
Older containers created with Toolbox 0.0.10 onwards will see two bind
mounts for locations that get moved from 'podman create ...' to the
entry point. The presence of the second mount should be harmless.
Based on an idea from Colin Walters.
https://github.com/containers/toolbox/pull/327
Toolbox containers using runc as their runtime don't work on host
operating systems using cgroups v2. They need to be migrated to crun.
'podman start' throws a specific error for such containers:
ERRO[0000]: oci runtime "runc" does not support CGroups V2: use system migrate to mitigate
Error: unable to start container "fedora-toolbox-30": this version of runc doesn't work on cgroups v2: OCI runtime error
This error is identified by the phrase "use system migrate to mitigate"
to avoid encoding any assumptions about updating from cgroups v1 to v2
or downgrading in the other direction.
If the migration fails, 'toolbox reset' is suggested as the last hope.
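A simplified sketch of the detection (the actual code is more careful
about capturing and reporting the error; the variables here are
hypothetical):

    if ! podman start "$container" >/dev/null 2>"$error_log"; then
        if grep -q "use system migrate to mitigate" "$error_log"; then
            podman system migrate
        fi
    fi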
https://github.com/containers/toolbox/pull/309
A subsequent commit will leverage this to detect 'podman start'
failures caused by attempting to run runc-based toolbox containers on
cgroups v2 systems, and try to migrate them if possible.
https://github.com/containers/toolbox/pull/309
Asking users for their Podman version is one of the most common steps
when handling support questions. So it can't hurt to have it in the
debug output, especially when the version is already being read to
decide whether migration is necessary or not.
https://github.com/containers/toolbox/pull/309