System Management

systemd and containers and chaos

We were fated to pretend

User and Group Permissions

Operating system users and groups are a tool to minimize access. Follow the principle of least privilege to protect against attacks (including accidental friendly fire).

# useradd -m dojo
# passwd dojo

add a dojo user

$ cat /etc/passwd

list users on system

$ groups

list groups current user is in, or pass a username

# getent group $GROUP

list members of a group

$ cat /etc/group

list all groups

There is a concept of a “system” user which has no login shell. Its only purpose is to run daemons. This actually fits most of my use cases, but I like being able to hop into a user on the shell, so I rarely use it.
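
For reference, a system user can be created with the -r/--system flag; the bitcoin name and paths here are just an illustration.

# useradd -r -m -d /var/lib/bitcoin -s /usr/bin/nologin bitcoin

add a system user with no login shell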

# userdel -r $USERNAME

remove user, r option cleans up home directory, removes matching group too

# groupdel $GROUP

remove group

  • An interesting blog post from Drew DeVault exposed me to how commands can automatically run as a different user or group.

Systemd

I have tried a lot of tools to manage the ever-growing chaos of my homeservers. Something like ansible still feels like too much overhead for me. I have settled on just manually creating system users (e.g. bitcoin for running a full node) and keeping the complexity of the app localized as much as possible (e.g. building an executable in the home directory). This utilizes the OS’s built-in user/group permissions model and makes it pretty easy to follow the principle of least privilege (e.g. only services that need access to Tor have access).

I use systemd to automate starting/stopping/triggering services and managing the service dependencies.

environment

There are a lot of ways to set environment variables: shell configs, PAM, systemd. It gets confusing knowing when vars are actually set and how to pass them from a system process to a user process and vice versa.

Per-user is where it gets tricky.

I have used PAM before, but according to the docs it’s on its way out.

Systemd is probably the safest bet, with ~/.config/environment.d/*.conf, but these are only loaded for user services, not shell programs (unless they are somehow service based).
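
A sketch of what one of these files looks like (the file name is arbitrary):

# ~/.config/environment.d/50-defaults.conf
EDITOR=helix
PATH=$HOME/.local/bin:$PATH

simple KEY=VALUE lines, with basic $VAR expansion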

  • Note that some managers query the systemd user instance for the exported environment and inject this configuration into programs they start, using systemctl show-environment or the underlying D-Bus call.
export $(/usr/lib/systemd/user-environment-generators/30-systemd-environment-d-generator)

Systemd has a little tool to export vars

personal units

  • system level drop them in /etc/systemd/system
  • user level…haven’t needed to mess much with yet

Sometimes I need to make a change to a unit, but don’t want it to be blown away by pacman on the next update. Systemd has a tool to do just this: systemctl edit, which writes an override file instead of touching the packaged unit. Use systemd-delta to keep track of overrides on the system.

systemctl edit <unit>
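
The command drops an override file next to the packaged unit; a hypothetical override could look like this (nginx is just an example unit):

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
Environment=EXAMPLE_FLAG=1

pacman leaves files under /etc/systemd/system alone, and systemd-delta will list the override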

triggers

Systemd can trigger services like cron. The easiest way to do this is to create a .timer file for a service and then enable/start the timer. Do not enable the service itself; that confuses systemd.

[Unit]
Description=Run BOS node report

[Service]
User=lightning
Group=lightning
ExecStart=/home/lightning/balanceofsatoshis/daily-report.sh

report.service

[Unit]
Description=Run report daily

[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target

report.timer

sudo systemctl enable report.timer
sudo systemctl start report.timer

enable the timer which triggers the service

Another helpful trigger is when a path changes. Think backing up a file every time it is modified. Like timers, only enable the .path and not the service itself.

[Unit]
Description=Backup LND channels on any changes

[Path]
PathModified=/data/lnd/backup/channel.backup

[Install]
WantedBy=multi-user.target

backup.path

sudo systemctl enable backup.path
sudo systemctl start backup.path

enable the path which triggers the service

sandboxing

Turns out the standard systemd examples are not very secure. Systemd provides a tool to see which services are in trouble, systemd-analyze security, and a way to take a deeper dive per-service with systemd-analyze security <service>.

My standard set of hardening flags (which I’ll try to expand as I learn more about them):

# Hardening
PrivateTmp=true
PrivateDevices=true
ProtectSystem=strict
NoNewPrivileges=true
  • PrivateTmp // Processes running with this flag see a different and unique /tmp from the one users and other daemons see or can access. Mitigates other programs reading tmp data.
  • PrivateDevices // Sets up a new /dev/ mount for the executed processes, useful to securely turn off physical device access by the executed process.
  • ProtectSystem // Makes common system directories read-only; programs have no business messing with /boot.
  • NoNewPrivileges // Prevents the service and related child processes from escalating privileges, seems like a reasonable default

I think RuntimeDirectory could be used to auto create and destroy a directory for a process, but requires coordination with the process to write/read to that location.
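
A minimal sketch of what that might look like (myapp is a made-up name):

[Service]
# systemd creates /run/myapp owned by the service user and removes it when the service stops
RuntimeDirectory=myapp
RuntimeDirectoryMode=0750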

email

First requirement is an easy-to-call email script. I am jacking this straight from the Arch wiki with some slight mods for my email setup.

/usr/local/bin/systemd-email

#!/bin/sh
#
# Send alert to my email

/usr/bin/msmtp --read-recipients <<ERRMAIL
To: nick+gemini@yonson.dev
Subject: $1 failure
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8

$(systemctl status --full "$1")
ERRMAIL

/etc/systemd/system/status-email@.service

[Unit]
Description=status email for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/systemd-email %i
User=nobody
Group=systemd-journal

A template unit service with limited permissions to fire the email. Edit the service you want emails for and add OnFailure=status-email@%n.service to the [Unit] (not [Service]) section. %n passes the unit’s name to the template.

  • OnFailure= A space-separated list of one or more units that are activated when this unit enters the “failed” state. A service unit using Restart= enters the failed state only after the start limits are reached.

If using Restart=, a service will only enter the failed state once the start limits are hit, which are controlled by StartLimitIntervalSec= and StartLimitBurst= (newer systemd documents these under [Unit]).

  • Units which are started more than burst times within an interval time span are not permitted to start any more.

Do not have to enable the template service (e.g. status-email@lnd).
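
Putting it together, the drop-in for a watched service might look like this (lnd is just an example unit):

# systemctl edit lnd.service
[Unit]
OnFailure=status-email@%n.service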

containers with systemd

Systemd’s model works great for executables which follow the standard fork/exec process.

A growing portion of the services I am running do not publish an executable, but rather a container. The docker client/server (docker daemon) model takes on some of the same responsibilities as systemd (e.g. restarting a service when it fails). There is not a clear line in the sand.

Since I haven’t gone 100% containers for everything though, I need to declare all the service dependencies in systemd (including containerized services). This works just OK. The main shortcoming is that by default docker client commands are fire-and-forget. This sucks because no info is passed back to systemd; it doesn’t know if the service actually started up correctly and can’t pass that along to dependencies.

Docker commands must always be run by root (or a user in the docker group, which is pretty much the same thing from a security perspective) so we can’t utilize systemd’s ability to execute services as a system user (e.g. bitcoin).

# example systemd service file wrapping docker
[Unit]
Description=Matrix go neb bot
Requires=docker.service
After=docker.service

# docker's `--attach` option forwards signals and stdout/stderr helping pass some info back to systemd
[Service]
ExecStart=docker start -a 80a975d2f9baff82a27edc389bfe2f5a74e597560acc63fb3dfe4a3df07c8797
ExecStop=docker stop 80a975d2f9baff82a27edc389bfe2f5a74e597560acc63fb3dfe4a3df07c8797

[Install]
WantedBy=multi-user.target

Lastly, there is a lot of complexity around user and group permissions between the host and containers. This is most apparent when sharing data between a host and container through a bind mount. In a perfect world from a permissions standpoint, the host’s users and groups would be mirrored into the container and respected there. However, most containers default to running as UID 0 a.k.a. root (note: there is a difference between the uid who starts the container and the uid which internally runs the container, but they can be the same).

Here is where the complexity jumps again: enter user namespaces. User namespaces are the future fix to help the permissions complexity, but you have to either go all in on them or follow the old best practices; they don’t really play nice together.

user and groups old best practices

I have never gone “all in” on containers, but I think this is the decision tree for best practices:

if (all in on containers) {
    use *volumes* which are fully managed by container framework and hide complexities
} else if (mixing host and containers) {
    use bind mounts and manually ensure UID/GID match between host and containers
} else {
    // no state shared between host and container
    don't even worry bout this stuff
}

Case 2 can be complicated by containers which specify a USER in their Dockerfile. On one hand, this is safer than running as root by default. On the other, this makes them less portable since all downstream systems will have to match this UID in order to work with bind mounts.
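
For case 2, a sketch of manually lining things up (the user, UID, paths, and image are all made up):

# find the UID of the host user that owns the data, say it prints 1001
$ id -u bitcoin
# chown the bind-mounted directory to that UID/GID
$ sudo chown -R 1001:1001 /srv/bitcoin-data
# run the container process as the same UID/GID so the bind mount permissions line up
$ docker run --user 1001:1001 -v /srv/bitcoin-data:/data some/image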

I am attempting to bridge these mismatches by switching from docker to podman.

File System

$ df -h

disk free info at a high level

$ du -sh

disk usage, summarized and human readable of PWD

# du -ax / | sort -rn | head -20

track down heavy files and directories

secure backup

I have dealt with quite a few object storage providers professionally. For my day to day home server needs though, all of these were too enterprise. For my projects I use rsync.net.

After setting up an account, rsync.net gives you a host to run unix commands on.

A script to encrypt a file with pgp and sync it to my rsync.net host. No need for the pgp private key to be on the box; the public key can be used to encrypt the file. The private key is only needed if the file ever has to be decrypted in the future.

My standard backup script includes an email for failure notifications.

backup.sh

#!/bin/bash
#
# securely backup a file to rsync.net

RECIPIENT=nick@yonson.dev
FILE=/data/lnd/backup/channel.backup

# --yes overwrites existing
if gpg --yes --recipient "$RECIPIENT" --encrypt "$FILE" &&
  # namespacing backups by host
  rsync --relative "$FILE.gpg" change@change.rsync.net:$HOSTNAME/ ; then
    echo "Successfully backed up $FILE"
else
    echo "Error backing up $FILE"
    printf "Subject: Error rsyncing\n\nUnable to backup %s on %s\n" "$FILE" "$HOSTNAME" | msmtp nick@yonson.dev
fi

Package Management

[options]
IgnorePkg = docker-compose

lock package in pacman.conf

$ pacman -Qqe

list explicitly installed

AUR

$ gpg --keyserver pgp.mit.edu --recv-keys 118759E83439A9B1

get key from a keyserver

$ curl https://raw.githubusercontent.com/lightningnetwork/lnd/master/scripts/keys/bhandras.asc --output key.asc
$ gpg --import key.asc 

download and import public key file to keyring

PKGBUILD

  • dependencies

    • depends – an array of packages that must be installed for the software to build and run. Dependencies defined inside the package() function are only required to run the software.
    • makedepends – only to build
  • only runtime deps?

  • good example: dendrite

aurutils

  • scripts to manage a local ABS (Arch Build System) repository which pacman can use
  • man pages contain examples for common tasks like removing stuff aur-remove
$ aur repo -l

list packages

  • have scripts run as an aur user, which can be allowed to run certain pacman commands as root without a password via /etc/sudoers.d/10_aur

patch

  • ABS docs on patches
  • Want to apply a patch to an AUR package in a maintainable way
    • Can edit repository source under ~/.cache/aurutils/sync/
      • update source hashes with updpkgsums
      • build with aur build -f -d custom
      • change default merge strategy
      • need to see if this breaks aur fetch or at least what is the default
        • git config pull.rebase true – switch to rebase so the temp fix commit stays on top and no merge commits (probably a little dangerous)
        • hard to merge PKGBUILD hashes?

Containers

I am not completely sold on Red Hat’s new container ecosystem of tools (podman, buildah, skopeo…not sure if I love or hate these names yet), but podman has me sold on the fact that it uses the standard fork/exec model instead of client/server allowing it to “click in” to systemd with default settings. podman also runs in rootless mode allowing the OS user permission model to be used again (although I think there are still some complexities here).

At the time of me switching (2021-03-30) the arch wiki on podman listed three requirements to using podman.

  1. kernel.unprivileged_userns_clone kernel param needs to be set to 1
    • all my boxes had this set to 1 already so this was easy
  2. cgroups v2 needs to be enabled
    • this can be checked by running ls /sys/fs/cgroup and seeing a lot of cgroup.* entries
    • systemd v248 defaults to v2, and some proof that the world revolves around me, v248 was released 36 minutes before I wrote this down so we are probably good for the future
  3. Set subuid and subgid
    • these are for the user namespaces, I saw a warning from podman when I didn’t set it

Note that the values for each user must be unique and without any overlap. If there is an overlap, there is a potential for a user to use another’s namespace and they could corrupt it.
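
The format of both files is user:start:count; the values below are just an illustration of non-overlapping ranges.

# /etc/subuid (same idea for /etc/subgid)
njohnson:100000:65536
lightning:165536:65536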

[njohnson@gemini ~]$ podman run docker.io/hello-world
ERRO[0000] cannot find UID/GID for user njohnson: open /etc/subuid: no such file or directory - check rootless mode in man pages.
WARN[0000] using rootless singl

It is now easier to use crun instead of the runc OCI runtime. I don’t really know what any of these words mean. But I ran into an Error: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall bdflush: permission denied: OCI permission denied and this was the fix. Apparently the “lighter” crun is the future. Set it in /etc/containers/containers.conf with runtime = "crun".
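
For reference, the setting lives under the [engine] table:

# /etc/containers/containers.conf
[engine]
runtime = "crun"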

namespace permissions

With user namespaces enabled, the container’s default root UID is mapped to the host UID which started the process (when rootless). Nifty tables here show the differences.

I believe this is the best of both worlds, but it does require that images do not specify a USER (old best practice to not run as root). If USER is used, this will map to one of the sub UIDs of the host’s user namespace instead of the host user (which is root in the container).

A bit more is needed to get group permissions to work correctly. Here is Red Hat’s group permissions deep dive.

$ podman top --latest huser user

podman also gives us a really cool sub-command called top which lets us map the user on the container host to the user in the running container.

$ sysctl kernel.unprivileged_userns_clone

check that the kernel param is set

POSIX vs User namespaces

There are interesting security tradeoffs when it comes to user namespaces, more on that here and here

  • POSIX == users, groups, root escalation, capabilities

Positive: adds a 2nd layer of defense; if an attacker gains root in a container, they are still unprivileged (mapped to non-root) on the host.

Negative: capabilities that a user does not have on the host can suddenly be acquired, in a limited fashion, inside the container.

--userns=keep-id

The container’s root user does have a bit of extra privileges, but nothing that could affect the host. There is a setting to run as a user that matches the host, --userns=keep-id, which would give up these extra privileges. This might just be cosmetic to not see uid 0 in the container…

  • creates the same uid, so if the image already has the uid there could be some confusion
  • doesn’t create a $HOME
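
A sketch of the flag in action, reusing the balanceofsatoshis image that shows up later in these notes:

$ podman run --rm --userns=keep-id -v $HOME/.bos:/home/node/.bos docker.io/alexbosworth/balanceofsatoshis --version

inside the container the process runs with the host user's uid instead of 0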

systemd integration

  • can create container files that podman converts to service units (see the sketch below)
  • run daemon-reload
  • start generated service
  • docs
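
A minimal sketch of such a container file, assuming a podman recent enough to ship Quadlet (all names here are made up):

# ~/.config/containers/systemd/hello.container
[Unit]
Description=Hello container

[Container]
Image=docker.io/library/hello-world

[Install]
WantedBy=default.target

after systemctl --user daemon-reload, the generated hello.service starts like any other unit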

Running podman commands rootless requires systemd login variables to be set correctly. In the past I have had success using a login shell with su like su - lightning (notice the dash), but even that doesn’t seem to hook in with the logind/pam stuff and env vars like $XDG_RUNTIME_DIR are not set. The replacement for su is machinectl shell --uid=lightning.

For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the container process. -i -t is often written -it as you’ll see in later examples. Specifying -t is forbidden when the client is receiving its standard input from a pipe, as in:

# attaching a volume and running a command expecting output
podman run -it -v $HOME/.bos:/home/node/.bos docker.io/alexbosworth/balanceofsatoshis --version

If a user logs out, all processes of that user are killed. This might not be ideal if you have long running processes (like a web server) that you want to keep running. Systemd’s logind has a “linger” setting to allow this, but to be honest, I am not quite sure of all the side effects yet.

loginctl enable-linger lightning

build

  • image name often follows conventions based on the repository it’s stored in
    • for GCR this looks like: $REPO/$PROJECT/$NAME:$TAG
podman build -t clients:latest -f Containerfile .

build with tag, file, and context

processes

registries

Edit /etc/containers/registries.conf to have podman look at docker.io automatically.

unqualified-search-registries = ["docker.io"]

publish

ghcr.io/nyonson/raiju:latest

ghcr is GitHub’s

containerfile (dockerfile)

The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.

work dir

The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.

COPY and ADD

COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src>",... "<dest>"]
ADD [--chown=<user>:<group>] <src>... <dest>
ADD [--chown=<user>:<group>] ["<src>",... "<dest>"]

Multiple resources may be specified but the paths of files and directories will be interpreted as relative to the source of the context of the build.

copy from other images

Optionally COPY accepts a flag --from= that can be used to set the source location to a previous build stage (created with FROM .. AS ) that will be used instead of a build context sent by the user. In case a build stage with a specified name can’t be found an image with the same name is attempted to be used instead.

You can have as many stages (e.g. FROM … AS …) as you want. The last one is the one defining the image which will be the template for the docker container.

When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile. You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID. The Docker client pulls the image if necessary and copies the artifact from there.

copy or add?
  • According to the Dockerfile best practices guide, we should always prefer COPY over ADD unless we specifically need one of the two additional features of ADD (fetching from a URL or auto-extracting a local tarball)

Entrypoint and CMD

  • CMD instruction allows you to set a default command, which will be executed only when you run container without specifying a command.
  • ENTRYPOINT allows you to configure a container that will run as an executable.
  • The difference is ENTRYPOINT command and parameters are not ignored when Docker container runs with command line parameters
  • use of ENTRYPOINT sends a strong message that this container is only intended to run this one command
  • Combining ENTRYPOINT and CMD allows you to specify the default executable for your image while also providing default arguments to that executable which may be overridden by the user
  • exec form ENTRYPOINT ["executable", "param1", "param2"]
    • preferred, less surprises
    • can be “extended” with CMD (with the CMD part overwriteable)
  • shell form ENTRYPOINT command param1 param2
    • more surprises, like it always runs no matter the input

EXPOSE

The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host.

The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.

RUN

  • each RUN line adds a layer to the image

dockerignore

  • file used to describe items which should not be included in ADD or COPY

golang container builds

  • Want to avoid re-downloading all dependencies every build
    • make use of layer-caching
  • Multi-stage helps with final image size, but not performance. That seems buildkit related.
  • docs

create, start, and run

podman create --name container-1 ubuntu

create and name a container, this is also when you can supply CMD

  • see container size with ps -a --size
  • start a container (needs to be created first)
  • run is the combo, creates a new container and then starts it

networking

Another container aspect that is extra complicated because I haven’t gone full containers.

  1. Use the host network
--network=host
  • this does not follow the principle of least privilege, the container network namespace is shared with the host
  2. Expose host as 10.0.2.2 in the container
  • unclear how this magic works yet and/or if it’s better than option (1)

Using Compose as an abstraction layer, containers can be created on the same network and talk to each other. DNS requires a plugin. Install podman-dnsname from the Arch repo.

Ports can be EXPOSED or PUBLISHED. Expose means the port can be reached by other container services. Published means the port is mapped to a host port.
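
For example, publishing maps a container port to a host port (the nginx image is just an illustration):

$ podman run -d -p 8080:80 docker.io/library/nginx

host port 8080 forwards to container port 80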

compose

The yaml spec can be backed by podman instead of docker daemon.

  • can setup a podman.socket and use unmodified docker-compose that talks to that socket but in this case you lose the process-model (ex. docker-compose build will send a possibly large context tarball to the daemon)
  • which to use?
    • the podman-compose script parses compose YAML and runs podman commands, but it doesn’t handle everything the same as docker-compose (like it doesn’t replace containers by default on up)
    • podman-compose is also a one-man project that appears to be losing ground
    • I like the idea of it instead of relying on a client-server model, but it’s not robust at the moment
$ systemctl --user enable --now podman.socket

creates a socket for the user

$ ls -la /run/user/1000/podman/podman.sock
srw-rw---- 1 njohnson njohnson 0 Aug  9 05:34 /run/user/1000/podman/podman.sock=

double checking

$ curl --unix-socket /run/user/1000/podman/podman.sock http://localhost/images/json

curl to ping the unix socket

The default docker socket is /var/run/docker.sock and it usually is owned by root, but in the docker group. Fun fact, /var/run is symlinked to /run on Arch.

podman system service is how to run podman in a daemon API-esque mode.

The podman-docker package provides a very light docker shim over the podman exec. The PKGDEST led me to the Makefile of podman which contains the logic to override docker. /usr/lib/tmpfiles.d/podman-docker.conf contains the symlink definition. tmpfiles.d is a systemd service to create and maybe maintain volatile files.

PODMAN-DOCKER is necessary for docker-compose exec

down

Should use the official down command (over ctrl-c) to ensure everything is cleaned up, else containers could be reused.

> docker-compose down
Stopping publisher ... done
Stopping bitcoind  ... done
Stopping consumer  ... done
Removing publisher ... done
Removing bitcoind  ... done
Removing consumer  ... done
Removing network lightning_lnnet

Volumes

Define in image (Dockerfile) or outside (docker-compose)?

  • Lots of pain points defining it in the image
  • (when defined in image) unless you explicitly tell docker to remove volumes when you remove the container, these volumes remain, unlikely to ever be used again
  • outside of an image has a lot of benefits: Not only can you define your volume, but you can give it a name, select a volume driver, or map a directory from the host
  • The simplest way of making Docker data persistent is bind mounts, which literally bind a location on the host disk to a location on the container’s disk. These are simple to create and use, but are a little janky as you need to set up the directories and manage them yourself. Volumes are like virtual hard drives managed by Docker. Docker handles storing them on disk (usually in /var/lib/docker/volumes/), and gives them an easily memorable single name rather than a directory path. It’s easy to create and remove them using the Docker CLI.
  • Volumes are helpful for saving data across restarts of your Docker containers.
  • Bind mounts will mount a file or directory on to your container from your host machine, which you can then reference via its absolute path

Networks

  • expose vs ports
    • ports: Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen)
    • expose: Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
  • In recent versions of Dockerfile, EXPOSE doesn’t have any operational impact anymore, it is just informative
  • check if ports are bound on the local system with ss -tulpn

ENV and ARG

The ENV instruction sets the environment variable to the value

Or using ARG, which is not persisted in the final image

  • environment args can be passed from docker-compose to Dockerfile, but might be best to think of them as runtime vs. buildtime and not mix them
  • Setting ARG and ENV values leaves traces in the Docker image. Don’t use them for secrets

When building a Docker image from the commandline, you can set ARG values using --build-arg

When you try to set a variable which is not an ARG mentioned in the Dockerfile, Docker will complain.

When building an image, the only thing you can provide are ARG values, as described above. You can’t provide values for ENV variables directly. However, both ARG and ENV can work together. You can use ARG to set the default values of ENV vars.

  • Use ARG for the simplicity unless you need to change the variable at runtime
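
A sketch of the ARG-feeds-ENV pattern (the variable name is made up):

# Dockerfile snippet: ARG is a build-time value, ENV persists it into the image
ARG APP_VERSION=1.0.0
ENV APP_VERSION=$APP_VERSION

$ podman build --build-arg APP_VERSION=2.0.0 .

override the default at build time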

dependencies

troubleshooting

one-off exec

Sometimes you want to debug something on a running container.

podman exec c10c378bb306 cat /etc/hosts

one-off

podman exec -it fdea48095fe1 /bin/bash

shell process

Copy file from container to host

podman cp <containerId>:/file/path/within/container /host/path/target

new kernel

If getting weird errors, it might be because of a newly installed kernel.

ERRO[0000] 'overlay' is not supported over extfs at "/var/lib/containers/storage/overlay"

duplicate mount points

  • volumes used by other containers on the system could be causing issues
  • can see all errors in systemd logs instead of last reported on CLI
podman system prune

WARNING! This will remove:
	- all stopped containers
	- all networks not used by at least one container
	- all dangling images
	- all dangling build cache
podman volume prune

Encryption

X.509

certificate authority

  • using smallstep tooling
  • The root cert (public) is shared, but the key should be very secret. Maybe even tossed after making an intermediate pair.
  • Bumped the max TLS cert lifetime from 24h to a year (8760h) with maxTLSCertDuration setting
  • step-ca is the CA daemon
create root and intermediate CA certificates and keys
  • CA config (password and intermediate are stored under /home/authority/.step)
  • Root cert is under /etc/step-ca/root_ca.crt
step-cli ca init --name="Gemini" \
    --dns="ca.yonson.dev" --address=":443" \
    --provisioner="nick@yonson.dev"
sudo step-cli ca bootstrap --ca-url gemini2.lan:8444 --fingerprint ff13c382c168821f7e13fb362678974c04d5547dc3f106aad4c99615b8b4f877

copies cert and settings for step-cli

sudo trust anchor --store /root/.step/certs/root_ca.crt

Setup /etc/ca-certificates/extracted/cadir/Gemini_Root_CA.pem

create service certificate and private key (nginx example)
# step-cli ca certificate "*.yonson.dev" /etc/nginx/server.crt /etc/nginx/server.key
  • the CA url needs to be gemini.lan since that is connected to the root/intermediate somehow…
  • * allows all subdomains to use same server cert
# step-cli ca renew --ca-url=gemini2.lan:8444 --root=/home/authority/.step/certs/root_ca.crt /etc/nginx/server.crt /etc/nginx/server.key

step ca renew command renews the given certificate (with a request to the certificate authority) and writes the new certificate to disk

server {
    listen 444 ssl;
    server_name  labs.yonson.dev;
    ssl_certificate     server.crt;
    ssl_certificate_key server.key;

    location /public.key { alias /srv/public.key; }
    location /private.key { alias /srv/private.key; }
}

nginx configs for simple https server to serve over LAN

mTLS clients

  • cert and key together make pkcs12 format (p12)
  • nss is the standard tool suite in linux to try and make OS handle certs
  • client cert chain
    • PEM is a multiple-cert format
    • In order to provide multiple certificates for the client chain, the “PEM” format is required as the “DER” format can only provide a single certificate (or key).
    • step crypto key format to switch formats, but PEM format is default for step
    • maybe the --bundle flag/command can be used when full chain is needed
  1. Get the root cert if you don’t have it already, using step-cli

  2. generate cert/key on a host where the CA is reachable

$ step-cli ca certificate --ca-url="gemini2.lan:8444" --root="/etc/ca-certificates/extracted/cadir/Gemini_Root_CA.pem" --not-after="8760h" "mercury4" client.crt client.key

should be good for a year

  3. package in p12
$ step-cli certificate p12 mercury4.p12 client.crt client.key

will prompt for password, should set that

nss

Load into nss for OS

certutil -d sql:$HOME/.pki/nssdb -L

list certs

$ pk12util -d sql:$HOME/.pki/nssdb -i mercury4.p12

using standard db nssdb

  • Chromium and Evolution use the “shared” database at -d “sql:$HOME/.pki/nssdb”. For Firefox, Thunderbird, and SeaMonkey, specify the browser’s own profile directory (e.g. -d ~/.mozilla/firefox/ov6jazas.default).
  • Loaded the p12 through firefox UI, nss works for chromium though
  • chromium docs
git
  • client http.sslCert
    • looks like it prefers pem format
    • doc
    • need a user account over https? a system user account…
      • having to store user credentials client side seems worse than forwarding port 22
  • server
    • ssh setup is really easy, just need the server exposed and use keys
  • how to pass request from nginx to system to push/pull?
    • can use CGI built in to nginx or forward requests to git http-backend…appears most use CGI
    • Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program, typically to process user requests
  • cgit
    • web interface, but maybe allows https configuration?
    • arch docs
iOS
  • certs at Settings > General > VPN & Device Management
  • add the root cert, then have to “activate” in General settings
  • get the p12 to the phone and “install” the profile
  • as of 2023-12, needed to use the “legacy” flag when creating the p12 since iOS doesn’t support newer encryption methods
Android
  • add trusted credential to OS
  • “VPN & app user certificate” for client

Not sure why no prompt when visiting labs.yonson.dev/ums

$ openssl s_client -crlf -connect labs.yonson.dev:443
...

Acceptable client certificate CA names
O = Gemini, CN = Gemini Root CA
$ step-cli certificate inspect client.crt
...

Issuer: O=Gemini,CN=Gemini Intermediate CA

Looks like the intermediate is maybe the issue?

cat sub.crt rootca.crt > chain.crt
openssl pkcs12 -export -in client01.crt -inkey client01.key -certfile chain.crt -out client01-combined.p12
  • that worked! So I think solutions are:
    • combine root and intermediate crts when packaging client cert (smallstep tooling doesn’t do this cause it shouldn’t have to?)
    • recreate the intermediate crt to match root name?
    • have nginx point to the intermediate instead of the root (seems bad)

PGP

gpg --armor --export you@example.com > mykey.asc

export public key

gpg --export-secret-key -a nickj21@gmail.com | qrencode -o key.png

qr code

Helix

Everything is about selections. The cursor by default is selecting a single character. There is always one character at the end of the line to select: the newline.

Space-? to search for a command with fuzzy matching.

  • i – Insert at the beginning of the selection; if just the cursor is selected, then before the cursor.
  • a – Insert at the end of the selection; if just the cursor is selected, inserted characters are added to the selection.

Motions with selections:

  • w – Move forward to before the beginning of the next word (grabs whitespace in-between, useful when removing a whole word).
  • e – Move forward to the end of the current word.
  • b – Move backward to beginning of current word.

The w + b combo is nice to grab a word when sitting in the middle of it. w and e have a very subtle difference: w will grab the whitespace after. Capitalized versions of the motion commands include dashes and such.

The change command, c, deletes the selection and enters Insert mode, like d then i.

Select mode is kinda like Visual mode from Vim, but more is centered on it in Helix. Enter it the same way with v. Can exit with v too. Using deletion d in Select mode returns you to Normal mode too. Use motion commands in Select mode to grab more things.

Use x and X to select whole lines. Can repeat x to select next line.

Use s select command to match in selection. Use Alt-s to split the selection on newlines (new cursors), so each line is its own cursor with a selection.

Use & to align selection, need to get the hang of that one.

f and t can select up to a character (find and till), t doesn’t include the character. F and T go backwards.

Can replace all characters in the selection with a single one with r.

J to join lines in selection, remove newlines.

Collapse a selection with ; which collapses to single character under cursor. Can use Alt+; to move cursor to front of selection first. Described as “flipping” selection.

Undo with u and Redo with U.

Yank copy is similar to Vim with y. p pastes after the selection, P before. Deleted or Changed text is also yanked. Alt-d or Alt-c to avoid that implicit yank. Replace selection with yank with R.

Search forward in file with /, n for next, N for previous. ? searches backwards (a.k.a. Shift-/).

Cursors are powerful. Can have multiple of them with C which creates cursor on next “suitable” (probably useful) line. Alt-C to create up instead. Collapse all cursors to one with ,.

Repeat last insert command with .. Alt-. to repeat last find/till selections.

Indent with > and <.

registers

" lists registers.

rulers

Kinda wide by default, a whole column of background, so it doesn’t hide characters. Two of them show up in the git COMMIT_EDITMSG by default; kinda looks like LSP settings.

Terminal Emulator

zsh

  • .zshrc for interactive
  • .zshenv for everything => personally use for local settings
  • don’t bother with .zlogin (or .zprofile) which are usually used to start window managers

colors

// greyscale, dark to light for dark theme
base00 - Default Background
base01 - Lighter Background (Used for status bars, line number and folding marks)
base02 - Selection Background
base03 - Comments, Invisibles, Line Highlighting
base04 - Dark Foreground (Used for status bars)
base05 - Default Foreground, Caret, Delimiters, Operators
base06 - Light Foreground (Not often used)
base07 - Light Background (Not often used)

// colors
base08 - Variables, XML Tags, Markup Link Text, Markup Lists, Diff Deleted
base09 - Integers, Boolean, Constants, XML Attributes, Markup Link Url
base0A - Classes, Markup Bold, Search Text Background
base0B - Strings, Inherited Class, Markup Code, Diff Inserted
base0C - Support, Regular Expressions, Escape Characters, Markup Quotes
base0D - Functions, Methods, Attribute IDs, Headings
base0E - Keywords, Storage, Selector, Markup Italic, Diff Changed
base0F - Deprecated, Opening/Closing Embedded Language Tags, e.g. <?php ?>

base16 conventions

[colors]
background={{base00-hex}}
foreground={{base05-hex}}

# normal
regular0={{base00-hex}}
regular1={{base08-hex}}
regular2={{base0B-hex}}
regular3={{base0A-hex}}
regular4={{base0D-hex}}
regular5={{base0E-hex}}
regular6={{base0C-hex}}
regular7={{base05-hex}}

# bright
bright0={{base03-hex}}
bright1={{base09-hex}}
bright2={{base01-hex}}
bright3={{base02-hex}}
bright4={{base04-hex}}
bright5={{base06-hex}}
bright6={{base0F-hex}}
bright7={{base07-hex}}

# misc
selection-background={{base05-hex}}
selection-foreground={{base00-hex}}
urls={{base04-hex}}
jump-labels={{base00-hex}} {{base0A-hex}}
scrollback-indicator={{base00-hex}} {{base04-hex}}

foot to base16 mapping

Black        0;30     Dark Gray     1;30
Red          0;31     Light Red     1;31
Green        0;32     Light Green   1;32
Brown/Orange 0;33     Yellow        1;33
Blue         0;34     Light Blue    1;34
Purple       0;35     Light Purple  1;35
Cyan         0;36     Light Cyan    1;36
Light Gray   0;37     White         1;37

shell colors

Sway

  • start script is helpful to setup env, call from display manager
  • wev is a nifty program to see key presses for function key mapping (doesn’t capture XF86 keys)

screenshare

  • Wayland defines and exposes window information, PipeWire streams audio and video bits. These gotta play nice together through the xdg-desktop-portal spec.
  • Wayland is more secure than Xorg and doesn’t give info away as freely, so waiting on a spec to provide that so PipeWire can stream a section/window and not full screen.

idle and lock

  • Protocol proposal for idle-notify
  • The swayidle program uses the KDE protocol right now.
  • When should swaylock be daemonized? when wait is used?
  • To make sure swayidle waits for swaylock to lock the screen before it releases the inhibition lock, the -w option is used in swayidle, and -f in swaylock.

screenshots

  • bemenu is too heavy, just use fzf.

outputs

output HDMI1 scale 2
output HDMI1 pos 0 0 res 3200x1800
output eDP1 pos 1600 0 res 1920x1080

If scaling is active, it has to be considered when defining relative position

  • Note that the x-pos of eDP1 is 1600 = 3200/2.
  • Scale “up” (e.g. 2) to make things larger, lower res.
set $output eDP-1
bindswitch --locked --reload lid:on output $output disable
bindswitch --locked --reload lid:off output $output enable

Disable output on lid switch.

notifications

mako

  • Need to exec this for chrome to hook into it
  • Needed libnotify for slack to work

Shell

  • /etc/profile.d/ convention for packages to source things into the shell

functions

  • passed parameters are $1, $2, $3 … $n, corresponding to the position of the parameter after the function name
#!/bin/bash

greeting () {
  echo "Hello $1"
}

greeting "Joe"

outputs “Hello Joe”

streams

  • Both stderr and stdout print to console by default.
# Redirect stdout, because it's plain `>`
$ ./command file1 file2 file3 > log-file
stderr file2
# Redirect stderr, because it's `2>`
$ ./command file1 file2 file3 2> log-file
stdout file1
stdout file3

# Redirect both
$ ./command file1 file2 file3 > log-file 2>&1
$ cat log-file
stderr file2
stdout file1
stdout file3

Redirect both stdout and stderr to the file: redirect stdout to the file first, then point stderr at stdout.

so you want to find the absolute path of the script?
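
A common sketch for this, assuming bash:

#!/bin/bash
# directory containing this script, resolved to an absolute path
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" > /dev/null 2>&1 && pwd)"
echo "$SCRIPT_DIR"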

variables

  • set a default with ${MIGRATION_NAME:-you_forgot_to_name_me_shame_on_you}
  • for positional VARIABLE="${1:-DEFAULTVALUE}"
    • Quoting prevents globbing and word splitting.

conditionals

if TEST-COMMAND1
then
  STATEMENTS1
elif TEST-COMMAND2
then
  STATEMENTS2
else
  STATEMENTS3
fi
# Kill previous process if one is running
if [ -f $PID_FILE ]; then
    kill -0 $(cat $PID_FILE) 2> /dev/null && kill $(cat $PID_FILE)
fi

check file

[ vs [[

sandboxing

  • set -e
    • kicks out on first error
    • “only exits on an ‘uncaught’ error”: funkiness with && (see the sketch below)
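
A tiny sketch of that && funkiness:

#!/bin/bash
set -e

false && echo "skipped"   # failure inside a && list does not trip set -e
echo "still running"

false                     # an uncaught failure, the script exits here
echo "never printed"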

process management

eval

  • The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0.
  • Without an explicit eval, the shell tries to execute the result of a command substitution, not to evaluate it (see the sketch below).
    • export is better than eval since it’s less powerful
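
The classic case is ssh-agent, which prints shell code (variable assignments) on stdout that has to be evaluated rather than executed:

# ssh-agent prints something like "SSH_AUTH_SOCK=...; export SSH_AUTH_SOCK;"
eval "$(ssh-agent -s)"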

Git

  • What is up with origin main vs. origin/main? Two primitives here: branches and remotes. In this case, origin is a remote and everything else is a branch. The branch origin/main is kinda special though, cause it is a “remote tracking branch”.

remotes

  • Default remote is usually called origin.
  • git clone command implicitly adds the origin remote.
  • ssh (with git user) or https? ssh easier with key mgmt.
$ git remote -v                                     
origin	git@git.sr.ht:~yonson/raiju (fetch)
origin	git@git.sr.ht:~yonson/raiju (push)

list remotes

git remote rename origin destination

Rename remote.

bare repository

$ git init --bare .

1. Create repository

$ git remote set-url origin git@gemini2:/srv/git/dts.git
$ git push

2. Update client to new origin

rebase

git rebase -i origin/main

Interactive.

$ git rebase -Xtheirs branch-b # <- ours: branch-b, theirs: branch-a
$ git merge -Xtheirs branch-b  # <- ours: branch-a, theirs: branch-b

Assuming branch-a is the current branch.

merge conflicts

tags

A lightweight tag is very much like a branch that doesn’t change — it’s just a pointer to a specific commit.

git tag -a v1.4 -m "my version 1.4"

Annotated tag with message.

git tag -a v1.2 9fceb02

Annotated tag at commit.

  • By default, the git push command doesn’t transfer tags to remote servers.

clean workspace

git checkout origin/master -- path/to/file

Restore the file from the master branch.

  • git branch – list local branch references
  • git branch -r – list remote branch references
  • git prune – only works on objects (not references) so it has no effect on branches.
  • git remote prune origin – removes remote branch references which no longer exist on the remote, but only tracking branches.
  • git branch --merged=main shows the branches merged into main. Don’t have to specify main if on main.
  • git branch -d $BRANCH – -d tries to detect if the branch has been merged, -D forces

With a -d or -D option, $BRANCH will be deleted. You may specify more than one branch for deletion. If the branch currently has a reflog then the reflog will also be deleted.

squash and merge

using git branch -D at the end of the command worked better for us, as we tend to squash commits on merge, which gets us The branch x is not fully merged errors when running with -d.

  • git branch --merged tries to detect local branches that have been merged in the remote repository, but this doesn’t work for repositories using squash-on-merge because the branch is modified before merging

hooks

  • By default the hooks directory is $GIT_DIR/hooks, but that can be changed via the core.hooksPath configuration variable

email workflow

  • tutorial

  • git has email functions built in; it needs to be configured with an email provider though

  • Warning! Some people think that they can get away with sending patches through some means other than git send-email, but you can’t. Your patches will be broken and a nuisance to the maintainers whose inbox they land in. Follow the golden rule: just use git send-email.