scrapbook of a sysadmin

Monitoring best practices

There are things that really irk me, and monitoring alerts that are not actionable (i.e. purely informational) are on that list, so I am writing this blog post to improve the signal.

  • Any alert you receive from a monitoring system should be actionable, and that action must not be "snooze" or anything similar.

You get an alert – you work, bitch.

Comment in the Fediverse

How to use Borgmatic to back up PostgreSQL in Kubernetes

There are many goodies on the internets, but not much good documentation to make using them frictionless. Here is a short writeup on backing up a Mastodon instance database running on Kubernetes with Borgmatic, but it could be used for any generic database and/or file path that is supported by Borg backup and Borgmatic.

For the destination/target repository I am using BorgBase, but it should work with any Borg-compatible ssh repo.

There are some issues that I might solve in the future – namely, storing sensitive information in a safer way – but for the moment I just wanted to make backups work. If you do it, please let me know and I will update my setup.

First things first, we need:

  • Kubernetes
  • Mastodon deployment in its own namespace
  • Setup SSH keys
  • Setup Healthchecks
  • Setup BorgBase
  • Install Borgmatic via Helm Chart, using our values


You should be running something already

Mastodon deployment

You should have Mastodon deployed already in its own namespace. The instructions should work for any generic deployment that uses PostgreSQL or any other supported database.

Setup SSH keys

You need to generate an ssh key and save both parts – private and public. The private part will go into our Borgmatic setup and the public one – to BorgBase.

ssh-keygen -t ed25519 -f /tmp/id_ed25519

This can be done on any computer that has the ssh-keygen utility. /tmp/id_ed25519 will contain the private part and /tmp/ – the public. You don't need the files themselves, just their contents.

Setup Healthchecks

This is optional if you chose to use BorgBase, which has the same functionality built in. Otherwise get a account – it is free for a small number of checks.

Setup BorgBase

Again – you can use whatever repository server supports Borg, but I found that BorgBase has all the features I require for a cheap price. Add the public ssh key part to the keys and make sure that the repository is tied to this key. Save the repository address.

Install Borgmatic via Helm Chart, using our values

I have used this chart – gabe565/borgmatic – it is great stuff, I just had some issues figuring out how it works. I am kind of a slow learner – I make a lot of assumptions that result in me being wrong most of the time. The trick was to find good values for values.yaml:

# This chart inherits from our common library chart. You can check the default values/options here:
image:
  # -- image repository
  repository: ghcr.io/borgmatic-collective/borgmatic
  # -- image pull policy
  pullPolicy: IfNotPresent
  # -- image tag
  tag: 1.7.14

controller:
  # -- Set the controller type. Valid options are `deployment` or `cronjob`.
  type: deployment
  cronjob:
    # -- Only used when `controller.type: cronjob`. Sets the backup CronJob time.
    schedule: 0 * * * *
    # -- Only used when `controller.type: cronjob`. Sets the CronJob backoffLimit.
    backoffLimit: 0

# -- environment variables
# @default -- See [values.yaml](./values.yaml)
env:
  # -- Borg host ID used in archive names
  # @default -- Deployment namespace
  PGPASSWORD: ---PostgreSQL db password---

persistence:
  # -- Configure persistence settings for the chart under this key.
  # @default -- See [values.yaml](./values.yaml)
  data:
    enabled: false
    retain: true
    # storageClass: ""
    # accessMode: ReadWriteOnce
    # size: 1Gi
    subPath:
      - path: borg-repository
        mountPath: /mnt/borg-repository
      - path: config
        mountPath: /root/.config/borg
      - path: cache
        mountPath: /root/.cache/borg
  # -- Configure SSH credentials for the chart under this key.
  # @default -- See [values.yaml](./values.yaml)
  ssh:
    name: borgmatic-ssh
    enabled: true
    type: configMap
    mountPath: /root/.ssh/
    readOnly: false
    defaultMode: 0600

# -- Configure Borgmatic container under this key.
# @default -- See [values.yaml](./values.yaml)
borgmatic:
  ssh:
    enabled: true
    data:
      id_ed25519: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        --- paste the private key part here ---
        -----END OPENSSH PRIVATE KEY-----
      known_hosts: |
        --- paste the output of ssh-keyscan borg-repository-address ---
  config:
    enabled: true
    data:
      # -- Crontab
      crontab.txt: |-
        0 1 * * * PATH=$PATH:/usr/bin /usr/local/bin/borgmatic --stats -v 0 2>&1
      # -- Borgmatic config
      # @default -- See [values.yaml](./values.yaml)
      config.yaml: |
        location:
            # List of source directories to backup.
            source_directories:
                - /etc/  # any directory you want, as we are concerned only with the db backup

            # Paths of local or remote repositories to backup to.
            repositories:
                - ---BORG REPOSITORY URL---

        retention:
            # Retention policy for how many backups to keep.
            keep_daily: 7
            keep_weekly: 4
            keep_monthly: 6

        consistency:
            # List of checks to run to validate your backups.
            checks:
                - name: repository
                - name: archives
                  frequency: 2 weeks

        hooks:
            # Databases to dump and include in backups.
            postgresql_databases:
                - name: mastodon_production
                  hostname: hostname-of-mastodon-postgresql-db
                  username: mastodon

            # Third-party services to notify you if backups aren't happening.
            healthchecks: --- healthcheck url ---
helm install borgmatic gabe565/borgmatic -f values.yaml -n mastodon
kubectl rollout status deployment.apps/borgmatic -n mastodon
kubectl exec -i deployment.apps/borgmatic -n mastodon -- borgmatic init --encryption repokey-blake2
kubectl exec -it deployment.apps/borgmatic -n mastodon -- borgmatic create --stats

All of these commands should complete without errors.

Comment in the Fediverse

Caddyfile for running Lemmy

How to follow Lemmy community from Mastodon

Spent a couple of hours on this – I wanted to follow a Lemmy community from my instance on Mastodon. Here is a working config for Caddy (Caddyfile), with lemmy.example.com standing in for your own address:

lemmy.example.com {
	reverse_proxy	http://lemmy_lemmy-ui_1:1234

	tls {
		# your tls options, if any
	}

	@lemmy {
		path	/api/*
		path	/pictrs/*
		path	/feeds/*
		path	/nodeinfo/*
		path	/.well-known/*
	}

	@lemmy-hdr {
		header	Accept application/*
	}

	@lemmy-post {
		method	POST
	}

	handle @lemmy {
		reverse_proxy	http://lemmy_lemmy_1:8536
	}

	handle @lemmy-hdr {
		reverse_proxy	http://lemmy_lemmy_1:8536
	}

	handle @lemmy-post {
		reverse_proxy	http://lemmy_lemmy_1:8536
	}
}
The key point here was

@lemmy-hdr {
	header	Accept application/*
}

I have taken a hint from some nginx conf for lemmy.

Comment in the Fediverse

Reclaiming space in synapse postgresql database

Follow the fat elephant

I received an alert from Grafana that my synapse directory was almost full, which was kinda strange as I had given it a 100GB partition just a couple of weeks ago.. So I put on a hat, picked up some cider and something to smoke and went on an adventure.

From the old times I knew that a postgresql database's size can be reduced using vacuumdb. Entered the container and boom – after 15 or so minutes it finished and reclaimed 100MB of space.. Hmmm... Interesting – which table eats the space? Googled and found this query:

SELECT
    relname AS "relation",
    pg_size_pretty (
        pg_total_relation_size (C.oid)
    ) AS "total_size"
FROM
    pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE
    nspname NOT IN ('pg_catalog', 'information_schema')
    AND C.relkind <> 'i'
    AND nspname !~ '^pg_toast'
ORDER BY
    pg_total_relation_size (C.oid) DESC
LIMIT 5;
      relation      | total_size
--------------------+------------
 state_groups_state | 65 GB
 event_json         | 1197 MB
 event_edges        | 619 MB
 events             | 595 MB
 event_auth         | 528 MB

Alright!!! Googled state_groups_state and found a compression tool – rust-synapse-compress-state.

git clone, crap a short docker-compose.yml and build the tool.

root@instance-20211112-2005:/opt/synapse-compress-state# cat docker-compose.yaml
version: "3.5"
services:
  compressor:
    build:
      context: rust-synapse-compress-state/
    command: synapse_auto_compressor -p postgresql://user:pass@dbhost/dbname -c 500 -n 100
    networks:
      - synapse

networks:
  synapse:
    external:
      name: synapse

let's crap some more:

root@instance-20211112-2005:/opt/synapse# cat /opt/synapse-compress-state/
cd /opt/synapse-compress-state/
docker-compose up

put it into crontab:

@daily /opt/synapse-compress-state/ > /dev/null

Later I googled more and found some people smarter than me: shrink synapse database – and that really helped, especially the reindexing.
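The reindexing bit can be scripted the same way; a minimal sketch, assuming hypothetical database/container names (synapse, synapse_db) – yours will differ:

```shell
# Write the maintenance SQL to a file; the table name comes from the size query above.
cat > /tmp/shrink-synapse.sql <<'EOF'
-- Rebuild all indexes of the database (they bloat badly on synapse).
REINDEX DATABASE synapse;
-- Return the freed space to the OS; locks the table while it runs.
VACUUM FULL state_groups_state;
EOF

# Feed it to postgres inside the container (hypothetical names, not run here):
# docker exec -i synapse_db psql -U synapse synapse < /tmp/shrink-synapse.sql
```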

Comment in the Fediverse

How to send mail using PHP's mail() function, the Docker way

Using ssmtp to send email to the relay container

Sometimes we want to run a dockerized old PHP site that we do not want to work on, or the programmer is gone and nobody cares to make the changes needed to use an email relay host such as Mailgun or Gmail or anything else. On a Linux VM or bare-metal server it is quite an easy task – you run the web server and the mail server two-in-one, and the mail server takes care of mail routing.

In a dockerized environment you usually want to run the least amount of services possible in a container, so sending mail using PHP's mail() function becomes tricky.

Let's create a docker-compose.yml file, containing all required containers:

  • caddy – web server
  • php – PHP server for the app – the trick here is to use msmtp as sendmail, to have mail sent to a remote server (our mail container)
  • mail – smtp relay server, we will use postfix

the source code is here: docker-php-mail-example

The main thing is to use ssmtp on the php container and send the mail to the mail container.
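As a sketch of the msmtp part (file contents here are illustrative assumptions, not copies from the repo): mail() shells out to whatever sendmail_path points at, so we point it at msmtp, and msmtp relays to the mail container by compose service name.

```shell
# msmtp config for the php container: relay everything to the "mail" service
# ("mail" resolves via docker-compose service discovery).
cat > /tmp/msmtprc <<'EOF'
account default
host mail
port 25
from noreply@example.com
EOF

# php.ini override: make PHP's mail() invoke msmtp instead of sendmail.
cat > /tmp/zz-mail.ini <<'EOF'
sendmail_path = "/usr/bin/msmtp -t"
EOF
```

In the real setup these land inside the php image (e.g. /etc/msmtprc and PHP's conf.d directory); see the linked repo for the actual files.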


Comment in the Fediverse

Getting rid of ** WARNING ** Mnesia is overloaded: {dump_log, time_threshold}

On high load environment, we had a message in RabbitMQ logs every second or so:

** WARNING ** Mnesia is overloaded: {dump_log, time_threshold}

The internet said it is nothing to worry about to everybody who asked how to change it, but I still wanted to get rid of the noise. After playing with the RabbitMQ installation, I found where to put the config changes. There you have to put

SERVER_ADDITIONAL_ERL_ARGS="-mnesia dump_log_write_threshold XXXXX -mnesia dc_dump_limit YY"

where XXXXX and YY are numbers that work for your environment.
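For a dockerized RabbitMQ, a sketch of the file (the rabbitmq-env.conf location and the concrete numbers are my assumptions – rabbitmq-env.conf takes the variable names without the RABBITMQ_ prefix):

```shell
# Raise the mnesia dump thresholds so the dumper keeps up with the write load;
# 50000 and 40 are placeholder numbers - tune them for your environment.
cat > /tmp/rabbitmq-env.conf <<'EOF'
SERVER_ADDITIONAL_ERL_ARGS="-mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40"
EOF
# then mount it into the container at /etc/rabbitmq/rabbitmq-env.conf and restart
```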

Here is the best explanation on what it is... just kidding.

More breadcrumbs are here, where you can find a reply and a comment from somebody who understood the assignment.

Comment in the Fediverse