Published: November 13, 2025 · Author: River Instance Team · Reading Time: 8 minutes


The Mission

Today we upgraded our Mastodon instance (river.group.lt) from version 4.5.0 to 4.5.1. While this might sound like a routine patch update, we used it as an opportunity to make our infrastructure more secure and our deployment process more automated. Here's what we learned along the way.


Why Upgrade?

When glitch-soc (our preferred Mastodon variant) released version 4.5.1, we reviewed the changelog and found 10 bug fixes, including:

  • Better keyboard navigation in the Alt text modal
  • Fixed issues with quote posts appearing as “unquotable”
  • Improved filter application in detailed views
  • Build fixes for ARM64 architecture

More importantly: no database migrations, no breaking changes, and no new features that could introduce instability. This is what we call a “safe upgrade” – the perfect candidate for improving our processes while updating.


The Starting Point

Our Mastodon setup isn't quite standard. We run:

  • glitch-soc variant (Mastodon fork with extra features)
  • Custom Docker images with Sentry monitoring baked in
  • Kubernetes deployment via Helm charts
  • AMD64 architecture (important for cross-platform builds)

This means we can't just pull the latest official image – we need to rebuild our custom images with each new version.
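
For context, rebuilding one of the images looks roughly like this (a minimal sketch run from docker-assets/; the Dockerfile name is from our repo, but the image tag naming is illustrative):

# Build the web image for our target architecture, then push it
docker build --platform linux/amd64 \
  -f Dockerfile.mastodon-sentry \
  -t registry.example.com/library/mastodon:v4.5.1 .
docker push registry.example.com/library/mastodon:v4.5.1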


The Problem We Solved

Before this upgrade, our build process looked like this:

# Find Harbor registry credentials (where?)
# Copy-paste username and password
docker login registry.example.com
# Enter credentials manually
# Update version in 4 different files
# Hope they all match
./build.sh
# Wait for builds to complete
# Manually verify everything worked

The issues:

  • Credentials stored in shell history (security risk)
  • Manual steps prone to typos
  • No automation = easy to forget steps
  • Credentials sitting in ~/.docker/config.json unencrypted

We knew we could do better.


The Solution: Infisical Integration

Infisical is a secrets management platform – think of it as a secure vault for credentials that your applications can access automatically. Instead of storing Harbor registry credentials on our laptop, we:

  1. Stored credentials in Infisical (one-time setup)
  2. Updated our build script to fetch credentials automatically
  3. Automated the Docker login process

Now our build script looks like this:

#!/bin/bash
set -e

VERSION="v4.5.1"
REGISTRY="registry.example.com/library"
PROJECT_ID="<your-infisical-project-id>"

echo "🔑 Logging in to Harbor registry..."
# Fetch credentials from Infisical
HARBOR_USERNAME=$(infisical secrets get \
  --domain https://secrets.example.com/api \
  --projectId ${PROJECT_ID} \
  --env prod HARBOR_USERNAME \
  --silent -o json | jq -r '.[0].secretValue')

HARBOR_PASSWORD=$(infisical secrets get \
  --domain https://secrets.example.com/api \
  --projectId ${PROJECT_ID} \
  --env prod HARBOR_PASSWORD \
  --silent -o json | jq -r '.[0].secretValue')

# Automatic login
echo "${HARBOR_PASSWORD}" | docker login ${REGISTRY} \
  --username "${HARBOR_USERNAME}" --password-stdin

# Build and push images...

Note: Code examples use placeholder values. Replace registry.example.com, secrets.example.com, and <your-infisical-project-id> with your actual infrastructure endpoints.

The benefits:

  • ✅ No credentials in shell history
  • ✅ No manual copy-pasting
  • ✅ Audit trail of when credentials were accessed
  • ✅ Easy credential rotation
  • ✅ Works the same on any machine with Infisical access


The Upgrade Process

With our improved automation in place, the actual upgrade was straightforward:

Step 1: Research

We used AI assistance to research the glitch-soc v4.5.1 release:

  • Confirmed it was a patch release (low risk)
  • Verified no database migrations required
  • Reviewed all 10 bug fixes
  • Checked for breaking changes (none found)

Lesson: Always research before executing. 15 minutes of reading can prevent hours of rollback.

Step 2: Update Version References

We needed to update the version in exactly 4 places:

  1. docker-assets/build.sh – Build script version variable
  2. docker-assets/Dockerfile.mastodon-sentry – Base image version
  3. docker-assets/Dockerfile.streaming-sentry – Streaming image version
  4. values-river.yaml – Helm values for both image tags

Lesson: Keep a checklist of version locations. It's easy to miss one.

Step 3: Build Custom Images

cd docker-assets
./build.sh

The script now:

  • Fetches credentials from Infisical ✓
  • Logs into Harbor registry ✓
  • Builds both images with --platform linux/amd64 ✓
  • Pushes to registry ✓
  • Provides clear success/failure messages ✓

Build time: ~5 seconds (thanks to Docker layer caching!)

Step 4: Deploy to Kubernetes

cd ..
helm upgrade river-mastodon . -n mastodon -f values-river.yaml

Helm performed a rolling update:

  • Old pods kept running while new ones started
  • New pods pulled v4.5.1 images
  • Old pods terminated once new ones were healthy
  • Zero downtime for our users
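
To watch the rollout as it happens, something like this works (assuming the deployment names from our chart):

# Blocks until the rollout completes or fails
kubectl rollout status deployment/river-mastodon-web -n mastodon
# Repeat for the streaming and sidekiq deployments as needed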

Step 5: Verify

kubectl exec -n mastodon deployment/river-mastodon-web -- tootctl version
# Output: 4.5.1+glitch

All three pod types (web, streaming, sidekiq) now running the new version. Success! 🎉


What We Learned

1. Automation Compounds Over Time

The Infisical integration took about 60 minutes to implement. The actual version bump (research, edits, build, and deploy) took 30 minutes. That might seem like overkill for a “simple” upgrade.

But here's the math:

  • Manual process: 5 minutes per build to manage credentials
  • Automated process: 0 minutes
  • Builds per year: ~20 upgrades and tests
  • Time saved annually: ~100 minutes
  • Payback period: 12 builds (roughly 7 months at that rate)

Plus, we eliminated a security risk. The real value isn't just time – it's confidence and safety.

2. Separate Upstream from Custom

We keep the upstream Helm chart (Chart.yaml) completely untouched. Our customizations live in:

  • Custom Dockerfiles (add Sentry)
  • Values overrides (values-river.yaml)
  • Build scripts

Why this matters: We can pull upstream chart updates without conflicts. Our changes are additive, not modifications.

3. Test Incrementally

We didn't just run the full build and hope it worked. We tested:

  1. ✓ Credential retrieval from Infisical
  2. ✓ JSON parsing with jq
  3. ✓ Docker login with retrieved credentials
  4. ✓ Image builds
  5. ✓ Image pushes to registry
  6. ✓ Kubernetes deployment
  7. ✓ Running version verification

Each step validated before moving forward. When something broke (initial credential permissions), we caught it immediately.

4. Documentation Is for Future You

We wrote a comprehensive retrospective covering:

  • What went well
  • What we learned
  • What we'd do differently next time
  • Troubleshooting guides for common issues

In 6 months when we upgrade to v4.6.0, we'll thank ourselves for this documentation.

5. Version Numbers Tell a Story

Understanding semantic versioning helps assess risk:

  • v4.5.0 → v4.5.1 = Patch release (bug fixes only, low risk)
  • v4.5.x → v4.6.0 = Minor release (new features, moderate risk)
  • v4.x.x → v5.0.0 = Major release (breaking changes, high risk)

This informed our decision to proceed quickly with minimal testing.


What We'd Do Differently Next Time

Despite the success, we identified improvements:

High Priority

1. Validate credentials before building

Currently, we discover authentication failures during the image push (after building). Better:

# Test login BEFORE building, using the credentials already fetched from Infisical
if ! echo "${HARBOR_PASSWORD}" | docker login ${REGISTRY} \
    --username "${HARBOR_USERNAME}" --password-stdin; then
  echo "❌ Auth failed"
  exit 1
fi

2. Initialize Infisical project config

Running infisical init in the project directory creates a .infisical.json file, eliminating the need for --projectId flags in every command.
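
With that in place, the credential fetch from our build script loses a flag but otherwise stays the same:

infisical init  # one-time setup, creates .infisical.json in the repo

HARBOR_USERNAME=$(infisical secrets get \
  --domain https://secrets.example.com/api \
  --env prod HARBOR_USERNAME \
  --silent -o json | jq -r '.[0].secretValue')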

3. Add version consistency checks

A simple script to verify all 4 files have matching versions before building would catch human errors.
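
A minimal sketch of such a check, assuming the version string appears verbatim in all four files:

#!/bin/bash
# Fail fast if any file references a different version
EXPECTED="v4.5.1"
FILES=(
  "docker-assets/build.sh"
  "docker-assets/Dockerfile.mastodon-sentry"
  "docker-assets/Dockerfile.streaming-sentry"
  "values-river.yaml"
)
for f in "${FILES[@]}"; do
  if ! grep -q "${EXPECTED}" "${f}"; then
    echo "❌ ${f} does not reference ${EXPECTED}"
    exit 1
  fi
done
echo "✅ All files reference ${EXPECTED}"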

Medium Priority

4. Automated deployment verification

Replace manual kubectl checks with a script that:

  • Waits for pods to be ready
  • Extracts running version
  • Compares to expected version
  • Reports success/failure
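
A sketch of what that could look like, reusing the commands from this upgrade:

#!/bin/bash
# Wait for the rollout, then compare the running version to what we expect
EXPECTED="4.5.1"
kubectl rollout status deployment/river-mastodon-web -n mastodon --timeout=300s

RUNNING=$(kubectl exec -n mastodon deployment/river-mastodon-web -- tootctl version)
if [[ "${RUNNING}" == "${EXPECTED}"* ]]; then
  echo "✅ Running ${RUNNING}"
else
  echo "❌ Expected ${EXPECTED}, got ${RUNNING}"
  exit 1
fi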

5. Dry-run mode for build script

Test the script logic without actually building or pushing images. Useful for testing changes to the script itself.
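
One common pattern is a wrapper function that only echoes commands when a DRY_RUN flag is set (a generic bash sketch, not our actual script):

#!/bin/bash
DRY_RUN="${DRY_RUN:-false}"

run() {
  if [ "${DRY_RUN}" = "true" ]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

run docker build --platform linux/amd64 -t "${REGISTRY}/mastodon:${VERSION}" .
run docker push "${REGISTRY}/mastodon:${VERSION}"

Invoking the script as DRY_RUN=true ./build.sh then prints every command without touching Docker or the registry.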


The Impact

Before this session:

  • Manual credential management
  • 5+ minutes per build for login
  • Credentials in shell history (security risk)
  • No audit trail

After this session:

  • Automated credential retrieval
  • 0 minutes per build for login
  • Credentials never exposed (security improvement)
  • Full audit trail in Infisical
  • Repeatable process documented

Plus: We're running Mastodon v4.5.1 with 10 bug fixes, making our instance more stable for our users.


Lessons for Other Mastodon Admins

If you run a Mastodon instance, here's what we learned that might help you:

For Small Instances

Even if you're running standard Mastodon without customizations:

  1. Document your upgrade process – Your future self will thank you
  2. Test in staging first – If you don't have staging, test with dry-run/simulation
  3. Always check release notes – 5 minutes of reading prevents hours of debugging
  4. Use semantic versioning to assess risk – Patch releases are usually safe

For Custom Deployments

If you run custom images like we do:

  1. Separate upstream from custom – Keep modifications isolated and additive
  2. Automate credential management – Shell history is not secure storage
  3. Use Docker layer caching – Speeds up builds dramatically
  4. Platform flags matter – use --platform linux/amd64 when deploying to a different architecture
  5. Verify the running version – Don't assume deployment worked, check it

For Kubernetes Deployments

If you deploy to Kubernetes:

  1. Rolling updates are your friend – Zero downtime is achievable
  2. Helm revisions enable easy rollback – helm rollback is simple and fast (see the example after this list)
  3. Verify pod image versions – Check what's actually running, not just deployed
  4. Monitor during rollout – Watch pod status, don't just fire and forget
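
For example, if this upgrade (revision 166) had misbehaved, returning to the previous release would be a single command (revision number illustrative):

helm rollback river-mastodon 165 -n mastodon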

The Numbers

Session Duration: 90 minutes total

  • Research: 15 minutes
  • Version updates: 10 minutes
  • Infisical integration: 60 minutes
  • Build & deploy: 5 minutes

Deployment Stats:

  • Downtime: 0 seconds (rolling update)
  • Pods affected: 3 (web, streaming, sidekiq)
  • Helm revision: 166
  • Rollback complexity: Low (single command)

Lines of code changed: 18 lines across 4 files
Lines of documentation written: 629 lines (retrospective)
Security improvements: 1 major (credential management)


Final Thoughts

What started as a simple patch upgrade turned into a significant infrastructure improvement. The version bump was almost trivial – the real work was automating away manual steps and eliminating security risks.

This is what good ops work looks like: using routine maintenance as an opportunity to make systems better. The 60 minutes we spent on Infisical integration will pay dividends on every future build. The documentation we wrote will help the next person (or future us) upgrade with confidence.

Mastodon v4.5.1 is running smoothly, our build process is more secure, and we learned lessons that will make the next upgrade even smoother.


Resources

For Mastodon Admins:

  • Mastodon Upgrade Documentation
  • glitch-soc Releases

For Infrastructure:

  • Infisical (Secrets Management)
  • Docker Build Best Practices
  • Helm Upgrade Documentation

Our Instance: river.group.lt

  • Live Mastodon instance
  • Running glitch-soc v4.5.1+glitch
  • Kubernetes + Helm deployment
  • Custom images with Sentry monitoring


Questions?

If you're running a Mastodon instance and have questions about:

  • Upgrading glitch-soc variants
  • Custom Docker image workflows
  • Kubernetes deployments
  • Secrets management with Infisical
  • Zero-downtime upgrades

Feel free to reach out! We're happy to share what we've learned.


Tags: #mastodon #glitch-soc #kubernetes #devops #infrastructure #security #automation


This blog post is part of our infrastructure documentation series. We believe in sharing knowledge to help others running similar systems. All technical details are from our actual upgrade session on November 13, 2025.

Comment in the Fediverse: @saint@river.group.lt