When Free Isn't Forever: Navigating the Bitnami Deprecation
Date: November 21, 2025
Author: Infrastructure Team @ River.group.lt
Tags: Infrastructure, Open Source, Vendor Lock-in, Lessons Learned
TL;DR
Broadcom's acquisition of VMware (and Bitnami) resulted in the deprecation of free container images, affecting thousands of production deployments worldwide. Our Mastodon instance at river.group.lt was impacted, but we turned this crisis into an opportunity to build more resilient infrastructure. Here's what happened and what we learned.
The Wake-Up Call
On November 21st, 2025, while upgrading our Mastodon instance from v4.5.1 to v4.5.2, we discovered something concerning: several Elasticsearch pods were stuck in CrashLoopBackOff. The error was cryptic:
/bin/bash: line 1: sysctl: command not found
This wasn't a configuration issue or a bug in our deployment. This was the canary in the coal mine for a much larger industry-wide problem.
What Actually Happened
The Bitnami Story
If you've deployed anything on Kubernetes in the past few years, you've probably used Bitnami Helm charts. They were convenient, well-maintained, and free. The PostgreSQL chart, Redis chart, Elasticsearch chart—all trusted by thousands of organizations.
Then came the acquisitions:
– May 2019: VMware acquired Bitnami
– November 2023: Broadcom completed its acquisition of VMware
– August 28, 2025: Bitnami stopped publishing free Debian-based container images
– September 29, 2025: All images moved to a read-only “legacy” repository
The new pricing? $50,000 to $72,000 per year for “Bitnami Secure” subscriptions.
Our Impact
Our entire Elasticsearch cluster was running on Bitnami images:
– 4 Elasticsearch pods failing to start
– Search functionality degraded
– Running on unmaintained images with no security updates
– Init containers expecting tools that no longer existed in the slimmed-down legacy images
But we weren't alone. This affected:
– Major Kubernetes distributions
– Thousands of Helm chart deployments
– Production instances worldwide
The Detective Work
The debugging journey was educational:
- Pod events → Init container crashes
- Container logs → Missing `sysctl` command in `debian:stable-slim`
- Web research → Discovered the Bitnami deprecation
- Community investigation → Found Mastodon's response (new official chart)
- System verification → Realized our node already had correct kernel settings
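For the curious, the first steps look roughly like this on the command line. The namespace, pod, and init container names below are from our deployment and are illustrative only:

```bash
# Spot the failing pods, read the events, then read the init container's logs
kubectl get pods -n mastodon
kubectl describe pod -n mastodon mastodon-elasticsearch-master-0   # events point at the init container
kubectl logs -n mastodon mastodon-elasticsearch-master-0 -c sysctl # "sysctl: command not found"
```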
The init container was trying to set vm.max_map_count=262144 for Elasticsearch, but:
– The container image no longer included the required tools
– Our node already had the correct settings
– The init container was solving a problem that didn't exist
Classic case of inherited configuration outliving its purpose.
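If you want to verify this on your own nodes: `vm.max_map_count` is not a namespaced sysctl, so any container running on the node sees the host's value. A quick check (the deployment name here is just an example):

```bash
# Read the node-wide setting from any running pod scheduled on the same node
kubectl exec -n mastodon deploy/mastodon-web -- cat /proc/sys/vm/max_map_count
# Elasticsearch wants at least 262144; if the node already reports that,
# a privileged sysctl init container is solving a solved problem.
```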
The Fix (and the Plan)
We took a two-phase approach:
Phase 1: Immediate Stabilization
What we did right away:
1. Disabled the unnecessary init container
2. Scaled down to single-node Elasticsearch (appropriate for our size)
3. Cleared old cluster state by deleting persistent volumes
4. Rebuilt the search index from scratch
Result: All systems operational within 2 hours, search functionality restored.
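For reference, steps 3 and 4 boil down to a couple of commands. This is a sketch based on our setup; the namespace, label selector, and deployment name are ours and will differ in your cluster:

```bash
# Drop the old cluster state so the single node forms a fresh cluster
kubectl delete pvc -n mastodon -l app.kubernetes.io/name=elasticsearch

# Rebuild the Mastodon search index once Elasticsearch is healthy again
kubectl exec -n mastodon deploy/mastodon-web -- bin/tootctl search deploy
```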
Phase 2: Strategic Migration
We didn't just patch the problem—we planned a proper solution:
We created a comprehensive migration plan (MIGRATION-TO-NEW-CHART.md):
– Migrate to official Mastodon Helm chart (removes all Bitnami dependencies)
– Deploy OpenSearch instead of Elasticsearch (Apache 2.0 licensed)
– Keep our existing DragonflyDB (we were already ahead of the curve!)
– Timeline: Phased approach over next quarter
The new Mastodon chart removes bundled dependencies entirely, expecting you to provide your own:
– PostgreSQL → CloudNativePG or managed service
– Redis → DragonflyDB, Valkey, or managed service
– Elasticsearch → OpenSearch or Elastic's official operator
This is actually better architecture—no magic, full control, and proper separation of concerns.
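In practice that means the chart's values simply point at services you run (or rent) yourself. The snippet below is purely illustrative; the key names are placeholders we made up, so check the official chart's own values.yaml for the real ones:

```yaml
# Illustrative values for a "bring your own dependencies" deployment.
# Key names are hypothetical -- consult the chart's values.yaml.
postgresql:
  host: mastodon-db-rw.mastodon.svc        # e.g. a CloudNativePG read-write service
  database: mastodon_production
redis:
  host: dragonfly.mastodon.svc             # DragonflyDB speaking the Redis protocol
elasticsearch:
  host: opensearch-cluster-master.mastodon.svc
  port: 9200
```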
What We Learned
1. Vendor Lock-in Happens Gradually
We didn't consciously choose vendor lock-in. We just used convenient, well-maintained Helm charts. Before we knew it:
– PostgreSQL: Bitnami
– Redis: Bitnami
– Elasticsearch: Bitnami
One vendor decision affected our entire stack.
New rule: Diversify dependency sources. Use official images where possible.
2. “Open Source” Doesn't Mean “Free Forever”
Recent examples of this pattern:
– HashiCorp → IBM (Terraform moved to BSL license)
– Redis → Redis Labs (licensing restrictions added)
– Elasticsearch → Elastic NV (moved to SSPL)
– Bitnami → Broadcom (deprecated free tier)
The pattern: Company acquisition → Business model change → Service monetization
New rule: For critical infrastructure, always have a migration plan ready.
3. Community Signals are Early Warnings
The Mastodon community started discussing this in August 2025. The official chart team had already removed Bitnami dependencies months before our incident. We could have been proactive instead of reactive.
New rule: Subscribe to community channels for critical dependencies. Monitor GitHub issues, Reddit discussions, and release notes.
4. Version Pinning Isn't Optional
We were using `elasticsearch:8` instead of `elasticsearch:8.18.0`. When the vendor deprecated tags, we had no control over what `:8` meant anymore.
New rule: Always pin to specific versions. Use image digests for critical services.
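One way to do this (a sketch; the image name and registry here are just examples) is to resolve the tag to a digest once and commit the digest:

```bash
# Pull the tag, then read back its immutable digest
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.18.0
docker inspect --format '{{index .RepoDigests 0}}' \
  docker.elastic.co/elasticsearch/elasticsearch:8.18.0
# Reference the result in your manifests, e.g.:
#   image: docker.elastic.co/elasticsearch/elasticsearch@sha256:<digest-from-above>
```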
5. Init Containers Need Regular Audits
Our init container was setting kernel parameters that:
– Were already set on the host
– May have been necessary years ago
– Nobody had questioned recently
New rule: Audit init containers quarterly. Verify they're still necessary.
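A quick way to generate the audit list, assuming jq is available:

```bash
# List every init container image in the cluster, grouped by pod
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[]
         | select(.spec.initContainers)
         | "\(.metadata.namespace)/\(.metadata.name): \([.spec.initContainers[].image] | join(", "))"' | \
  sort -u
```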
The Bigger Picture
This incident is part of a broader trend in the cloud-native ecosystem:
The Consolidation Era:
– Big Tech acquiring open-source companies
– Monetization pressure from private equity
– Shift from “community-first” to “enterprise-first”
The Community Response:
– OpenTofu (Terraform fork)
– Valkey (Redis fork)
– OpenSearch (Elasticsearch fork)
– New Mastodon chart (Bitnami-free)
The open-source community is resilient. When a vendor tries to close the garden, the community forks and continues.
Our Action Plan
Immediate (Done ✅)
- [x] Fixed Elasticsearch crashes
- [x] Restored search functionality
- [x] Documented everything
- [x] Created migration plan
Short-term
- [ ] Add monitoring alerts for pod failures (see the alert sketch below)
- [ ] Pin all container image versions
- [ ] Deploy OpenSearch for testing
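For the alerting item, a minimal sketch of what we have in mind, assuming a kube-prometheus-stack / kube-state-metrics setup (the metric and CRD come from those projects; the names and thresholds below are ours):

```yaml
# Alert when any container sits in CrashLoopBackOff for more than 10 minutes
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-crashloop-alerts
  namespace: monitoring
spec:
  groups:
    - name: pod-health
      rules:
        - alert: PodCrashLooping
          expr: max_over_time(kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"}[10m]) >= 1
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.namespace }}/{{ $labels.pod }} is crash-looping"
```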
Long-term
- [ ] Migrate to official Mastodon chart
- [ ] Consider CloudNativePG for PostgreSQL
- [ ] Regular dependency health audits
What You Should Do
If you're running infrastructure on Kubernetes:
1. Audit Your Dependencies
# Find all Bitnami images (init containers included -- that's where ours was hiding)
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[].spec | (.containers[]?, .initContainers[]?) | .image' | \
  grep bitnami | sort -u
2. Check Your Helm Charts
# Bitnami chart names rarely contain "bitnami", so grep each release's rendered manifests instead
helm list --all-namespaces -o json | jq -r '.[] | "\(.namespace) \(.name)"' | \
while read -r ns name; do
  helm get manifest -n "$ns" "$name" | grep -q bitnami && echo "$ns/$name uses Bitnami images"
done
3. Create Migration Plans
Don't panic-migrate. Create proper plans:
– Document current state
– Research alternatives
– Test migrations in non-production
– Schedule maintenance windows
– Have rollback procedures ready
4. Learn from Our Mistakes
We've documented everything:
– Migration plan: Step-by-step guide to the official Mastodon chart
– Retrospective: What went wrong and why
– Lessons learned: Patterns to avoid vendor lock-in
Resources
If you're dealing with similar issues:
Bitnami Alternatives:
– PostgreSQL: Official images, CloudNativePG
– Redis: DragonflyDB, Valkey
– Elasticsearch: OpenSearch, ECK
Mastodon Resources:
– New Official Chart
– Migration Guide
Community Discussion:
– Bitnami Deprecation Issue
– Reddit Discussion
Closing Thoughts
This incident reminded us of an important principle: Infrastructure should be boring. We want our database to just work, our cache to be reliable, and our search to be fast. We don't want vendor drama.
The irony? Bitnami made things “boring” by providing convenient, pre-packaged solutions. But convenience can become dependency. Dependency can become lock-in. And lock-in can become a crisis when business models change.
The path forward is clear:
1. Use official images where possible
2. Diversify dependency sources
3. Pin versions explicitly
4. Monitor community signals
5. Always have a Plan B
Our Mastodon instance at river.group.lt is now healthier than before. All pods are green, search is working, and we have a clear migration path to even better infrastructure.
Sometimes a crisis is just the push you need to build something more resilient.
Discussion
We'd love to hear your experiences:
– Have you been affected by the Bitnami deprecation?
– What alternatives are you using?
– What lessons have you learned about vendor dependencies?
About the Author: This post is from the infrastructure team maintaining river.group.lt, a Mastodon instance running the glitch-soc fork. We believe in transparent operations and sharing knowledge with the community.
License: This post and associated migration documentation are published under CC BY-SA 4.0. Feel free to adapt for your own use.
Updates:
– 2025-11-21: Initial publication
– Search index rebuild completed successfully
– All systems operational
P.S. – If you're running a Mastodon instance and need help with migration planning, reach out. We've documented everything and we're happy to help.
Comment in the Fediverse @saint@river.group.lt