From Firebase to PostgreSQL: How We Cut Our Cloud Costs by 80%
A retrospective on a complex Firebase to PostgreSQL migration that transformed our data architecture and cut our cloud bill by 80%.

The finance manager's email arrived on a Monday morning. Subject: "Firebase Bill Alert". The amount exceeded 12,000 euros for the previous month. For an application with several thousand active users, nowhere near the scale of a web giant, something was off. Digging into the billing logs told a clear story: Firestore reads were exploding, Cloud Functions were looping on inefficient queries, and each new feature made things worse.
This alert marked the beginning of a six-month migration to move from Firebase to PostgreSQL, hosted on Supabase. Today, our monthly bill sits around 2,400 euros, with notably better performance and an architecture that finally gives us control. This 80% savings didn't come out of nowhere. It's the result of careful planning, deliberate technical choices, and a few mistakes we'd have preferred to avoid. Reducing infrastructure costs without compromising quality remains one of the major challenges for tech teams.
Why Firebase ends up costing so much
Firebase is seductive at first. You launch an MVP in a few days, the SDKs are well-designed, and real-time synchronization works out of the box. The problem emerges when your application grows and usage patterns become more complex. Firestore charges per read and write. A query returning 100 documents counts as 100 reads. If you refresh a list every minute for 500 simultaneous users, you'll quickly blow through the free tier.
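The per-read billing model is easy to underestimate until you do the arithmetic. A minimal back-of-the-envelope sketch, using an assumed illustrative price per 100,000 reads (not a quoted rate; check current Firestore pricing):

```typescript
// Back-of-the-envelope Firestore read cost for a polled list view.
// The price per 100k reads is an illustrative assumption, not a quote.
const PRICE_PER_100K_READS = 0.06; // assumed rate

function monthlyReadCost(
  users: number,          // simultaneous users polling the list
  docsPerQuery: number,   // each returned document bills as one read
  refreshesPerHour: number,
  hoursPerDay: number,
  daysPerMonth: number,
): number {
  const reads =
    users * docsPerQuery * refreshesPerHour * hoursPerDay * daysPerMonth;
  return (reads / 100_000) * PRICE_PER_100K_READS;
}

// 500 users refreshing a 100-document list every minute, 8 hours a day:
const cost = monthlyReadCost(500, 100, 60, 8, 30);
console.log(cost); // ≈ 432 at the assumed rate, for a single list view
```

One list view, refreshed naively, already costs hundreds per month; multiply by every screen in the app and the bill from the intro stops looking surprising.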
In our case, three anti-patterns were eating into the bill. First, poorly configured real-time listeners. Many React components were listening to entire collections when they only displayed a few items. Second, no server-side aggregation. Firestore doesn't offer native GROUP BY or SUM operations, which forced us to load thousands of documents on the client to calculate totals. Finally, secondary indexes that weren't well understood generated cascading reads for seemingly simple queries.
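The second anti-pattern is worth making concrete. With no server-side aggregation, every total shown in the UI required downloading all matching documents and summing on the client, paying one billed read per document. A sketch with illustrative field names:

```typescript
// What "no server-side aggregation" meant in practice: fetching `docs`
// cost docs.length billed reads, just to display a handful of totals.
// The OrderDoc shape is illustrative, not our actual schema.
interface OrderDoc {
  userId: string;
  amount: number;
}

function totalsByUser(docs: OrderDoc[]): Map<string, number> {
  // This loop runs on the client, after every document has been fetched.
  const totals = new Map<string, number>();
  for (const { userId, amount } of docs) {
    totals.set(userId, (totals.get(userId) ?? 0) + amount);
  }
  return totals;
}

// In PostgreSQL the same result is one server-side query:
//   SELECT user_id, SUM(amount) FROM orders GROUP BY user_id;
```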
We tried to optimize. We refactored listeners, cached certain data, reduced refresh frequencies. Result: we dropped from 12,000 to 9,000 euros per month. Not enough. The real problem wasn't technical but structural. Firestore is designed for specific use cases. The moment you step outside its sweet spot, you pay premium prices. PostgreSQL, by contrast, offered the flexibility our product trajectory demanded and represented a credible Firebase alternative.
Planning a database migration while keeping the lights on
Migrating a production database is a high-wire act. You can't afford extended downtime, can't lose data, and can't break existing features. The first step was mapping precisely what we were migrating. We audited all Firestore collections, identified volumes, read-write patterns, and entity dependencies. This audit revealed surprises: some collections stored obsolete data from months back, others contained hidden duplicates masked by the lack of uniqueness constraints.
We chose a three-phase progressive migration strategy. Phase one: write duplication. We started with the least critical entities, like audit logs and user preferences. Each new write went simultaneously to Firestore and PostgreSQL. This dual-write let us test the relational schema under real conditions without putting production reads at risk. Phase two: read switchover by feature. After duplication had stabilized for two weeks, we migrated reads feature by feature, starting with those generating the least traffic. A feature flag let us roll back in seconds if something went wrong.
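The wiring for phases one and two can be sketched as a thin wrapper over two repositories. `firestore` and `postgres` here stand in for adapters around the real SDKs; all names and shapes are illustrative, not our actual code:

```typescript
// Sketch of the dual-write + feature-flag pattern (phases one and two).
// Both stores are injected behind a minimal, hypothetical interface.
interface PrefStore {
  save(userId: string, prefs: object): Promise<void>;
  load(userId: string): Promise<object | undefined>;
}

function makeDualStore(
  firestore: PrefStore,
  postgres: PrefStore,
  readFromPostgres: () => boolean, // feature flag, flippable in seconds
): PrefStore {
  return {
    async save(userId, prefs) {
      // Phase one: every write goes to both databases. Firestore remains
      // the source of truth, so its failure propagates; a Postgres failure
      // is logged but must not break the request.
      await firestore.save(userId, prefs);
      try {
        await postgres.save(userId, prefs);
      } catch (err) {
        console.error("dual-write to postgres failed", err);
      }
    },
    async load(userId) {
      // Phase two: flip reads per feature, with instant rollback.
      return readFromPostgres()
        ? postgres.load(userId)
        : firestore.load(userId);
    },
  };
}
```

Because reads keep coming from Firestore until the flag flips, a Postgres write bug during phase one degrades the copy, not the product.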
Phase three: historical data migration and decommissioning. This proved to be the most time-consuming step. We wrote migration scripts that transformed Firestore documents into PostgreSQL rows while reconstructing entity relationships. Firestore stores everything as nested JSON. PostgreSQL expects normalized tables with foreign keys. The transformation required decisions about final structure: denormalizing some data for performance, normalizing others for consistency. We spent three weeks refining these scripts before launching the full migration over a low-traffic weekend.
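The core move in those scripts is un-nesting: one Firestore document becomes one parent row plus N child rows linked by a foreign key. A minimal sketch with illustrative shapes (our real entities were messier):

```typescript
// Sketch of the document-to-rows transformation from phase three.
// A nested Firestore "user" document becomes one users row plus one
// addresses row per nested entry, carrying the foreign key.
interface UserDoc {
  id: string;
  name: string;
  addresses?: { city: string; zip: string }[]; // nested array in Firestore
}

interface UserRow { id: string; name: string }
interface AddressRow { userId: string; city: string; zip: string }

function toRows(doc: UserDoc): { user: UserRow; addresses: AddressRow[] } {
  return {
    user: { id: doc.id, name: doc.name },
    // Each nested address becomes its own row referencing the parent.
    addresses: (doc.addresses ?? []).map((a) => ({
      userId: doc.id,
      city: a.city,
      zip: a.zip,
    })),
  };
}
```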
The pitfalls that slowed us down
We discovered the first pitfall during the duplication phase. Firestore is schemaless; PostgreSQL requires strict types. Several collections had fields whose type varied from document to document. An "amount" field sometimes stored a number, sometimes a string, sometimes null, sometimes absent. We had to manually clean these inconsistencies before any migration. We wrote validation scripts that scanned collections to identify these edge cases. Result: 8% of our documents needed manual intervention.
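A validation pass over a polymorphic field like "amount" boils down to: coerce what can be coerced safely, and route everything else to the manual-intervention pile. A sketch of that logic (the coercion rules here are illustrative, not our exact script):

```typescript
// Sketch of the cleanup pass for a field that was sometimes a number,
// sometimes a string, sometimes null, sometimes absent.
type CleanResult =
  | { ok: true; value: number | null }
  | { ok: false; raw: unknown }; // goes to the manual-review pile

function cleanAmount(raw: unknown): CleanResult {
  if (raw === null || raw === undefined) return { ok: true, value: null };
  if (typeof raw === "number" && Number.isFinite(raw)) {
    return { ok: true, value: raw };
  }
  if (typeof raw === "string" && raw.trim() !== "") {
    // Tolerate a comma decimal separator ("12,5" -> 12.5).
    const parsed = Number(raw.trim().replace(",", "."));
    if (Number.isFinite(parsed)) return { ok: true, value: parsed };
  }
  return { ok: false, raw };
}
```

Running a scanner like this over every collection is what produced the 8% figure: documents whose raw value survived no safe coercion.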
Second pitfall: transaction management. Firestore offers transactions, but their scope is limited. We used a lot of batch writes to work around these limitations, creating inconsistent intermediate states on errors. PostgreSQL enables robust ACID transactions, but you have to implement them correctly. We underestimated the application refactoring effort. Certain business operations now required explicit BEGIN/COMMIT, with fine-grained rollback handling. We spent two sprints reviewing the transactional logic across our codebase.
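Most of that refactoring converged on one helper: wrap a business operation in an explicit transaction so any failure rolls back the whole unit. A sketch, with the client injected behind a minimal interface (it mirrors the shape of a node-postgres client, but the interface is illustrative):

```typescript
// Sketch of the explicit BEGIN/COMMIT pattern adopted after the migration.
interface Queryable {
  query(sql: string): Promise<void>;
}

async function withTransaction<T>(
  client: Queryable,
  fn: () => Promise<T>,
): Promise<T> {
  await client.query("BEGIN");
  try {
    const result = await fn();
    await client.query("COMMIT");
    return result;
  } catch (err) {
    // Any failure inside fn undoes the whole unit of work.
    await client.query("ROLLBACK");
    throw err;
  }
}
```

The contrast with Firestore batch writes is the point: there is no partially applied intermediate state to clean up after an error.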
Third pitfall: PostgreSQL production performance. We expected PostgreSQL to be faster. That was true for aggregations and complex joins, less obvious for simple reads. Firestore automatically indexes all fields. PostgreSQL requires you to create indexes manually. Initially, some queries took 3 seconds instead of 300 ms. EXPLAIN ANALYZE became our best friend. We identified expensive sequential scans, created missing indexes, and adjusted some schemas to avoid unnecessary joins. Today, performance far exceeds what we had on Firestore, but it wasn't immediate.
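The sequential-scan hunt is mechanical enough to automate. A small helper one could use to flag them in EXPLAIN ANALYZE output (the plan text below matches PostgreSQL's default format; the helper itself is a hypothetical convenience, not part of our tooling):

```typescript
// Flag sequential scans in PostgreSQL EXPLAIN / EXPLAIN ANALYZE output.
function findSeqScans(planText: string): string[] {
  return planText
    .split("\n")
    .filter((line) => line.includes("Seq Scan on"))
    .map((line) => line.trim());
}

const plan = `
Seq Scan on orders  (cost=0.00..4528.00 rows=120000 width=64) (actual time=0.01..310.2 rows=120000 loops=1)
  Filter: (user_id = 42)
`;
// A hit like this usually means a missing index, e.g.:
//   CREATE INDEX idx_orders_user_id ON orders (user_id);
console.log(findSeqScans(plan));
```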
The last pitfall, often overlooked: the learning curve. Our team knew Firestore and its SDKs well. PostgreSQL meant relearning SQL best practices, understanding query planner subtleties, managing schema migrations with tools like Prisma or TypeORM. We invested in training and organized pair programming sessions to build collective competency. This human friction slows any technical migration—you need to anticipate it in planning.
What the migration changed beyond the budget
The 80% savings was the initial goal. It materialized, but the collateral benefits exceeded our expectations. First, we regained control over our data. PostgreSQL lets you run complex analytical queries directly in the database, without loading thousands of documents on the client. We eliminated several compute microservices that ran constantly to compensate for Firestore's limitations. Result: simpler architecture, easier to maintain.
Next, data quality improved significantly. PostgreSQL constraints (foreign keys, unique constraints, check constraints) forced the team to formalize the data model. No more inconsistent fields or broken references. Schema migrations, though more demanding, now guarantee the structure evolves in a controlled, documented way. Fewer bugs from malformed data. This rigor in data modeling aligns with the principles we apply to ensure a single source of truth in our data projects.
On the development side, productivity initially dropped during the transition, then surged. Developers spend less time working around Firestore limitations. SQL queries are more expressive than Firestore query builders. We could implement features that would have been nightmarish with Firestore: reports with complex aggregations, high-performance full-text search via PostgreSQL extensions, granular permission management with Row-Level Security on Supabase.
Finally, the migration revealed design issues we hadn't seen. Some Firestore collections served as disguised in-memory caches. Others mixed transactional and analytical data. Redrawing the relational schema forced us to clarify each table's responsibility and properly separate business contexts. This reset created healthier foundations for future evolution.
Recommendations for a successful migration
If we did it again, we'd change several things. First, we'd start the migration earlier. We waited until the pain became unbearable. Result: we migrated in a rush, under budget pressure that limited our options. Ideally, we'd have anticipated once Firestore patterns showed their limits, before technical debt accumulated. Measuring the ROI of tech projects helps you anticipate these strategic decisions.
Next, we'd invest more in observability from the start. We added detailed metrics as we went, tracking error rates, latencies, inconsistencies between the two databases. This tooling should have been ready before the first line of migration code. Without fine visibility, you're navigating blind.
We'd also allocate more time to upfront data cleanup. Trying to migrate dirty data into a strict schema is an endless source of complications. Better to spend two weeks auditing and cleaning than two months fixing migration scripts that fail on unforeseen edge cases.
Technically, using a modern ORM (we chose Prisma) was decisive. It manages schema migrations in a versioned way, generates typesafe code, eases team collaboration. Pairing PostgreSQL with Supabase gave us a complete stack (auth, storage, realtime) that compensated for the Firebase features we were dropping. This ecosystem coherence simplified the transition.
Finally, we'd communicate better with the business. The migration impacted the product roadmap for six months. Some features were delayed, others simplified to ease the transition. Getting product managers on board and showing them the medium-term benefits prevents frustration and rushed decisions.
Firebase remains an excellent solution for rapid prototyping or applications with simple, predictable patterns. But as complexity grows, volumes climb, and you need fine control over costs and architecture, PostgreSQL wins. This migration cost us six months of effort and significant learning. It also freed us from an unsustainable budget trajectory and returned control of our data stack. Every context is unique, but if your Firebase bill makes you wince every month, maybe it's time to seriously explore relational alternatives.
Frequently Asked Questions
Why migrate from Firebase to PostgreSQL to reduce cloud costs?
Firebase applies a pay-as-you-go usage model that becomes very expensive at scale, while PostgreSQL offers predictable and controlled costs. By migrating to a self-hosted or managed relational database, you eliminate Firebase's proprietary infrastructure fees and benefit from significant economies of scale, typically achieving 70-80% savings for high-volume workloads.
What are the main challenges when migrating from Firebase to PostgreSQL?
The challenges include data restructuring (Firebase uses NoSQL documents while PostgreSQL is relational), migrating business logic and indexes, and managing application downtime. You'll also need to train your team on SQL concepts and handle architectural differences such as Firestore's limited transaction scope.
How long does it take to migrate an application from Firebase to PostgreSQL?
The timeline depends on your architecture's complexity, data volume, and team size. A medium-scale migration typically takes 2 to 6 months, including preparation, data migration, testing, and performance optimization. Smaller projects can be migrated in a few weeks, while complex systems require several months of planning.
What tools should I use to migrate data from Firebase to PostgreSQL?
Common tools include Firestore Export/Import to export Firebase data in JSON format, then tools like pgAdmin, DBeaver, or custom Python scripts to load the data into PostgreSQL. You can also leverage managed migration services like AWS Database Migration Service if your data flows through AWS.
How can I ensure service continuity when migrating from Firebase to PostgreSQL?
Implement a phased migration strategy using dual-write mode (simultaneous writes to both Firebase and PostgreSQL), conduct thorough testing in your staging environment, and schedule your cutover window during off-peak hours. Version your code to enable quick rollbacks if issues arise, and maintain data synchronization until you've completed the full cutover.