Firebolt: Experience with the Fastest Data Warehouse
Working with Firebolt over the past few months has been nothing short of an adventure — one filled with optimism, head-scratching, performance highs, and some deep engineering sighs. It’s a platform with a lot of promise, backed by impressive speed benchmarks and a modern architecture. But like any early-stage rocket ship, it has its quirks — especially if you’re the kind of engineer who likes to dig deep and move fast. Here’s a breakdown of why Firebolt has been such an interesting ride.
Firebolt Benchmark - https://www.firebolt.io/blog/introducing-firescale
Wirekite Benchmark - https://benchmarks.wirekite.io/series/benchmarks-extract-and-load
No Command Line? No Party
Let’s start with the obvious elephant in the room — no CLI. In an age where every serious tool offers a command line for real engineers to script, automate, test, and break things beautifully, Firebolt chose to go GUI-only. Sure, the web interface looks polished and is friendly to first-timers, but for engineers who like to tinker, build wrappers, or showcase product strengths and limitations through scripts, the lack of a CLI is a red flag. It feels like being asked to drive a sports car using only the touchscreen. There’s just no replacement for a well-documented, responsive, Unix-y CLI, and for many seasoned engineers, if it doesn’t have a CLI, it doesn’t feel real.
Initial Setup
Then there’s the initial configuration — arguably the moment where a tool either wins or loses its future champions. Firebolt makes a poor first impression. The security model is… chaotic. You’re thrown into a maze of owners, users, roles, service accounts, policies, logins, keys, and organization scope. One misstep and you’re locked out. Want to "just play" with the database? Expect to wrestle with permissions first. It's ironic that, in an age of seamless SaaS onboarding, setting up Firebolt feels this difficult. Contrast that with the early days of MySQL, which won hearts (and market share) simply by being easy to install, run, and explore. Firebolt, unfortunately, missed that memo.
Blistering Ingestion Speeds
But once you have it set up, ingesting data into Firebolt is snappy, really snappy. Compared to other cloud data warehouses, Firebolt’s performance here is eye-catching. They’ve made serious engineering choices to optimize ingestion pipelines, and it shows. For teams importing large datasets, this speed is addictive. You find yourself asking: “Why doesn’t every platform work this fast?” It’s a major win, and credit where it’s due — Firebolt shines bright in the lanes it was built for.
Blistering Query Speeds
Firebolt's tagline isn’t marketing fluff — their decoupled compute and storage, combined with advanced indexing (like aggregating indexes and join indexes), really do make queries run much faster than traditional cloud data warehouses. This is especially noticeable with:
Large fact tables
Star/snowflake schemas
High-concurrency workloads
Awesome Indexing
Firebolt brings real indexing to cloud data warehouses, which is pretty rare in the modern OLAP world. It supports:
Primary indexes to organize how data is stored physically
Aggregating indexes for super-fast summary queries
Join indexes to optimize complex joins between large tables
This indexing model allows users to pre-optimize for known access patterns, leading to big wins in performance and cost.
Lower Storage Costs
Firebolt uses F3 (Firebolt File Format), a columnar compressed format designed to optimize for:
Fast sequential scans
Predictable performance
Storage cost reduction
In our benchmarks, storage costs were often lower than Redshift or Snowflake for the same datasets, especially after compaction.
Fast In, Slow Around
But speed has a funny way of revealing tradeoffs. Once you try to update or delete data, the ride slows to a crawl. Firebolt, like many OLAP systems, isn’t designed for transactional workloads — fair enough. But the degree of slowness in operations like UPDATE or DELETE feels surprising and sometimes unworkable. More frustratingly, there’s no MERGE or UPSERT support in SQL — a staple of modern data warehousing for merging new records with existing data. That’s a tough pill to swallow for customers expecting to incrementally update their data warehouse without a full overwrite every time.
Medium is Not the New Big
Another limitation that caught us off guard — the size caps. Firebolt currently tops out at "M" size for compute engines. In a market where vendors offer XL, 2XL, even 4XL clusters for big data workloads, Firebolt’s constraint feels underwhelming. We’re dealing with warehouse-scale data here — hundreds of terabytes, sometimes petabytes. If "M" is your max, you’re signaling to customers that you’re not quite ready for enterprise-scale heavy lifting. It's a curious choice, and one that will need to be addressed if Firebolt is to compete in the big leagues.
Outsourced Migrations
Data migration, often the most painful part of any cloud transition, has been outsourced in Firebolt's case — mainly to tools like Airbyte and dbt. While it’s understandable that Firebolt didn’t want to reinvent the wheel, the performance of these tools, in practice, has been underwhelming. Our benchmarks showed slow throughput and unreliable sync behavior, especially with large datasets and complex schema transformations. In critical pipelines, these lags become show-stoppers. Offloading this responsibility without tight native integration leaves users juggling too many moving parts — and that slows adoption.

