So you want to hook up Metabase to Google BigQuery and actually get some answers out of your data. Good call. This guide is for anyone who’s tired of hand-waving tutorials that skip the tricky parts or gloss over the “gotchas.” Whether you’re a data analyst, a product manager, or just the unlucky one who got stuck with setup, I’ll walk you through each step, flag the common headaches, and help you get real value out of your setup.
Before we start: if you’re not familiar, Metabase is an open-source tool for exploring and visualizing data—popular because it’s pretty easy to use and doesn’t drown you in jargon.
Let’s get your Metabase talking to BigQuery, without the usual frustration.
Step 1: Get Your Google Cloud Project and BigQuery Ready
If you’ve already got BigQuery up and running, you can probably skip most of this. But if you’re new, here’s what you need:
- A Google Cloud Platform (GCP) project.
  - You’ll need Owner or Editor rights, or at least permissions to create service accounts and manage BigQuery.
- Billing enabled.
  - BigQuery has a small free tier, but anything beyond it is billed, so set up billing now to avoid headaches later.
- A dataset in BigQuery.
  - You need at least one dataset with some tables. If you’re just testing, Google offers free public datasets.
Pro tip: If you only have “Viewer” access, you won’t get far. Don’t waste time fighting permissions—get someone with admin rights to help early on.
Step 2: Create a Service Account for Metabase
Metabase needs a way to authenticate to Google. Don’t use your personal account—it’s messy and insecure. Service accounts are the way to go.
- Go to the GCP Console and select your project.
- Navigate to "IAM & Admin" → "Service Accounts".
- Click "Create Service Account."
- Name it something obvious, like `metabase-bigquery`.
- Assign roles:
  - At minimum, you’ll need `BigQuery Data Viewer` and `BigQuery User`. If you want to let Metabase write data (rare, but possible), you’ll need `BigQuery Data Editor`.
  - Don’t go overboard with permissions. Least privilege is safest.
  - If you want Metabase to see all datasets, grant the roles at the project level. If not, grant them only on the relevant datasets.
- Click "Done."
Pro tip: Don’t skip documentation—write down what this service account is for. A year from now, you’ll thank yourself.
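If you prefer the command line, the steps above can be sketched with the gcloud CLI. This is a sketch, not a definitive script: it assumes gcloud is installed and authenticated, and the project ID is a placeholder you’d replace with your own.

```shell
# Placeholders - swap in your real project ID.
PROJECT_ID="my-gcp-project"
SA_NAME="metabase-bigquery"

# Create the service account.
gcloud iam service-accounts create "$SA_NAME" \
  --project "$PROJECT_ID" \
  --display-name "Metabase BigQuery access"

# Grant the two minimum roles at the project level.
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
for ROLE in roles/bigquery.dataViewer roles/bigquery.user; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" \
    --role "$ROLE"
done
```

Granting at the project level (as here) exposes every dataset to Metabase; if that’s too broad, skip the loop and grant access on individual datasets instead (see Step 4).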
Step 3: Create and Download a Service Account Key (JSON)
Now you need credentials for Metabase to connect.
- Find your new service account in the list.
- Click on it, then go to the "Keys" tab.
- Click "Add Key" → "Create new key."
- Choose JSON as the key type.
- Download the key file.
  - Store it somewhere safe. Treat it like a password. Anyone with this file can access your data.
Warning: Never check this JSON file into source control. You’d be surprised how often this happens.
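The same key creation can be done from the CLI. A minimal sketch, assuming the service account from Step 2; the file name and project ID are examples, not requirements.

```shell
# Generate a JSON key for the service account and save it locally.
gcloud iam service-accounts keys create metabase-key.json \
  --iam-account "metabase-bigquery@my-gcp-project.iam.gserviceaccount.com"

# Lock the file down and make sure it never lands in git.
chmod 600 metabase-key.json
echo "metabase-key.json" >> .gitignore
```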
Step 4: Set Up BigQuery Permissions for Your Service Account
Even with the right roles, sometimes you’ll run into permissions issues, especially if your datasets are locked down.
- Double-check that your service account has access to the specific datasets Metabase needs.
  - Go to BigQuery, find your dataset, and look under "Permissions."
  - Add your service account if it’s not there, and give it at least `BigQuery Data Viewer`.
What can go wrong?
- If dashboards show “Permission denied” or queries fail, it’s probably a dataset-level permissions issue, not a Metabase bug.
- If you’re using authorized views or row-level security, you’ll need to grant access to those as well.
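Dataset-level access can also be inspected and patched with the `bq` CLI. A sketch, assuming the project and dataset names are placeholders for your own:

```shell
# Dump the dataset's current access list to a local file.
bq show --format=prettyjson my-gcp-project:analytics > dataset.json

# Edit dataset.json and add an entry to the "access" array, e.g.:
#   { "role": "READER",
#     "userByEmail": "metabase-bigquery@my-gcp-project.iam.gserviceaccount.com" }
# Then apply the updated definition back to the dataset.
bq update --source dataset.json my-gcp-project:analytics
```

This round-trip (show, edit, update) is clunkier than the console but easy to script if you manage many datasets.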
Step 5: Open Metabase and Add BigQuery as a Database
Here’s where things actually start to look like progress.
- Log in to Metabase (usually at `http://localhost:3000` if you’re running it locally, or your server address).
- Go to Admin Settings → Databases → Add Database.
- Choose “BigQuery” from the list.
- Fill in the details:
  - Project ID: This is your GCP project’s ID, not its name. You’ll find it in the GCP console.
  - Dataset ID (optional): Leave blank to sync all datasets, or specify one if you want to limit exposure.
  - Service Account JSON: Paste the entire contents of the JSON key file you downloaded earlier.
  - Processing Location: If you know your BigQuery data region (like `US` or `EU`), enter it. If not, leave it blank and Metabase will try to guess.
- Save. Metabase will try to connect and sync your schema.
What works: Metabase’s BigQuery connector is pretty solid for most use cases. It supports standard SQL, works with legacy SQL if you enable it (don’t unless you have to), and can handle large datasets if you’re careful.
What doesn’t:
- Metabase doesn’t always play nice with BigQuery’s wild west of data types (nested fields, arrays, etc.). Expect some quirks with complex schemas.
- Scheduled queries in Metabase run as the service account, so if you’re using additional access controls (like VPC Service Controls), you might hit walls.
Step 6: Sync and Scan Your BigQuery Schema
Metabase tries to automatically sync your datasets and scan tables for fields and types. But it can get tripped up if you have a lot of datasets, or if your schema is massive.
- Let the initial sync finish. You’ll see progress in the Admin panel.
- If tables or columns are missing, try:
  - Clicking “Sync database schema now.”
  - If that fails, checking your service account permissions again.
- Limit what’s scanned. If you’ve got hundreds of datasets, consider only exposing what you need by specifying a Dataset ID when you add the database.
Pro tip: If your schema changes a lot (tables come and go), resync regularly, but don’t overdo it. Metabase’s field scans run real queries against your tables, and BigQuery bills for the data those queries scan.
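If you’d rather script resyncs than click the admin button, Metabase exposes a REST endpoint for it. A minimal sketch, assuming a Metabase running at `localhost:3000`; the session token and database ID are placeholders you’d look up yourself (the token comes from `POST /api/session`, the ID from the Databases admin page).

```shell
MB_HOST="http://localhost:3000"
MB_SESSION="your-session-token"   # placeholder - obtained via POST /api/session
DB_ID=2                           # placeholder - your BigQuery database's ID in Metabase

# Trigger a manual schema sync for that database.
curl -X POST "${MB_HOST}/api/database/${DB_ID}/sync_schema" \
  -H "X-Metabase-Session: ${MB_SESSION}"
```

Handy to hang off the end of your ETL pipeline so new tables show up in Metabase as soon as they land.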
Step 7: Start Building Questions and Dashboards
You’re in. Now you can start exploring your data:
- Use Metabase’s “Ask a Question” to build simple queries.
- Visualization options are decent, but don’t expect Tableau-level polish.
- For anything complex (think: nested data, massive joins), you’ll probably need to write SQL directly in Metabase.
- Save questions, group them into dashboards, and share with your team.
What to ignore:
- Metabase’s data modeling features are limited. You can rename fields and hide columns, but don’t expect a semantic layer or fancy transformations.
- If you need data cleaning or serious ETL, do it upstream—Metabase isn’t the place.
Common Gotchas and Honest Advice
- Performance: BigQuery charges by data scanned. Metabase’s auto-generated queries can be “expensive” if you’re not careful. Watch your quotas and billing.
- Caching: Metabase caches some results, but BigQuery is pay-per-query. Don’t assume you’re seeing “live” data if you’ve just run a heavy query.
- Row-level security: BigQuery supports it, but Metabase doesn’t natively understand all the nuances. Test with dummy data before rolling out to the company.
- Data freshness: Metabase only shows what’s in BigQuery. If your pipeline is delayed, so is your dashboard.
- User management: Metabase uses its own user system—not Google SSO by default—unless you set up SAML or Google Auth. Keep that in mind if you’re security-conscious.
Pro tip: If something seems buggy, check the Metabase GitHub issues before blaming yourself. Some quirks are just limitations of the connector.
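On the cost point above: before you hand a heavy question to the whole team, you can check what it will cost with a BigQuery dry run, which reports bytes processed without executing (or billing) the query. A sketch; the project, dataset, and table names are placeholders.

```shell
# --dry_run estimates bytes scanned without running the query.
bq query --use_legacy_sql=false --dry_run \
  'SELECT user_id, event_name
   FROM `my-gcp-project.analytics.events`
   WHERE event_date >= "2024-01-01"'
```

If the estimate looks scary, tighten the filters (especially on partition columns) before building a dashboard around it.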
Wrapping Up: Keep It Simple and Iterate
Getting Metabase connected to BigQuery isn’t rocket science, but there are enough moving parts to make it annoying if you miss a step. Stick to service accounts, keep permissions tight, and don’t try to over-engineer your first dashboards. Start simple—get one or two questions working before building out everything. Iterate as you go, and keep an eye on costs.
Remember: most data stacks get complicated because people add too much, too fast. Keep it lean, stay skeptical of “magic” features, and focus on making your team’s questions easier to answer. That’s what actually matters.