Moving Data from MySQL to BigQuery (Without Turning It Into a Side Project) (2026)

You don’t notice the problem right away. Everything runs smoothly in MySQL… until a new report shows up. Then queries slow down, dashboards lag, and you start realizing you’re stretching the database beyond what it’s good at. That’s usually when BigQuery enters the picture.

So the real question becomes: how do you actually move data between them without turning it into a side project?

Let’s walk through the three approaches that people actually use — and where each one makes sense.

First, a quick reality check

Before tools, there are two ideas worth keeping in mind.

ELT > ETL (most of the time)

Old approach: transform before loading. Modern approach: load first, transform later.

BigQuery is built for heavy lifting. MySQL isn’t. So it usually makes more sense to move raw data first and shape it inside the warehouse. Most teams end up here anyway, even if they don’t call it ELT.

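To make the ELT idea concrete, here’s a minimal sketch using the google-cloud-bigquery Python client: land the raw export untouched, then shape it with SQL inside the warehouse. The bucket, project, dataset, and table names are placeholders, and the autodetected schema is just a convenience for a first pass.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP credentials and project

# 1. Load the raw MySQL export into a staging table, as-is.
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders_*.csv",   # placeholder export location
    "my_project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,                      # pin an explicit schema once this stabilizes
        write_disposition="WRITE_TRUNCATE",
    ),
)
load_job.result()

# 2. Transform inside BigQuery, where the heavy lifting belongs.
client.query(
    """
    CREATE OR REPLACE TABLE `my_project.analytics.orders` AS
    SELECT order_id, customer_id, DATE(created_at) AS order_date, total_amount
    FROM `my_project.staging.orders_raw`
    WHERE status != 'cancelled'
    """
).result()
```

The specific query doesn’t matter; the point is that the transformation runs on BigQuery’s compute, not on the MySQL box.
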
CDC is what changes everything

If you only remember one thing from this article, make it this. Batch pipelines reload data. CDC pipelines move only what changed. Instead of re-running queries every hour, CDC reads MySQL’s binary log and streams updates as they happen.

That’s the difference between:

- “data updates every night”
- and “data is actually usable during the day”

Option 1: Manual export and import

This is the simplest way to get data across. Export → upload → done.

When it works

If you just need to move data once, this is fine. Same if the dataset is small or you’re just trying something out and don’t want to spend time setting things up properly.

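To show how little there is to the manual route (and where it starts to hurt), here’s a rough sketch: read the table with pymysql, write a CSV, and push it into BigQuery with the Python client. Host, credentials, and table names are placeholders.

```python
import csv
import io

import pymysql
from google.cloud import bigquery

# Export: pull the table out of MySQL into an in-memory CSV.
conn = pymysql.connect(host="mysql.internal", user="reporter",
                       password="...", database="shop")    # placeholders
buf = io.StringIO()
writer = csv.writer(buf)
with conn.cursor() as cur:
    cur.execute("SELECT id, customer_id, total, created_at FROM orders")
    writer.writerow([col[0] for col in cur.description])   # header row
    writer.writerows(cur.fetchall())                        # fine for small tables; chunk big ones
conn.close()

# Load: push the CSV straight into BigQuery.
client = bigquery.Client()
job = client.load_table_from_file(
    io.BytesIO(buf.getvalue().encode("utf-8")),
    "my_project.staging.orders",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",   # every run reloads the whole table
    ),
)
job.result()
```

Notice that every run moves the entire table, and nothing here handles schema changes or partial failures. That gap is exactly what the next two options try to close.
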
What’s not great

It starts getting annoying the moment you try to reuse it. Something changes in the schema and the next run just fails with no warning. If the load dies halfway through, you’re the one fixing it. And if you put it on a schedule, it slowly turns into that one script nobody wants to touch but everyone depends on.

Honest take

Fine for one-offs. Painful as a process.

Option 2: BigQuery Data Transfer Service

This is Google’s managed way of doing scheduled imports. You configure it once, and it runs on a schedule. If you’re already deep in GCP, this is usually the first thing people reach for.

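One note before the sketch: depending on which connectors you have available, DTS may not talk to MySQL directly; a common pattern is exporting to Cloud Storage and letting DTS load from there on a schedule. Below is a rough sketch with the google-cloud-bigquery-datatransfer Python client. The parameter names apply to the Cloud Storage source and are worth double-checking against the current DTS docs, and every ID is a placeholder.

```python
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()
parent = "projects/my-project/locations/us"   # placeholder project and region

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="staging",
    display_name="Nightly orders import",
    data_source_id="google_cloud_storage",    # DTS loads from GCS here, not from MySQL itself
    schedule="every 24 hours",
    params={
        "data_path_template": "gs://my-bucket/exports/orders_*.csv",
        "destination_table_name_template": "orders",
        "file_format": "CSV",
        "skip_leading_rows": "1",
        "write_disposition": "MIRROR",         # replace the table on each run
    },
)

created = client.create_transfer_config(
    parent=parent, transfer_config=transfer_config
)
print("Created transfer:", created.name)
```

Once this exists, Google runs it on the schedule and surfaces run history and errors in the console, which is most of the appeal.
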
When it works

It makes sense when daily updates are enough and you don’t feel like building or maintaining pipelines yourself.

What’s good

Setup is pretty quick, nothing too fancy. Monitoring is already there, so you’re not wiring alerts from scratch. And you don’t really have to think about infrastructure at all, which is a big plus.

Where it starts to feel limited

It’s still batch at the end of the day. No real sense of what changed between runs. No near real-time updates. And once you need something more custom, you start feeling the limits pretty quickly.

Honest take

It’s a comfortable middle ground. Works well while batch is good enough. Once you need fresher data or more control, you’ll probably start looking elsewhere.

Option 3: CDC (real-time sync)

This is where things start to feel different. Instead of moving full tables every time, you’re only dealing with what actually changed. New rows, updates, deletes — that’s it. No full reloads, no unnecessary queries hitting your database over and over.

How it works (in plain terms)

MySQL already keeps track of every change in its binary log. CDC tools just read from that stream and replay those changes on the BigQuery side. So instead of constantly checking “what’s new?”, you’re just picking up changes as they come in and moving them along.

If you want to see how this looks in practice, here’s a real use case with the Skyvia platform: https://skyvia.com/learn/mysql-and-bigquery-integration

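If you’re curious what “reading the binary log” looks like in code, here’s a stripped-down sketch using the open-source python-mysql-replication package. It assumes the server runs with binlog_format=ROW and the user has replication privileges; connection details are placeholders, and send_to_bigquery is a stand-in for whatever batching and MERGE logic you’d actually run on the warehouse side.

```python
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent,
    UpdateRowsEvent,
    WriteRowsEvent,
)

stream = BinLogStreamReader(
    connection_settings={"host": "mysql.internal", "port": 3306,
                         "user": "repl", "passwd": "..."},   # placeholders
    server_id=4242,          # must be unique among replicas
    blocking=True,           # keep waiting for new events
    resume_stream=True,
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    only_tables=["orders"],
)

for event in stream:
    for row in event.rows:
        if isinstance(event, WriteRowsEvent):
            change = {"op": "insert", "data": row["values"]}
        elif isinstance(event, UpdateRowsEvent):
            change = {"op": "update", "data": row["after_values"]}
        else:  # DeleteRowsEvent
            change = {"op": "delete", "data": row["values"]}
        send_to_bigquery(change)   # hypothetical: buffer, then stream or MERGE into BigQuery
```

A production pipeline also has to checkpoint its binlog position so it can resume after a crash, which is a big part of what managed tools handle for you.
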
Why teams switch to this

Once you’ve used this approach, batch starts to feel… slow.

Do you build it or use something?

You can build CDC pipelines yourself, or you can use something like Skyvia and skip most of that. The idea is the same either way: read the changes from the binary log and apply them to BigQuery.

Quick Comparison

- Manual export/import: one-off snapshots, all maintenance on you. Best for one-time moves, small datasets, and experiments.
- BigQuery Data Transfer Service: scheduled batch, low maintenance, managed by Google. Best when you’re already on GCP and daily updates are enough.
- CDC (self-built or a tool like Skyvia): near real-time, only changed rows move. Best when people actually need fresh data during the day.

What usually goes wrong

No matter which route you take, the same things tend to break. It’s rarely something new — just the usual suspects popping up in slightly different forms:

- Connection issues (a simple retry sketch follows this list)
- Data mismatches
- Slow MySQL
- Data gaps

Nothing here is exotic. It’s just stuff that gets overlooked.

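For the connection-issue bucket, the blunt but effective habit is retrying with backoff instead of letting one dropped connection kill the whole run. A minimal sketch; the wrapped function is a placeholder for whatever step is flaky (the MySQL export, the load job, and so on).

```python
import random
import time

def with_retries(fn, attempts=5, base_delay=2.0):
    """Call fn(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:   # narrow this to connection/timeout errors in real code
            if attempt == attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

# Usage (run_export is a placeholder for the flaky step):
# with_retries(lambda: run_export())
```
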
After the data is in BigQuery

This part matters more than people expect.

Validate early

Don’t assume it worked.

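The cheapest check is comparing row counts on both sides (add a checksum or a max timestamp once the counts agree). A small sketch with placeholder connection details and table names; on a live CDC stream, expect a little drift between the two numbers.

```python
import pymysql
from google.cloud import bigquery

# Count on the MySQL side.
conn = pymysql.connect(host="mysql.internal", user="reporter",
                       password="...", database="shop")   # placeholders
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM orders")
    mysql_count = cur.fetchone()[0]
conn.close()

# Count on the BigQuery side.
bq = bigquery.Client()
rows = bq.query("SELECT COUNT(*) AS n FROM `my_project.staging.orders`").result()
bq_count = next(iter(rows)).n

if mysql_count != bq_count:
    raise RuntimeError(f"Row count mismatch: MySQL={mysql_count}, BigQuery={bq_count}")
print(f"OK: {bq_count} rows on both sides")
```
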
Monitor continuously

Whatever is moving the data, you want to know when a run fails or falls behind before the people reading the dashboards do.

Control costs

BigQuery charges for data scanned. Small changes make a big difference here.

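Two easy wins: land data into partitioned tables so dashboards only scan the days they need, and dry-run queries to see what they would scan before you schedule them. A sketch with the Python client; all names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Partition the landing table by date so typical queries scan days, not history.
client.query(
    """
    CREATE TABLE IF NOT EXISTS `my_project.analytics.orders`
    PARTITION BY order_date AS
    SELECT order_id, customer_id, DATE(created_at) AS order_date, total_amount
    FROM `my_project.staging.orders_raw`
    """
).result()

# Dry-run a query to see how many bytes it would scan (and therefore roughly what it costs).
job = client.query(
    """
    SELECT customer_id, SUM(total_amount) AS spend
    FROM `my_project.analytics.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY customer_id
    """,
    job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
)
print(f"Would scan about {job.total_bytes_processed / 1e9:.2f} GB")
```
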
What to use (and when)

If you strip it down:

- One-time move or a small dataset → manual export/import.
- Daily updates are enough and you’re already on GCP → BigQuery Data Transfer Service.
- People need fresh data during the day → CDC.

Final thought

The method matters, but what really determines success is everything around it: validation, monitoring, and cost control. Most teams don’t switch approaches because they want to. They switch because the current setup stopped being enough.