Drupal delayed by Azure MySQL

Managed Azure Database for MySQL cost us a 50–80% slowdown. After six months of pain, we moved our Drupal 10 site back to a classic VM and halved our page‑load times.
TL;DR: For our transaction‑heavy Drupal site (WordPress and similar CMSs likely fare the same), we learned to avoid Azure's managed MySQL unless you enjoy 503s and coffee‑fuelled debugging nights.
I’m sharing our saga to help you steer clear of the same pitfall — or find your way out if you’ve already stumbled in.
Falling into a managed DB
In late 2024 our small web team moved our large Drupal 10 site over to the recommended standard for Azure cloud hosting, following the best‑practice pitch of "fully managed, worry‑free MySQL at cloud scale!"
Reality begged to differ. Those ambitions stretched into stress‑filled weeks. Editors grumbled that the admin theme now loaded "slower than dial‑up", cron jobs quietly died, and New Relic lit up like a retro equaliser.
As seasoned Drupal folk we blamed everything except the new database: contrib modules, PHP OPcache, custom code. But we just couldn't rule out that something was off in the DB's performance.
Non‑cached pages: the early smoke signal
Early in the migration there were large warning signs: anything not already in cache felt like mud when we profiled 20 top‑traffic URLs plus 10 admin paths using JMeter scripts (50 concurrent users, 5‑minute ramp‑up):
- /news view filter – searching for "disaster risk" on Azure took 5.8 s server response versus 1.68 s on prod (same code, same data).
- Editor paths like /admin/content?type=conference routinely doubled in TTFB, the /node/add page took over 13 seconds to load, and the media modal never loaded at all.
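You don't need a full JMeter rig to spot a gap this wide. A rough sketch of the measurement in Python (URLs are hypothetical; it won't replicate JMeter's concurrency, but it's enough to see a 3× difference between environments):

```python
import time
import urllib.request
from statistics import mean, quantiles

def ttfb(url: str, timeout: float = 30.0) -> float:
    """Seconds from request start until the first response byte arrives."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)  # first byte back means the server finished building the page
    return time.perf_counter() - start

def summarise(samples: list[float]) -> dict:
    """Mean and 95th percentile of a batch of timing samples, in seconds."""
    return {"mean": mean(samples), "p95": quantiles(samples, n=20)[-1]}
```

Loop `ttfb()` over the same top‑traffic URLs on both environments and feed the samples to `summarise()`; comparing the two p95s makes the "felt like mud" impression concrete.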
Pressure to finalise the migration was high and we -- in hindsight, mistakenly -- agreed to move environments and fix the issue later. I still hadn't given up hope that this was purely a matter of priming the site, and that once the site was live it would be more responsive.
I was wrong.
Measuring the pain
As the weeks wore on, the performance issues continued, and we needed to track down where the pain was coming from.
- Enabled Drupal 10's Performance Profiler with query logging at watchdog.DEBUG.
- Alternated load tests hourly between the managed MySQL endpoint and the MariaDB VM to neutralise cache effects.
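With aggregate query times logged from both endpoints, the comparison itself is plain arithmetic. A minimal sketch (our naming; `managed` and `vm` are lists of per‑request aggregate query times in seconds, exported from the profiler logs):

```python
from statistics import mean, quantiles

def compare_endpoints(managed: list[float], vm: list[float]) -> dict:
    """How much slower the managed endpoint is than the VM, plus p95 latencies."""
    slowdown = (mean(managed) - mean(vm)) / mean(vm) * 100
    return {
        "managed_p95": quantiles(managed, n=20)[-1],  # 95th percentile
        "vm_p95": quantiles(vm, n=20)[-1],
        "slowdown_pct": round(slowdown, 1),
    }
```

Alternating the test windows hourly, as we did, matters here: it keeps time‑of‑day load and cache warmth from polluting either side of the comparison.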
The numbers were grim: aggregate query time was 51% slower on Azure DB for MySQL, with p95 latency brushing three seconds. PHP execution time barely budged, confirming the real villain sat behind that "fully managed" façade.
At this point we could prove that the production environment's database was too slow, but we couldn't point out why.
To isolate this we took our Drupal configuration out of the equation and ran a set of raw queries to compare Azure Database for MySQL against two alternatives: MySQL and MariaDB running on dedicated Azure virtual machines. Across every database-heavy test (admin pages, content listings, and background operations) the VM-based databases crushed the baseline, slashing load times by up to 60%.
A massive hat tip to Johan for orchestrating the load tests, crunching the stats, and getting us the raw data we needed to conclusively move away from what was billed as best practice.
The standout winner was MariaDB on a VM, which consistently outperformed both the current setup and VM-based MySQL. Even MySQL on a VM delivered big gains over the managed database, but MariaDB edged ahead in nearly every case, especially on complex pages. Here's a snapshot of the results:
Test Scenario | Azure MySQL (Baseline) | VM MySQL | VM MariaDB | Best Performer |
---|---|---|---|---|
DB01 (Heavy Query) | 27.0 sec | 18.4 sec | 10.8 sec | MariaDB |
PP02 (DB Benchmark) | 4.36 sec | 3.35 sec | 2.24 sec | MariaDB |
UI01 (Admin Page) | 47.5 sec | 40.35 sec | 40.29 sec | MariaDB |
UI02 (Cached Admin) | 0.54 sec | 0.53 sec | 0.41 sec | MariaDB |
ME01 Cold (Content) | 50.6 sec | 42.0 sec | 41.8 sec | MariaDB |
ME01 Warm (Content) | 1.15 sec | 1.00 sec | 1.01 sec | MariaDB |
Digging into the averages for transactions we can see the pain clearly: Azure's managed service lagged 3–4× behind a self-hosted VM for write-heavy workloads. Reads fared a little better, yet still showed a consistent penalty.
CRUD micro-benchmark (seconds)
Operation | Azure MySQL (Before) | MariaDB VM (After) | Local MariaDB |
---|---|---|---|
CREATE TABLE | 0.145 | 0.038 | 0.045 |
INSERT (1k rows) | 3.853 | 2.806 | 1.461 |
SELECT (same rows) | 0.246 | 0.141 | 0.099 |
DROP TABLE | 0.097 | 0.091 | 0.037 |
These last profiling stats were gathered using the excellent Performance Profiler module for Drupal.
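The CRUD micro-benchmark is easy to reproduce yourself. Here's a sketch of the methodology in Python, using the stdlib sqlite3 module as a stand-in so it runs anywhere; point the same four steps at a MySQL/MariaDB connection (mind the `%s` paramstyle there) to compare real endpoints:

```python
import sqlite3
import time

def crud_benchmark(conn) -> dict:
    """Time CREATE TABLE / INSERT 1k rows / SELECT / DROP TABLE, in seconds."""
    cur = conn.cursor()
    timings = {}

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        conn.commit()
        timings[label] = time.perf_counter() - start

    timed("create", lambda: cur.execute(
        "CREATE TABLE bench (id INTEGER PRIMARY KEY, payload TEXT)"))
    rows = [(i, f"row-{i}") for i in range(1000)]
    timed("insert_1k", lambda: cur.executemany(
        "INSERT INTO bench VALUES (?, ?)", rows))
    timed("select", lambda: cur.execute("SELECT * FROM bench").fetchall())
    timed("drop", lambda: cur.execute("DROP TABLE bench"))
    return timings

# In-memory SQLite as the stand-in; swap in your real DB connection to compare.
results = crud_benchmark(sqlite3.connect(":memory:"))
```

Running this against each endpoint from the same client VM keeps network latency comparable, which matters: with a managed service, every round trip crosses a gateway you don't control.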
Measuring the win
It took quite a bit of planning, approvals, doing and testing, but after several weeks we completed the move to MariaDB. It was a win.
Scenario | Before: Azure MySQL (ms) | After: MariaDB VM (ms) | Δ |
---|---|---|---|
DB benchmark (total) | 5 434 | 2 636 | −51% |
Admin » Appearance (cold cache) | 7 291 | 3 072 | −58% |
Admin » Appearance (warm) | 3 008 | 698 | −77% |
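For the record, the Δ column is just the relative change between the two runs:

```python
def delta_pct(before_ms: float, after_ms: float) -> int:
    """Relative change between two timings, rounded to the nearest whole percent."""
    return round((after_ms - before_ms) / before_ms * 100)

delta_pct(5434, 2636)  # the DB benchmark row: -51
```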
Even end‑user stats told the same story: daily 503s dropped 77%, and anonymous page loads shaved ~1.7 s off the top.
As to why Azure Database for MySQL is so slow, it's hard to say; it just doesn't seem designed for sprawling databases with thousands of transactions. If there's a secret lever we've missed, we'd love to hear it: no one has been able to tell us, and our own digging hasn't turned one up.
Bonus: MariaDB is now the RDBMS recommended by Drupal 11, so our pivot wasn't just pragmatic—it aligned us with core guidance.
A diversion in the quest: Admin Menu
Our early profiling runs were skewed by Drupal's admin_menu module, whose overhead clouded our view as we hunted for the real Azure bottleneck.
The trusty contrib module was helpfully doing what we asked: rendering drop‑downs by loading the entire menu tree four levels deep on every editor page. On a chunky local database that's fine; under Azure's MySQL it wasn't.
- +117 extra SELECTs per request. We counted.
- Backend TTFB spiked from 1.1 s → 3.2 s when admin_menu was enabled.
Our quick win during diagnostics was simply setting admin_menu to two levels of depth, which immediately shaved ~2 s off critical admin pages and proved the DB layer was already buckling. That has, however, crippled the usefulness of the menu.
A more holistic fix is in the works: rolling our own admin toolbar that is much simpler in functionality and shows only the links our admins and editors need, driven by a simple taxonomy system (watch for a future post here ...)
But what about PHP file reads?
Yes, PHP hits the file system on every request. On Azure App Service those reads come from a "Premium SSD SMB" share that simply slows down uncached reads (by how much we've yet to fully benchmark).
In our case Drupal's opcode cache masked most of that, so DB latency remained the dominant villain.
We still want to improve this, but for now it's not as crippling as the DB issues were.
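If you do want to put a number on the file‑system penalty before pointing fingers, a rough sketch like this gives a comparable read‑latency figure (the temp file below is a stand‑in; on App Service, point it at a PHP source file on the SMB share and at local disk, and compare):

```python
import tempfile
import time
from pathlib import Path

def read_latency(path: Path, runs: int = 5) -> list[float]:
    """Time full reads of one file; the first run is effectively 'cold'."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        path.read_bytes()
        samples.append(time.perf_counter() - start)
    return samples

# Stand-in file so the sketch runs anywhere.
with tempfile.NamedTemporaryFile(suffix=".php", delete=False) as f:
    f.write(b"<?php echo 'hi'; " * 1000)
samples = read_latency(Path(f.name))
```

Comparing the cold (first) sample on the share versus local disk is the number we still owe ourselves before blaming the file system for anything.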
Takeaways
- Benchmark first, migrate later. Never agree to complete a migration when you have strong indicators of slow non-primed paths.
- If you're planning on using Azure DB for MySQL, take extra care to benchmark thoroughly.
- For all the wins, there are trade-offs: don't forget you'll need to do more of your own backups and continuity planning.
- Always compare against vanilla installs and raw performance. Before drawing conclusions, benchmark your stack against a clean, out-of-the-box install—sometimes custom modules, config, or cloud quirks hide the real culprit.
- Avoid monoliths, separate concerns. Where possible, resist the urge to build everything into a single, sprawling backend. Instead, consider mixing client-side rendering (for dynamic or interactive elements) with slimmer, purpose-built backend pages. This not only lightens the load on your database and server, but also makes it easier to optimize, debug, and scale each part independently.
Ciao Azure DB for MySQL, now you’re just ...