Brent Ozar knows all about going fast – he races cars, and he revs up SQL Servers with breathtaking database performance results daily. In his webcast “How to measure SQL Server” for Quest’s Database Training Days series, Brent reminded us that performance is all about measurement.
Revving up for performance
Brent took the opportunity to practice social distancing and dressed the part by donning a full racing suit and helmet. In some pre-webcast banter, we learned that he had to wire a microphone into the helmet and tape the earbuds to his ears! But we digress. The webcast was all about performance, and there were plenty of car analogies to go around.
Brent’s premises for improving SQL Server performance are:
- Pick metrics to focus on improving
- Measure performance before and after making limited changes (basic scientific method)
- Understand when you have the wrong equipment for what you’re trying to achieve
Database performance tuning metrics
A lengthy discussion of Ford F-150 trucks, Ford Fiestas and some other interesting vehicles illustrated that there are different ways to improve the time it takes to go from 0 to 60 miles per hour. You can decrease the vehicle’s weight, add a bigger engine or start stripping out non-essentials – like the windshield. There’s always a compromise between performance and utility. Databases are just like this – they often get loaded down. That’s when custom performance tuning is needed, and it starts with knowing which metrics to measure and improve.
Brent contends there are three primary metrics you need for performance tuning cars and databases: weight, speed benchmark (like 0 to 60), and how hard the engine (server) is working.
Measuring database size
Weight for SQL Server translates to total database size – how much data you have, usually measured in gigabytes or terabytes. From roughly 1-150 GB, SQL Server Standard Edition should be sufficient. A 150-500 GB database is an easy load for Enterprise Edition. Beyond 500 GB, it starts to matter how much of the data is actively used and how it’s accessed. And anything over 1 TB of OLTP data can be very challenging.
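If you want to weigh your own server, a query along these lines (a minimal sketch against the sys.master_files catalog view, which reports file sizes in 8 KB pages) totals each database’s data and log files in gigabytes:

```sql
-- Rough "weight" check: total size (data + log) of each database in GB.
-- sys.master_files reports file size in 8 KB pages, so multiply by 8 for KB.
SELECT d.name AS database_name,
       CAST(SUM(mf.size) * 8.0 / 1024 / 1024 AS DECIMAL(12, 2)) AS size_gb
FROM sys.master_files AS mf
JOIN sys.databases AS d ON d.database_id = mf.database_id
GROUP BY d.name
ORDER BY size_gb DESC;
```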
Tracking performance speed
The speed benchmark in cars is easy – MPH. For the database, it’s batch requests per second, and it needs to be trended hourly across different time periods, since workloads vary through the day. The more batch requests a given set of hardware has to process, the slower each one tends to run.
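One way to take that reading yourself is a sketch like the one below. Batch Requests/sec lives in sys.dm_os_performance_counters as a cumulative counter that only ever counts up, so you sample it twice and divide by the interval:

```sql
-- Batch Requests/sec is exposed as a cumulative counter, so take two
-- samples and divide the difference by the elapsed seconds.
DECLARE @first_sample BIGINT, @second_sample BIGINT;

SELECT @first_sample = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Batch Requests/sec';

WAITFOR DELAY '00:00:10';   -- 10-second sample window

SELECT @second_sample = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Batch Requests/sec';

SELECT (@second_sample - @first_sample) / 10.0 AS batch_requests_per_sec;
```

To trend this hourly the way Brent suggests, you’d log samples like these to a table on a schedule (a SQL Server Agent job, for instance) rather than eyeballing one-off readings.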
Assessing query workloads
Finally, to understand how hard the database is working, you need to know what queries are currently running and what’s waiting in the queue. This gives you a wait time ratio – basically, how long tasks spend waiting for other ones to complete. Express it as hours of wait time per hour (or seconds of wait time per second) – don’t mix your units of measurement. Once you have a good handle on these stats over time, you can see what affects wait time (for example, more or fewer batch requests, or better- or worse-tuned queries) and then tackle those issues.
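As a starting point for that ratio, here’s a minimal sketch against sys.dm_os_wait_stats. Note that the list of excluded wait types is abbreviated for illustration – many “waits” are just idle background tasks and shouldn’t count against you:

```sql
-- Wait time ratio since the last restart: hours of wait time accrued
-- per hour of elapsed time. The excluded wait types are benign
-- background waits (this exclusion list is abbreviated).
SELECT ws.total_wait_hours,
       si.hours_up,
       ws.total_wait_hours / NULLIF(si.hours_up, 0) AS wait_hours_per_hour
FROM (SELECT SUM(wait_time_ms) / 3600000.0 AS total_wait_hours
      FROM sys.dm_os_wait_stats
      WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
                              N'CHECKPOINT_QUEUE', N'WAITFOR',
                              N'XE_TIMER_EVENT',
                              N'REQUEST_FOR_DEADLOCK_SEARCH')) AS ws
CROSS JOIN (SELECT DATEDIFF(HOUR, sqlserver_start_time, SYSDATETIME()) * 1.0 AS hours_up
            FROM sys.dm_os_sys_info) AS si;
```

Because sys.dm_os_wait_stats accumulates since the last restart, trending it hour by hour means snapshotting it on a schedule and diffing the samples; what’s running and waiting right now is visible in sys.dm_exec_requests.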
Watch the on-demand webinar recording for all of Brent’s sage advice and humor.