Had an interesting session with a client this week who was facing serious SQL Server performance issues. Long-running queries, CPU spikes, and timeouts during peak hours. We started by reviewing their execution plans and found a couple of red flags—missing indexes and suboptimal join patterns. 🔧 What we did: Tuned two critical server-level configurations (one related to MAXDOP, the other to cost threshold for parallelism). Added two well-targeted nonclustered indexes to reduce key lookups and improve seek performance. Made three precise query changes—including replacing scalar UDFs with inline logic and optimizing WHERE clause filters. 🚀 The outcome? The same workload that took minutes now completes in seconds. CPU utilization dropped significantly, and users noticed the difference right away. No hardware upgrade. No magic—just smart tuning. Performance tuning isn’t about throwing everything at the wall. Sometimes, just five well-placed changes can turn a system around. #SQLServer #PerformanceTuning #QueryOptimization #IndexingMatters #DatabaseEngineering #RealWorldSQL
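The scalar-UDF swap mentioned in the post can be reproduced in miniature. This is a hedged sketch using SQLite through Python's `sqlite3` module as a stand-in for SQL Server (the table, column, and `add_tax` function are all invented): a registered scalar UDF pays a per-row callback into the host language, while the equivalent inline expression is evaluated natively by the engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (order_id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO Orders (amount) VALUES (?)",
                 [(i * 10.0,) for i in range(1, 6)])

# A scalar UDF: every row pays the cost of a callback,
# much like a T-SQL scalar UDF forces row-by-row execution.
conn.create_function("add_tax", 1, lambda amount: amount * 1.2)
udf_rows = conn.execute(
    "SELECT order_id, add_tax(amount) FROM Orders ORDER BY order_id").fetchall()

# The same logic inlined as a plain expression, which the engine
# can evaluate natively and the optimizer can reason about.
inline_rows = conn.execute(
    "SELECT order_id, amount * 1.2 FROM Orders ORDER BY order_id").fetchall()

assert udf_rows == inline_rows  # identical results, cheaper execution
```

On SQL Server the analogous rewrite is replacing a T-SQL scalar UDF with the inline expression (or relying on scalar UDF inlining where the engine supports it).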
How to Optimize SQL Server Performance
Explore top LinkedIn content from expert professionals.
Summary
Improving SQL Server performance means making your database run faster and use fewer resources, which leads to quicker responses and smoother workflows for everyone relying on the system. This involves carefully designing queries, choosing the right data types, and managing indexes to ensure that searches and updates happen swiftly and efficiently.
- Specify columns: Always select only the fields you need in your queries instead of using SELECT *, which keeps data transfers small and speeds up processing.
- Adjust data types: Use sensible data types for columns—such as limiting NVARCHAR fields—so indexes work efficiently and the database isn’t burdened with extra overhead.
- Create targeted indexes: Add indexes to columns that are frequently searched or joined, but be careful not to create too many as this can slow down updates and inserts.
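The "create targeted indexes" point above can be checked empirically. A small sketch using SQLite via Python's `sqlite3` (table and index names are hypothetical): EXPLAIN QUERY PLAN shows the plan flipping from a full scan to an index search once the filtered column is indexed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (order_id INTEGER PRIMARY KEY,"
             " customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO Orders (customer_id, amount) VALUES (?, ?)",
                 [(i % 100, i * 1.0) for i in range(1000)])

query = "SELECT amount FROM Orders WHERE customer_id = 42"

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry a human-readable step description
    # in their last column.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan(query)  # no usable index: full table scan
conn.execute("CREATE INDEX idx_orders_customer ON Orders (customer_id)")
after = plan(query)   # index search on the filtered column

print(before)  # e.g. "SCAN Orders"
print(after)   # e.g. "SEARCH Orders USING INDEX idx_orders_customer (customer_id=?)"
```

The exact wording varies by engine and version, but the scan-versus-seek distinction is the same one SQL Server's execution plans show.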
A “Harmless” NVARCHAR(MAX) was burning 15–20% of a client’s server One of our retail clients - one of the largest distributors in their region – came to us with a very familiar complaint: “CPU is constantly high and everything feels slower than it should.” Nothing was outright broken. But the main SQL Server was running hot, and performance under load was clearly degrading. Here’s what our team did: Step 1: Let the data point at the villain We started with our monitoring to find which database was consuming most of the CPU, then drilled into Query Store’s Top CPU Consumers. One query jumped straight to the top. - Executed hundreds of thousands of times per day - Doing a simple = (equality) lookup on a string column - That column was defined as NVARCHAR(MAX) Because it was NVARCHAR(MAX), SQL Server couldn’t use it as a normal index key, so every lookup was far heavier than it needed to be. Step 2: Prove we can safely shrink the data type Before proposing any change, we checked the real data: - Scanned existing values in the column - Confirmed the maximum actual length was under 100 characters That gave us a very comfortable safety margin to change the column from: NVARCHAR(MAX) ➜ NVARCHAR(4000) No truncation risk, and still plenty of headroom for future growth. Step 3: Make it SARGable and indexable Once the column was NVARCHAR(4000), we: - Created a nonclustered index with that string column as the key - Added an included column to fully cover the query and avoid key lookups Now the equality predicate could finally use an efficient seek rather than an expensive scan. 
Step 4: The results (measured, not guessed) Using Query Store stats before and after the change, summarized in our internal reporting, we saw: - Average duration: from 125.97 to 0.11 (per execution) - Average CPU per execution: from 83.20 to 0.08 - Average disk I/O: from 51,200 to 8 That single fix: - Dropped total server CPU by roughly 15–20% - Improved application response times for end users - Gave the client the option to downscale the server and reduce infrastructure costs All from replacing a lazy NVARCHAR(MAX) with a sensible length and a proper index. Tiny schema decisions are never tiny at scale.
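Step 3's covering index can be sketched generically. SQLite has no INCLUDE clause, so this hedged example uses a composite key to get the same covering effect; on SQL Server the second column would go in INCLUDE instead. The `Items` schema is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (id INTEGER PRIMARY KEY, sku TEXT, price REAL)")
conn.executemany("INSERT INTO Items (sku, price) VALUES (?, ?)",
                 [(f"SKU-{i:05d}", i * 0.5) for i in range(500)])

# Key on the searched column plus the selected column, so the index
# alone answers the query -- no lookup back into the base table.
# (SQL Server would express the extra column with INCLUDE.)
conn.execute("CREATE INDEX idx_items_sku_price ON Items (sku, price)")

rows = conn.execute(
    "EXPLAIN QUERY PLAN SELECT price FROM Items WHERE sku = 'SKU-00042'").fetchall()
detail = " ".join(r[-1] for r in rows)
print(detail)  # e.g. "SEARCH Items USING COVERING INDEX idx_items_sku_price (sku=?)"
```

"Covering" is the key word: the equality predicate seeks on the key, and the included/second column means no key lookup is needed, which is exactly what eliminated the heavy per-execution cost in the story above.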
-
With a background in data engineering and business analysis, I’ve consistently seen the immense impact of optimized SQL code on improving the performance and efficiency of database operations. It indirectly contributes to cost savings by reducing resource consumption. Here are some techniques that have proven invaluable in my experience: 1. Index Large Tables: Indexing tables with large datasets (>1,000,000 rows) greatly speeds up searches and enhances query performance. However, be cautious of over-indexing, as excessive indexes can degrade write operations. 2. Select Specific Fields: Choosing specific fields instead of using SELECT * reduces the amount of data transferred and processed, which improves speed and efficiency. 3. Replace Subqueries with Joins: Using joins instead of subqueries in the WHERE clause can improve performance. 4. Use UNION ALL Instead of UNION: UNION ALL is preferable over UNION because it does not involve the overhead of sorting and removing duplicates. 5. Optimize with WHERE Instead of HAVING: Filtering data with WHERE clauses before aggregation operations reduces the workload and speeds up query processing. 6. Utilize INNER JOIN Instead of WHERE for Joins: Explicit INNER JOIN syntax helps the query optimizer make better execution decisions than joins buried in complex WHERE conditions. 7. Minimize Use of OR in Joins: Avoiding the OR operator in joins enhances performance by simplifying the conditions and potentially reducing the dataset earlier in the execution process. 8. Use Views: Create views for frequently used result sets so they can be referenced directly instead of rebuilding the same logic each time; in SQL Server, indexed views can even persist the results. 9. Minimize the Number of Subqueries: Reducing the number of subqueries in your SQL statements can significantly enhance performance by decreasing the complexity of the query execution plan and reducing overhead. 10. Implement Partitioning: Partitioning large tables can improve query performance and manageability by logically dividing them into discrete segments.
This allows SQL queries to process only the relevant portions of data. #SQL #DataOptimization #DatabaseManagement #PerformanceTuning #DataEngineering
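Technique 4 above (UNION ALL over UNION) is easy to demonstrate. A minimal sketch with SQLite via Python's `sqlite3`, using made-up tables: UNION pays a deduplication pass, UNION ALL simply concatenates.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Online (customer_id INTEGER);
    CREATE TABLE InStore (customer_id INTEGER);
    INSERT INTO Online VALUES (1), (2), (3);
    INSERT INTO InStore VALUES (2), (3), (4);
""")

# UNION must sort/hash the combined set to drop duplicates.
union_rows = conn.execute(
    "SELECT customer_id FROM Online UNION "
    "SELECT customer_id FROM InStore").fetchall()

# UNION ALL just concatenates the two results -- no dedup pass.
union_all_rows = conn.execute(
    "SELECT customer_id FROM Online UNION ALL "
    "SELECT customer_id FROM InStore").fetchall()

print(len(union_rows))      # 4 distinct customers
print(len(union_all_rows))  # 6 rows, duplicates kept
```

If the two inputs cannot overlap (or duplicates are acceptable), UNION ALL gives the same answer for free.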
-
These were the optimizations I used to save $5k a month on a SQL query (learning them is pretty useful across many databases). I spend a lot of time optimizing SQL queries for data pipelines and dashboards, and I keep seeing the same performance killers over and over. Queries that should run in seconds take minutes, and teams don't realize why until I dig into the execution plans. Here are the most expensive gotchas I've found: 1. Using SELECT * in production queries - I see this constantly in ETL pipelines. You're pulling columns you don't need, bloating memory usage and network transfer. Always specify exact columns, especially in large tables or frequent queries. 2. Filtering after aggregation instead of before - I find this pattern everywhere. Moving WHERE clauses before GROUP BY can reduce the dataset by orders of magnitude before expensive operations run. 3. Using complex functions in WHERE clauses - Wrapping columns in functions can block query optimizations like predicate pushdown. 4. Unnecessary DISTINCT operations - Teams add DISTINCT as a quick fix for duplicate data, but it masks underlying data quality issues and forces expensive deduplication operations. 5. Using ORDER BY on transform pipelines - While sometimes these are legit for optimization purposes (bucketing or clustering), most times they are no-ops on transformation pipelines and just burn $$. 6. Not understanding your database's query planner - Each platform I work with (Snowflake, BigQuery, PostgreSQL) optimizes queries differently. Learning your specific system's behavior is crucial. The impact compounds quickly in my experience. A query that takes 5 minutes instead of 30 seconds might not seem terrible, but when it runs hourly in production pipelines or as part of a dashboard with 20 charts that 30% of employees refresh daily, you're burning compute resources and slowing down dependent processes. The real win isn't just faster queries.
When I optimize SQL, I'm reducing compute costs, keeping dashboards responsive for end users, and making sure pipelines don't break when data volume doubles. What's your experience with SQL performance issues? Share in the comments, follow for more insights on data engineering, and ♻️ repost if your network could benefit! #SQL #DataEngineering #QueryOptimization #DatabasePerformance #DataStrategy
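Gotcha 2 above, filtering before aggregation, can be sketched like this (SQLite via Python's `sqlite3`; the schema is invented). Both queries return the same answer, but the WHERE version discards rows before the grouping machinery ever sees them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (region TEXT, year INTEGER, amount REAL)")
conn.executemany("INSERT INTO Sales VALUES (?, ?, ?)", [
    ("east", 2023, 100.0), ("east", 2024, 150.0),
    ("west", 2023, 200.0), ("west", 2024, 250.0),
])

# Filter in WHERE: rows for other years are discarded BEFORE the
# GROUP BY runs, so the aggregate touches only the relevant data.
early = conn.execute("""
    SELECT region, SUM(amount) FROM Sales
    WHERE year = 2024
    GROUP BY region
""").fetchall()

# Filtering after aggregation (here via HAVING on a grouping column)
# forces the engine to group every row first, then discard most of
# that work.
late = conn.execute("""
    SELECT region, SUM(amount) FROM Sales
    GROUP BY region, year
    HAVING year = 2024
""").fetchall()

assert sorted(early) == sorted(late)  # same answer, different cost profile
```

On a four-row table the difference is invisible; on a billion-row fact table the early filter is the whole ballgame.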
-
5 𝗦𝗤𝗟 𝗧𝗿𝗶𝗰𝗸𝘀 𝘁𝗼 𝗠𝗮𝗸𝗲 𝗬𝗼𝘂𝗿 𝗪𝗼𝗿𝗸 𝗙𝗮𝘀𝘁𝗲𝗿 𝗮𝗻𝗱 𝗖𝗹𝗲𝗮𝗻𝗲𝗿 Working with SQL doesn’t have to feel like a guessing game. Here are five technical SQL tricks that can help you streamline complex queries and optimize performance: 1. 𝗕𝗿𝗲𝗮𝗸 𝗗𝗼𝘄𝗻 𝗖𝗼𝗺𝗽𝗹𝗲𝘅 𝗤𝘂𝗲𝗿𝗶𝗲𝘀 𝘄𝗶𝘁𝗵 𝗖𝗧𝗘𝘀: Use Common Table Expressions (CTEs) to create temporary result sets within a query. CTEs improve readability and allow you to build logical steps. 𝗘𝘅𝗮𝗺𝗽𝗹𝗲 - WITH SalesSummary AS ( SELECT customer_id, SUM(sales_amount) AS total_sales FROM Sales GROUP BY customer_id ) SELECT * FROM SalesSummary WHERE total_sales > 5000; 2. 𝗔𝗱𝗱 𝗜𝗻𝗱𝗲𝘅𝗲𝘀 𝘁𝗼 𝗕𝗼𝗼𝘀𝘁 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲: Creating indexes on frequently joined or filtered columns can drastically reduce query time by helping the database locate rows faster. 𝗘𝘅𝗮𝗺𝗽𝗹𝗲 - CREATE INDEX idx_customer_id ON Orders (customer_id); 3. 𝗨𝘀𝗲 𝗜𝗡𝗡𝗘𝗥 𝘃𝘀. 𝗟𝗘𝗙𝗧 𝗝𝗢𝗜𝗡 𝗪𝗶𝘀𝗲𝗹𝘆: Understanding join types prevents unexpected data loss. For instance, use INNER JOIN when you only want matching records from both tables, and LEFT JOIN when you want to keep all records from the left table regardless of matches. 𝗘𝘅𝗮𝗺𝗽𝗹𝗲 - SELECT Orders.order_id, Customers.name FROM Orders LEFT JOIN Customers ON Orders.customer_id = Customers.customer_id; 4. 𝗘𝘅𝗽𝗹𝗼𝗿𝗲 𝗪𝗶𝗻𝗱𝗼𝘄 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀: Window functions like ROW_NUMBER(), RANK(), and SUM() enable calculations across a set of rows related to the current row without needing additional joins. 𝗘𝘅𝗮𝗺𝗽𝗹𝗲 - SELECT order_id, amount, SUM(amount) OVER (ORDER BY order_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total FROM Orders; 5. 𝗙𝗶𝗹𝘁𝗲𝗿 𝘄𝗶𝘁𝗵 𝗛𝗔𝗩𝗜𝗡𝗚, 𝗡𝗼𝘁 𝗪𝗛𝗘𝗥𝗘, 𝗔𝗳𝘁𝗲𝗿 𝗔𝗴𝗴𝗿𝗲𝗴𝗮𝘁𝗲𝘀: Use the HAVING clause to filter aggregated data in GROUP BY queries, as WHERE cannot be applied to aggregate functions. 𝗘𝘅𝗮𝗺𝗽𝗹𝗲 - SELECT customer_id, COUNT(order_id) AS total_orders FROM Orders GROUP BY customer_id HAVING COUNT(order_id) > 10; These small adjustments can make a huge difference in query efficiency and accuracy. What’s one SQL trick that’s helped you the most?
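The running-total example from trick 4 runs as written on any engine with window-function support. A sketch using SQLite (3.25 or newer) through Python's `sqlite3`, with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (order_id INTEGER PRIMARY KEY,"
             " order_date TEXT, amount REAL)")
conn.executemany("INSERT INTO Orders (order_date, amount) VALUES (?, ?)", [
    ("2024-01-01", 100.0), ("2024-01-02", 50.0), ("2024-01-03", 25.0),
])

# Each row sees the sum of all amounts up to and including itself,
# ordered by date -- no self-join required.
rows = conn.execute("""
    SELECT order_id, amount,
           SUM(amount) OVER (ORDER BY order_date
                             ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
               AS running_total
    FROM Orders
    ORDER BY order_date
""").fetchall()

for order_id, amount, running_total in rows:
    print(order_id, amount, running_total)
# running totals: 100.0, then 150.0, then 175.0
```

The pre-window-function alternative is a correlated subquery or self-join, which reads the table once per row; the window version reads it once, period.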
-
SQL Server isn’t slow. Your indexing strategy is. After years of working with SQL Server systems, one pattern keeps repeating. Indexes exist. Performance is still bad. Because SQL Server isn’t seeking. It’s scanning. Here are 6 common indexing mistakes quietly killing performance: 1️⃣ Indexing everything “just in case” Too many indexes confuse the optimizer and slow writes. More indexes ≠ more seeks. 2️⃣ Getting column order wrong The right columns in the wrong order turn seeks into scans. 3️⃣ Using functions on indexed columns Wrap a column in YEAR() or LOWER() and SQL Server can’t seek — it scans. 4️⃣ Missing indexes on JOIN columns No usable index means no seek. Just scans, hash joins, and memory pressure. 5️⃣ Ignoring fragmentation and maintenance Fragmented indexes can’t seek efficiently. They behave like scans. 6️⃣ Indexing low-selectivity columns If an index points to half the table, SQL Server stops seeking and starts scanning. Here’s the rule that never changes: Good indexes = index seeks Bad indexes = index scans And scans are where performance goes to die.
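Mistake 3, wrapping an indexed column in a function, shows up directly in query plans. A hedged SQLite sketch via Python's `sqlite3` (`strftime` stands in for T-SQL's YEAR(); the `Events` schema is made up): the wrapped predicate scans, the range rewrite seeks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Events (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.execute("CREATE INDEX idx_events_created ON Events (created_at)")
conn.executemany("INSERT INTO Events (created_at) VALUES (?)",
                 [(f"2024-03-{d:02d}",) for d in range(1, 29)])

def plan(sql):
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Wrapping the indexed column in a function hides it from the index.
wrapped = plan("SELECT id FROM Events "
               "WHERE strftime('%Y', created_at) = '2024'")

# Rewriting the predicate as a range on the bare column keeps the seek.
sargable = plan("SELECT id FROM Events "
                "WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01'")

print(wrapped)   # plan contains SCAN: every row is evaluated
print(sargable)  # plan contains SEARCH: the index narrows the range
```

The same rewrite applies on SQL Server: `WHERE YEAR(created_at) = 2024` scans, while a half-open date range on the bare column seeks.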
-
Before adding hardware resources to your SQL Server, do these 5 things... Adding hardware may seem like a good idea. It could be a quick fix that will make users happy, at least for a while. But it’s not a panacea. It can come with a hefty price tag. So it shouldn’t be the first tool out of the bag. 1) Define the problem. In the SQL Server world, we’ve seen a wide array of “database problems” that weren’t actually “database” problems. So it’s a good idea to actually make sure the issue doesn’t reside somewhere else. For example, could it be a networking issue? Or perhaps a DNS issue? Maybe it’s a Citrix issue? 2) Assess the SQL Server configuration. Once you’ve determined that the problem may actually be with the SQL Server, it’s worth your time to do a high-level assessment of the configuration. This is especially important if you’ve inherited the SQL Server, didn’t set it up yourself, or it’s been a long time since you’ve looked at it. 3) Confirm maintenance plans. SQL Server's cost-based optimization is predicated on having good, accurate, and up-to-date statistics. If the statistics are stale, bad decisions are made. And bad decisions lead to poor performance. Verify that maintenance plans are in place to keep the statistics up to date. Verify that indexes are being maintained as well. 4) Examine SQL Server wait statistics. As SQL Server goes about its job of responding to queries, it actively manages and keeps track of its key resources. For example, memory, CPU, disk I/O and network throughput are all critical resources for SQL Server. Use the DMVs to ask it what it's waiting on when it's waiting. 5) Identify the most resource intensive queries. The DMVs will be your friend in searching for these.
SSMS also has some standard reports built in, including: - Top Queries by Average CPU Time - Top Queries by Total CPU Time - All Blocking Transactions - Service Broker Statistics - Top Queries by Average IO - Top Queries by Total IO - and much more Throwing hardware at a performance problem can be costly. Determine why your performance is suffering so you'll know how to best resolve it.
-
SQL Query Optimization Best Practices Optimizing SQL queries in SQL Server is crucial for improving performance and ensuring efficient use of database resources. Here are some best practices for SQL query optimization in SQL Server: 1). Use Indexes Wisely: a. Identify frequently used columns in WHERE, JOIN, and ORDER BY clauses and create appropriate indexes on those columns. b. Avoid over-indexing as it can degrade insert and update performance. c. Regularly monitor index usage and performance to ensure they are providing benefits. 2). Write Efficient Queries: a. Minimize the use of wildcard characters, especially at the beginning of LIKE patterns, as it prevents the use of indexes. b. Use EXISTS or IN instead of DISTINCT or GROUP BY when possible. c. Avoid using SELECT * and fetch only the necessary columns. d. Use UNION ALL instead of UNION if you don't need to remove duplicate rows, as it is faster. e. Use JOINs instead of subqueries for better performance. f. Avoid using scalar functions in WHERE clauses as they can prevent index usage. 3). Optimize Joins: a. Use INNER JOIN instead of OUTER JOIN if possible, as INNER JOIN typically performs better. b. Ensure that join columns are indexed for better join performance. c. Consider using table hints like (NOLOCK) if consistent reads are not required, but use them cautiously as they can lead to dirty reads. 4). Avoid Cursors and Loops: a. Use set-based operations instead of cursors or loops whenever possible. b. Cursors can be inefficient and lead to poor performance, especially with large datasets. 5). Use Query Execution Plan: a. Analyze query execution plans using tools like SQL Server Management Studio (SSMS) or SQL Server Profiler to identify bottlenecks and optimize queries accordingly. b. Look for missing indexes, expensive operators, and table scans in execution plans. 6). Update Statistics Regularly: a. 
Keep statistics up-to-date by regularly updating them using the UPDATE STATISTICS command or enabling the auto-update statistics feature. b. Updated statistics help the query optimizer make better decisions about query execution plans. 7). Avoid Nested Queries: a. Nested queries can be harder for the optimizer to optimize effectively. b. Consider rewriting them as JOINs or using CTEs (Common Table Expressions) if possible. 8). Partitioning: a. Consider partitioning large tables to improve query performance, especially for queries that access a subset of data based on specific criteria. 9). Use Stored Procedures: a. Encapsulate frequently executed queries in stored procedures to promote code reusability and optimize query execution plans. 10). Regular Monitoring and Tuning: a. Continuously monitor database performance using SQL Server tools or third-party monitoring solutions. b. Regularly review and tune queries based on performance metrics and user feedback. #sqlserver #performancetuning #database #mssql
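Practice 2b's EXISTS pattern can be sketched as follows (SQLite via Python's `sqlite3`, invented schema). Both forms return the same customers; EXISTS can stop probing at the first matching row, which is often cheaper when the child table is large.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO Customers VALUES (1, 'Ann'), (2, 'Ben'), (3, 'Cal');
    INSERT INTO Orders (customer_id) VALUES (1), (1), (3);
""")

# EXISTS can short-circuit as soon as one matching order is found ...
with_exists = conn.execute("""
    SELECT name FROM Customers c
    WHERE EXISTS (SELECT 1 FROM Orders o WHERE o.customer_id = c.customer_id)
    ORDER BY name
""").fetchall()

# ... while IN conceptually materializes the whole subquery first.
with_in = conn.execute("""
    SELECT name FROM Customers
    WHERE customer_id IN (SELECT customer_id FROM Orders)
    ORDER BY name
""").fetchall()

assert with_exists == with_in  # semantically equivalent here
```

Which form is actually faster depends on the engine and data distribution, so the execution-plan check in practice 5 is the tiebreaker.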
-
6 SQL optimizations that actually work (I tested all 20 popular ones) I spent a week benchmarking every SQL "optimization tip" I could find. Most made zero difference. Some made things worse. 𝐇𝐞𝐫𝐞'𝐬 𝐰𝐡𝐚𝐭 𝐚𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐦𝐨𝐯𝐞𝐬 𝐭𝐡𝐞 𝐧𝐞𝐞𝐝𝐥𝐞: 𝟏. 𝐈𝐧𝐝𝐞𝐱𝐞𝐬 𝐚𝐫𝐞 𝐲𝐨𝐮𝐫 𝐛𝐞𝐬𝐭 𝐟𝐫𝐢𝐞𝐧𝐝 (𝐮𝐧𝐭𝐢𝐥 𝐭𝐡𝐞𝐲'𝐫𝐞 𝐧𝐨𝐭) • Index columns in WHERE, JOIN, and ORDER BY clauses • But too many indexes slow down INSERT/UPDATE operations • Monitor which indexes actually get used 𝟐. 𝐒𝐄𝐋𝐄𝐂𝐓 * 𝐢𝐬 𝐥𝐚𝐳𝐲 𝐚𝐧𝐝 𝐞𝐱𝐩𝐞𝐧𝐬𝐢𝐯𝐞 • Pulls unnecessary data across the network • Can't use covering indexes effectively • Name your columns – your database will thank you 𝟑. 𝐅𝐢𝐥𝐭𝐞𝐫 𝐞𝐚𝐫𝐥𝐲, 𝐟𝐢𝐥𝐭𝐞𝐫 𝐨𝐟𝐭𝐞𝐧 • Push WHERE conditions as close to the data source as possible • Filter before JOINs when you can • Smaller datasets = faster everything 𝟒. 𝐔𝐍𝐈𝐎𝐍 𝐀𝐋𝐋 𝐯𝐬 𝐔𝐍𝐈𝐎𝐍 • UNION removes duplicates (expensive!) • UNION ALL keeps everything (fast!) • Only use UNION when you actually need deduplication 𝟓. 𝐄𝐗𝐈𝐒𝐓𝐒 𝐯𝐬 𝐈𝐍 • EXISTS stops at the first match • IN processes the entire subquery • For large datasets, EXISTS usually wins 𝟔. 𝐃𝐈𝐒𝐓𝐈𝐍𝐂𝐓 𝐢𝐬𝐧'𝐭 𝐚𝐥𝐰𝐚𝐲𝐬 𝐲𝐨𝐮𝐫 𝐟𝐫𝐢𝐞𝐧𝐝 • Often a band-aid for bad JOINs • Fix the root cause instead • GROUP BY might be more efficient 𝐁𝐮𝐭 𝐡𝐞𝐫𝐞'𝐬 𝐭𝐡𝐞 𝐤𝐢𝐜𝐤𝐞𝐫... What speeds up one query might slow down another. Your data distribution, table size, and database engine all matter. 𝐌𝐲 𝐫𝐮𝐥𝐞? Test everything. Use EXPLAIN PLAN. Watch those execution times. Keep what works, toss what doesn't. The best optimization is the one that actually makes YOUR queries faster, not the one that got 1000 likes on LinkedIn. What SQL optimization surprised you the most when you actually tested it? 𝐏.𝐒. I share job search tips and insights on data analytics & data science in my free newsletter. Join 16,000+ readers here → https://lnkd.in/dUfe4Ac6
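Point 6, DISTINCT as a band-aid, in miniature (SQLite via Python's `sqlite3`, made-up data): the join fans out rows, DISTINCT papers over it, and rephrasing the question removes the duplicates at the source.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO Customers VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO Orders (customer_id) VALUES (1), (1), (1), (2);
""")

# The join fans out: one row per order, so customer names repeat.
fanned_out = conn.execute("""
    SELECT c.name FROM Customers c
    JOIN Orders o ON o.customer_id = c.customer_id
""").fetchall()

# Band-aid: DISTINCT deduplicates only after paying for the fan-out.
band_aid = conn.execute("""
    SELECT DISTINCT c.name FROM Customers c
    JOIN Orders o ON o.customer_id = c.customer_id
""").fetchall()

# Root-cause fix: ask the question you actually mean (customers WITH
# orders), so no duplicate rows are produced in the first place.
root_cause = conn.execute("""
    SELECT name FROM Customers c
    WHERE EXISTS (SELECT 1 FROM Orders o WHERE o.customer_id = c.customer_id)
""").fetchall()

print(len(fanned_out))                         # 4 rows, duplicates included
print(sorted(band_aid) == sorted(root_cause))  # same answer, cheaper plan
```

This is the "fix the root cause" advice made concrete: if DISTINCT is hiding a fan-out, the semi-join form answers the real question without the deduplication pass.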
-
SQL Query Optimization: Filter First, Then Join! One of the simplest yet most effective SQL optimization techniques is filtering data before performing a join. 🔍 Why does this matter? When working with large datasets, joining tables before applying filters can lead to unnecessary computations and slow performance. Instead, filtering records first reduces the number of rows that need to be joined, making the query faster and more efficient. 💡 Example: Instead of this: SELECT o.order_id, c.customer_name FROM orders o JOIN customers c ON o.customer_id = c.customer_id WHERE c.country = 'USA'; Do this: SELECT o.order_id, c.customer_name FROM (SELECT * FROM customers WHERE country = 'USA') c JOIN orders o ON o.customer_id = c.customer_id; 📌 Key Benefits: ✅ Reduces the dataset size before the join ✅ Improves query execution speed ✅ Optimizes resource usage Small tweaks like this can have a huge impact on performance, especially in large databases! 🔗 Have you used this optimization before? Share your experience in the comments! ⬇️ #SQL #DataEngineering #PerformanceTuning #SQLOptimization
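The two forms above can be checked for equivalence with a small sketch (SQLite via Python's `sqlite3`; the schema is invented). Note that many optimizers already push the filter below the join on their own, so it is worth confirming the plan on your own engine before counting the win.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY,
                            customer_name TEXT, country TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ann', 'USA'), (2, 'Ben', 'DE'),
                                 (3, 'Cal', 'USA');
    INSERT INTO orders (customer_id) VALUES (1), (2), (3), (1);
""")

# Filter written in the outer WHERE, after the join.
plain = conn.execute("""
    SELECT o.order_id, c.customer_name
    FROM orders o JOIN customers c ON o.customer_id = c.customer_id
    WHERE c.country = 'USA'
    ORDER BY o.order_id
""").fetchall()

# Filter pushed into a derived table, so only USA customers are joined.
prefiltered = conn.execute("""
    SELECT o.order_id, c.customer_name
    FROM (SELECT customer_id, customer_name
          FROM customers WHERE country = 'USA') c
    JOIN orders o ON o.customer_id = c.customer_id
    ORDER BY o.order_id
""").fetchall()

assert plain == prefiltered  # same answer either way
```

Same result set either way; the payoff of the explicit rewrite shows up only when the optimizer fails to push the predicate down itself, which EXPLAIN will reveal.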