I’m Coming to #SQLBits! Here are My Sessions.

SQLBits
0

The annual SQLBits conference is April 22-25, and they just announced the session and speaker lineup.

This? It's just coffee, as far as you need to know.This year’s theme is Cartoon – something I love deeply to begin with – and I decided to lean into that theme, big time. Here are my sessions:

Training Day: Dev-Prod Demon Hunters: Finding the Real Cause of Production Slowness – I work with a lot of development teams who really struggle with queries that work just fine in development, but then fall apart in production scenarios. We’ll cover how to quickly identify server & database differences that really matter (and which ones don’t), and what kinds of execution plan icons won’t scale well as data size increases. I themed it with K-Pop Demon Hunters because I can’t stop singing Golden.

Regular Session: Pokémon Battle, Choose Your Index: You Can’t Have Them All – An all-demo session where you’ll have cards that have index definitions on them, and you’ll get a query onscreen. You have to decide which cards to play against each query, and find out which ones SQL Server chooses to play, to learn which indexes are super-effective.

Panel Discussion: 20 Years of the Cloud: What Changed, What Didn’t, and What’s Next – It’s hard to believe it’s been so long, and it’s a good point in time to stop and discuss. I’ll build a panel of your peers to share their thoughts and take your questions.

Regular Session: Watch Brent Tune a Query in SQL Server 2025 – Ever wonder how someone else does it? In this all-demo session, I share a slow query (new each time that I give this popular session), and we talk through fix ideas and gauge their effectiveness.

Sound like fun? Register now for SQLBits before prices go up Feb 9. See you there!


[Video] Home Office Hours, Under Construction Edition

Videos
2 Comments

While in the midst of moving my home office into a new room in the house, I stopped to take your top-voted questions from https://pollgab.com/room/brento. About 18.5 minutes into the recording, my camera overheated because I hadn’t set up the fan on it yet, hahaha.

Here’s what we covered:

  • 00:00 Start
  • 02:28 Born to be glowin: Have you ever added a feature to the First Responder Kit that you had hesitation or regrets on? If so, which one?
  • 05:41 Kulstad: Hi Brent. Have you changed your opinion on using Copilot AI inside of SSMS? Your post from May 2025 suggests we steer clear of it, but I wonder if MS has made any improvements to change your mind since then.
  • 08:47 YouTubeFreeLoader: I have a select query that is intermittently timing out after 30s and DPA is showing 98% of the total wait time for this query is PAGELATCH_SH. What query attributes would I be looking for that would be causing this issue? Any tips to mock up this scenario in SQLQueryStress?
  • 14:03 Now I’m shinin’: What’s your opinion of Azure Horizon DB? Does it stand a chance against Amazon Aurora?
  • 15:27 Fey Mood: Have you ever had any surprises with Dynamic Data Masking?
  • 15:40 Q-Ent: Hi Brent, based on the questions during office hours, how do you think SQL Server professionals’ thinking and maturity have evolved over time?
  • 18:30 jrl: Which types of technology workers do you think are least exposed to AI-related job loss?
  • 18:53 SteveE: In one of your PG courses, you say if you are having to ask developers to change code, then you as a DBA or the DB itself has failed. How is this so when you can’t compensate for the work of others? Some of the queries I see others write give the database no chance at all!

Automatic Stats Updates Don’t Always Invalidate Cached Plans

Statistics
8 Comments

Normally, when SQL Server updates statistics on an object, it invalidates the cached plans that rely on that statistic as well. That’s why you’ll see recompiles happen after stats updates: SQL Server knows the stats have changed, so it’s a good time to build new execution plans based on the changes in the data.

However, updates to system-created stats don’t necessarily cause plan recompiles.

This is a really weird edge case, and you’re probably never gonna hit it, but I hit it during every single training class I teach. I casually mention it each time to the class, and I don’t even take much notice of it anymore. However, a student recently asked me, “Is that documented anywhere?” and I thought, uh, maybe, but I’m not sure, so might as well document it here on the ol’ blog.

To illustrate it, I’ll take any version of the Stack Overflow database (I’ll use the big 2024 one), drop the indexes, free the plan cache. I’m using compat level 170 (2025) because I’m demoing this on SQL Server 2025, and I wanna prove that this still isn’t fixed in 2025. Then run a query against the Users table:
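
Something along these lines will do – this is a sketch, not my exact demo script, so your database name and index list will differ:

    USE StackOverflow2024; /* assumption: use whatever your copy of the database is called */
    GO
    ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 170; /* SQL Server 2025 compat level */
    GO
    /* Drop any nonclustered indexes on dbo.Users here, then clear the plan cache: */
    DBCC FREEPROCCACHE;
    GO
    SELECT *
      FROM dbo.Users
      WHERE Location = 'Netherlands';
    GO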

To run this query, SQL Server needs to guess how many rows will match our Location = ‘Netherlands’ predicate, so it automatically creates a statistic on the Location column on the fly. Let’s check it out with sp_BlitzIndex, which returns a result set row with all of the stats histogram data for that table:
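
If you’re following along, the call looks roughly like this (the database name is an assumption – use whatever you restored yours as):

    EXEC sp_BlitzIndex
         @DatabaseName = 'StackOverflow2024',
         @SchemaName   = 'dbo',
         @TableName    = 'Users';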

sp_BlitzIndex showing statistics

I’m going to scroll down to the Netherlands area, and show a few more relevant columns. You’ll wanna click on this to zoom in if you want to follow along with my explanation below – I mean, if you don’t trust my written explanation, which of course you do, because you’re highly invested in my credibility, I’m sure:

Stats details for Netherlands

Things to note in that screenshot:

  • It’s an auto-created stat with a name that starts with _WA_Sys (system-created)
  • It was sampled: note the fractional numbers for range rows and equal rows, plus notice the “Rows Sampled” column near the far right
  • It was last updated at 2025-12-24 01:10:55.0733333 – which tells you that I’m writing this post on Christmas Eve day, but the timing is odd because I’m writing this at a hotel in China, and my server is in UTC, so God only knows what time it is where you’re at, and no, you don’t have to worry about my mental health even though I’m blogging on Christmas Eve Day, because I’m writing this at the hotel’s continental breakfast while I wait for Yves to wake up and get ready, because we’re going out to Disney Shanghai today, which has the best popcorn varieties of any Disney property worldwide, and you’re gonna have to trust me on that, but they’re amazing, like seriously, who ever knew they needed lemon popcorn and that it would taste so good
  • Note the estimate for Netherlands: Equal Rows is 17029.47

Now let’s say we’re having performance issues, so we decide to update statistics with fullscan. Then, we’ll check the stats again:
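
Roughly like this – again just a sketch, using the same assumed database name as before:

    UPDATE STATISTICS dbo.Users WITH FULLSCAN;
    GO
    EXEC sp_BlitzIndex
         @DatabaseName = 'StackOverflow2024',
         @SchemaName   = 'dbo',
         @TableName    = 'Users';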

The updated stats for Netherlands have indeed changed:

Updated stats on Netherlands

Stuff to note in that screenshot after you click on it while saying the word “ENHANCE!” loudly:

  • Netherlands Equal Rows has changed to 17100
  • The numbers are all integers now because Rows Sampled is the same as the table size
  • Stats last updated date has changed to 2025-12-24 01:17:54.5133333, so about 6 minutes have passed, which gives you an idea of what it’s like writing a blog post – this stuff looks deceivingly easy, but it’s not, and I’ve easily spent an hour on this so far, having written the demo, hit several road blocks along the way, then started writing the blog post and capturing screen shots, but it doesn’t really matter how long it takes because I’m sure Yves will be ready “in five minutes”, and we both know what that means, and by “we both” I mean you and I, dear reader

So if I run the query again, it gets a new plan, right? Let’s see its actual execution plan:

Actual execution plan

Focus on the bottom right numbers: SQL Server brought back 17,100 rows of an estimated 17,030 rows. That 17,030 estimate tells you we didn’t get a new query plan – the estimates are from the initial sampled run of stats. Another way to see it is to check the plan cache with sp_BlitzCache:
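
Any sp_BlitzCache call that surfaces this query’s plan will do – for example, sorting by executions (my choice here, not necessarily what I ran):

    EXEC sp_BlitzCache @SortOrder = 'executions';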

sp_BlitzCache Output

I’ve rearranged the columns for an easier screenshot – usually these two columns are further out on the right side:

  • # Executions – the same query plan has been executed 2 times.
  • PlanGenerationNum – 1 because this is still the first variation of this query plan

Brent in Shanghai Disneyland 2024 (the year before writing this post)

So, what we’re seeing here is that when a system-created statistic gets updated, even if its contents changed, that still doesn’t trigger an automatic recompilation of related plans. If you want new plans for those objects, you’ll need to do something like sp_recompile with the table name passed in.
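
For example, something like this – using the demo’s Users table – marks the related plans for recompilation the next time they run:

    EXEC sp_recompile N'dbo.Users';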

In the real world, is this something you need to worry about? Probably not, because in the real world, you’re probably more concerned about stats on indexes, and your plan cache is likely very volatile anyway. Plus, in most cases, I’d rather err on the side of plan cache stability rather than plan cache churn.

Now, some of you are going to have followup questions, or you’re going to want to reproduce this demo on your own machines, with your own version of SQL Server, with your own Stack Overflow database (or your own tables.) You’re inevitably going to hit lots of different gotchas with demos like this because statistics and query plans are complicated issues. For example, if you got a fullscan on your initial stats creation (because you had a really tiny object – no judgment – or because you had an older compatibility level), then you might not even expect to see stats changes. I’m not going to help you troubleshoot demo repros here for this particular blog post just because it’d be a lot of hand-holding, but if you do run into questions, you can leave a comment and perhaps another reader might be willing to take their time to help you.

Me? I’m off to Shanghai Disneyland. Whee!


Maybe You Shouldn’t Even Be Using Clustering or AGs.

Backup and Recovery
17 Comments

Sandra Delany (LinkedIn) wrote a well-thought-out blog post called, “Should a SQL Server DBA Know Windows Clustering?” She’s got about 20 years of DBA experience, and she works for Straight Path (a firm I respect) as a consultant. You can probably guess based on her background that yes, she believes you should know how to set up, configure, and troubleshoot Windows clustering. It’s a good post, and you should read it.

But… I don’t agree.

Two things. First, I have a problem with any blog post (even my own) that says, “If you call yourself an X, you should definitely know Y.” The term “DBA” encompasses a huge variety of jobs, held by people with a huge variety of seniority levels. If someone’s in their first year in a DBA job, maybe even their fifth, I don’t necessarily expect them to know Windows clustering.

For example, I once (briefly) worked at a global company where the DBAs weren’t even given permissions to glance at the cluster. If the SQL Server service wouldn’t start, the DBAs had to transfer the issue to the Windows team, who handled clustering. The company’s logic was that clustering is a foundational part of Windows, and it doesn’t really have anything to do with SQL Server – and in fact, it’s reused by other cluster-savvy applications. Clustering troubleshooting is a giant pain in the ass that involves Windows logs, DNS, IP addresses, etc. – all things that DBAs aren’t good at, but that the Windows team is (or at least should be).

Which brings me to another blog post…

Chrissy LeMaire (LinkedIn, Bluesky), the creator of the powerful and popular dbatools PowerShell module, wrote a solid blog post called Have You Considered Not Using SQL Server High Availability? You should read that post too.

A lot of you are full time database administrators, and you’re already taking a deep breath in anticipation of yelling back at the screen, but hang on a second.

First, your SQL Servers still all need disaster recovery. That’s different. When we say disaster recovery, we’re usually talking about things like native SQL Server backups, log shipping, storage replication, etc. These are techniques that you can use to rebuild the SQL Server in a different place, like after a ransomware attack or a natural disaster. Nobody’s suggesting you get rid of that.

We’re specifically talking HA features here: failover clustered instances (FCIs), Always On Availability Groups, and database mirroring. Chrissy writes about the operational challenge of those features.

HA features require 2 kinds of labor from 4 kinds of people. Chrissy’s post points out that the teams who manage the databases, the storage, Active Directory/DNS, and networking all have to get involved. I’d add that it requires 2 kinds of labor from all of these people: both planned, and unplanned/chaotic. When there’s a production database outage, there’s a lot of finger pointing, and management demands that everybody drop their work and jump into conference calls. Everybody starts stabbing at various switches and dials, groping blindly and wasting time, until things go back to normal – and the next server has its next emergency.

Small companies don’t have 4 kinds of people. They just have a core handful of IT people who do everything. They’re experts in how the entire stack is configured at this shop, but they’re not experts in all of the underlying technologies. When things go wrong, the work is usually single-threaded, dependent on the one person unlucky enough to be on call. That person is even more likely to stab at various switches and dials, making unplanned, chaotic changes that end up making the environment even less stable over time.

Virtualization provides pretty dang good HA for many failures, in many shops. Properly configured, it protects you from individual host hardware failures and single network cable problems. No, it doesn’t protect you from a bad Windows or SQL Server patch, but in small shops, they don’t do a lot of patching anyway. (Ease up on the outrage – I’ve seen your SQL ConstantCare® data, I know you’re several CUs behind, and I know you’ve muted that recommendation to patch.)

Virtualization HA is easier to manage. It’s just one technology that works to protect all of your VMs. That’s less learning that the overworked staff have to do, and besides, they have to learn it anyway to protect the rest of your servers. As long as they’re using it for everything else, they might as well lean on it to protect SQL Server as well.

So when clients are talking to me about easy ways they can improve their uptime, and they’re already running SQL Server in a VM, we take a step back and look at their virtualization high availability setup. If that’s working well, I explain the part about the net-new planned & unplanned work for all the different roles, and then I ask about their on-call rotations, and their plans to hire more staff in order to handle this net-new work.

If they haven’t been adding staff, and don’t plan to, then I’d rather have the staff focus on improving their ability to rapidly restore SQL Server backups, provision & configure new servers, and just generally automate their environment. That’ll come in handy more often, and help with disaster recovery, ransomware rebuilding, recovering from “oops” delete queries, and just generally reacting to day-to-day issues.

In summary:

  • If you don’t know how to troubleshoot DNS, file shares, Windows cluster validation, or PowerShell, then Chrissy’s blog post is right for you, and you should probably try virtualization for high availability.
  • If your company’s HA/DR needs require Availability Groups and/or failover clustered instances, then Sandra’s blog post is right for you, and you probably have a big learning journey ahead.

[Video] Office Hours in Tokyo, Japan

Videos
0

When I travel, I try to take y’all with me to really pretty or at least interesting locations. On my Tokyo visit last year, I did Office Hours in Akihabara, the electronics & anime shopping hub. Even though I’m not into anime at all and I don’t collect that kind of gear, I wanted to see it because it’s famous among geeks.

Well, this time around, I’m in Tokyo again, but, uh… we’re just hanging out outside my hotel, because I ran out of time to take y’all somewhere cool before I came home. Let’s go through your top-voted questions from https://pollgab.com/room/brento anyway.

Here’s what we covered:

  • 00:00 Start
  • 02:08 Rumi: How does the best PostgreSQL DB management app compare to SSMS 22?
  • 03:09 Mango: Hi Brent, we are running on-prem SQL Server 2019 and Azure SQL. I am trying to get my boss (CIO) to purchase a monitoring tool but he insists that I use Azure Database Watcher (DW). DW can’t monitor on-prem databases. What are your thoughts regarding a monitoring tool vs. DW?
  • 04:00 Feeling Turquoise: How worried should I be if sp_BlitzFirst takes 20 seconds to run?
  • 04:31 Chicago_Dav: How disruptive do you foresee the imminent AI bubble burst being?
  • 08:29 Radek: Hi Brent, I believe I heard you saying that it is a good idea to work with the most expensive thing in the room, so your salary looks small… I wonder why you decided to work with SQL Server, not Oracle?
  • 08:52 MyTeaGotCold: Do you see any value in enabling Accelerated Database Recovery for the sole purpose of making it easy to recover from accidental long-running transactions?
  • 10:07 View master: In SQL views, what is your take on naming convention? Use one? Don’t use one?
  • 11:03 Dopinder: What was your experience like with The Great Firewall of China? Does VPN work with it?
  • 12:42 Dopinder: Our company has no data retention so the SQL multi tenant database is massive. What are the cons / risks of no data retention? Do you run into this much? How do you convince management to apply data retention policies?
  • 14:16 Jan C. de Graaf – Blijleven: Hello Brent. You’ve recommended Itzik Ben-Gan’s books several times. Do you know of any good material, preferably books, for targeting EF on Sql Server? We’re specifically looking for in-depth guidance on todo’s and don’t’s with regard to a relatively busy OLTP system.
  • 15:03 DTech: Hi Brent, my friend is going to work with a new client who is asking which SQL version (2019 or 2022) to use. Which version should he recommend?
  • 15:31 Elwood Blues: What are your thoughts on running a 8 core, 128 GB RAM SQL server for reporting on Azure Local infrastructure? Should I skip azure local and just aim to deploy in an Azure VM? There could be some cost savings with Azure local deployment.
  • 17:14 jrl: The coding models and tools have made rapid progress in recent months. Please talk about your approach to LLM-assisted coding. Can you quantify the productivity gain you and Richie are seeing with LLM-assisted coding, and how much autonomy are you giving these tools?

SQL Server 2025 CU1 is Off to a Rough Start

SQL Server 2025
16 Comments

SQL Server 2025 Cumulative Update 1 came out last week, and I was kinda confused by the release notes. They described a couple dozen fixed issues, and the list seemed really short for a CU1.

However, the more I dug into it, the weirder things got. For example, there were several new DMVs added – which is normally a pretty big deal, something to be celebrated in the release notes – but they weren’t mentioned in the release notes. One of the DMVs wasn’t even documented. So I didn’t blog to tell you about CU1, dear reader, because something about it seemed fishy.

Sure enough, Microsoft just pulled 2025 CU1 and 2022 CU23 because of an issue with database mail:

Database Mail stops working after you install this cumulative update. You might see the following error message:

Could not load file or assembly ‘Microsoft.SqlServer.DatabaseMail.XEvents, Version=17.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91’ or one of its dependencies. The system cannot find the file specified.

If you use Database Mail and already downloaded this update, don’t install it until a fix is available.

If you already installed this update, uninstall it to restore Database Mail functionality.

If you’ve already installed one of the affected CUs, and you need an emergency workaround fix until you can uninstall the Cumulative Update, check out this learn.microsoft.com post. Down in the answers, there’s a workaround with a PowerShell script to poll for unsent emails and send them manually. (I haven’t used this personally so I can’t vouch for it, but hey, any port in a storm.)


Thoughts on PASS’s Bankruptcy, Redgate’s Acquisition, and Private Equity

#SQLPass
36 Comments

A private equity (PE) company recently acquired the majority of Redgate Software, the current owners of the PASS Data Community Summit and SQLServerCentral.

Redgate Software

Everybody in tech has private equity stories, some good, some terribad. Private equity hands money to companies not out of generosity, but because they believe they can turn it into even more money over time. The new PE owners want to pump up the company’s revenues, cut expenses, and raise profits as quickly as possible. That way, better numbers help them turn around and offer the company’s stock to the public, getting their money back out plus a nice profit.

Redgate’s new owners (and they are owners, as noted by the “majority shareholders” wording in Redgate’s announcement) don’t have emotional ties to past baggage, so sometimes this means cutting things that the community loves, but that aren’t meaningfully profitable.

Like, uh… Summit and SQLServerCentral.

I don’t necessarily think these cool communities are on the chopping block, but that is the kind of thing that PE companies do when they’re under pressure to make money quickly. Even if it does happen, it probably won’t happen in the first year, at least. I hope it never does – I hope these community resources stick around forever – but now is a good time to talk about how PASS came under Redgate’s ownership to begin with.

Rewind the clock to 2020.

PASS, the organization formerly known as the Professional Association for SQL Server, had been hosting an annual PASS Summit in November every year. The Summit was PASS’s only meaningful source of revenue, and they were living Summit paycheck to paycheck. I’d been very vocal about the problems with PASS, but I think it’s safe to say that nobody, including the Board of Directors members, was really happy about PASS’s finances.

Like Maximus said in Fallout, “Everyone wants to save the world. They just disagree on how.”

In 2020, the pandemic struck, and they had to cancel the November 2020 event. It was the straw that broke the camel’s wallet. Shortly thereafter, PASS announced they were bankrupt. After decades of serving the community, all that was left was millions of dollars in debt and a few assets, none of which were worth anywhere near enough to pay off the debt:

  1. An email list with community members, speakers, volunteers, and in-person event attendees
  2. Videos from many prior in-person and online events
  3. The brands of PASS and SQLSaturday, and their respective domains & web sites

I was worried about who would get those assets.

When the bankruptcy was announced, I immediately contacted my attorney and explained the situation. I told him I wanted to make an opening offer for PASS’s assets, but only if the deal included all 3 of the above asset groups. COVID had actually been really beneficial to my business’s war chest because I’d been well-positioned to teach online classes, so I could afford to pay cash for the assets and still have a war chest to start hiring staff to plan for more events. My attorney contacted PASS to get the ball rolling.

I wanted to move quickly because I was worried that a bad actor would get those assets. I could imagine a slimy vendor buying them, sticking their ads all over them, trying to sell the recordings as current knowledge, spamming the email list with junk, putting on a “conference” that was really just vendor spam, etc.

A couple of weeks later, PASS replied with a one-slide PowerPoint deck:

SQLSaturday Asset Offering

Note that the slide only mentions SQLSaturday – not PASS, not the email lists, not the session recordings, etc. Those “assets” were practically worthless. I told my attorney no, I wasn’t interested, but to keep me posted if anything about PASS itself came back. It never did.

PASS had already secretly been acquired by Redgate.

A couple weeks later, we found out Redgate had already bought PASS’s best assets behind the scenes.

My first reaction: I was really, really pissed off. Clearly, the PASS Board of Directors (who had a relationship with Redgate) had done fast, secret backroom deals with Redgate to sell/give those assets to Redgate, without telling the public, without giving the public a chance to bid. Depending on how it went down, that may have been illegal.

However, my first reaction only lasted a couple of minutes.

What was I gonna do, sue? Who would I even sue? PASS was defunct, and the assets had already changed hands. Any lawsuit would just slow down the process of keeping the community alive. Besides, I liked and trusted Redgate, and I thought they’d do their damnedest to keep the community up and running, without turning it into a spam-fest.

In hindsight, I’m glad I didn’t get PASS.

Had I won the bidding (that never happened), my plan had been to hire a few core people to manage the brand, and farm the rest of the event planning work out to a professional event management company. (No, not the prior management company because they only ran PASS and nothing else – I’m talking about a large event management company that manages tons of events, not just IT related.)

The years 2021 and forward would have been really hard, ugly work for me. I’ve already tried to build a company with employees before, and failed miserably, because I suck at management. Nothing in my personal skills library has changed to make me better at that task.

It would have been really hard because the (crappy) existing site was DotNetNuke, and I would have wanted that modernized into something like WordPress. That’s a monstrous amount of work.

I would have had to devote a ton of my time and resources to marketing the event. What online platform would we have picked? When would have been the right time to start the in-person event back up? What would the right prices be? How would we select sessions? All of that is a ton of work.

And finally, my name had a polarizing reaction amongst some community members. Some folks would have abandoned the conference just because they didn’t like my blogging or presenting style, and I’m sure there would have been folks at Microsoft who would have refused to participate. The community needs to bring people together rather than split them up.

PASS Data Community Summit logo

So in hindsight, I’m really glad I didn’t succeed at my attempt to get PASS and keep it going, and I’m glad Redgate got it. Redgate has invested a lot of time and money, way more than I’d have been able to dedicate without abandoning other work & personal stuff in my life. (I’d already written this post when Louis Davidson announced that the topic for today’s T-SQL Tuesday #194 would be admitting past mistakes, and I thought about submitting this post to that event, but I don’t know that there’s a lot y’all can learn from that particular mistake of mine, ha ha ho ho.)

But… the Summit is already getting cut back.

Before the private equity acquisition, Redgate had already announced that the 2026 Summit was being cut down to just 3 days: Monday pre-cons, and then Tuesday/Wednesday regular sessions. This is the first time I can remember the Summit not being a full week long.

At the time of the announcement, Redgate wrote:

We’ve heard your feedback: many data professionals are facing increasing challenges in securing time and budget for week-long events. By offering a broader range of events in more locations, we remain committed to supporting and empowering the data community.

The first sentence was probably true.

The first part of the second sentence was definitely not true. In 2025, Redgate offered the 5-day Summit plus three 2-day events in New York, Dallas, and Utrecht. The 2026 lineup represents a cutback in every way: fewer events, fewer cities, and a shorter Summit.

But the last part of the second sentence was true: “we remain committed to supporting and empowering the data community.”

And that’s awesome.

Redgate was still putting on two 2-day events, plus a 3-day Summit. In 2026, that’s hard as hell to do, and I respect them for it. AI is wreaking havoc on all kinds of business models, and the world is a chaotic, unpredictable place right now. Travel is hell for a lot of people – me and Yves included, having just gotten back from Asia and having gone through some nerve-wracking moments at border crossings.

Redgate was taking a real financial risk by continuing to run these events. There are real risks that attendance goes down, which means revenue goes down, which also means sponsorship from other vendors goes down. Redgate (and their new owners) have real financial risks at stake to try to keep this community going.

But you’ll notice that I used past tense a lot here, as in “Redgate was” – because Redgate has new owners. I hope Bregal Sagemount continues to maintain the PASS Summit and SQLServerCentral into 2027, but we won’t really know until the 2026 Summit, when the event has always announced the next year’s location and dates. Fingers crossed, because I’d really like to keep seeing y’all there.


[Video] Office Hours in Kyoto, Japan

Videos
3 Comments

Kyoto feels like a timeless, classic version of historical Japan with quiet tree-lined streets, giant temples, and bubbling brooks. It’s the exact opposite of last week’s experience in noisy Osaka! Last year, I filmed in Kyoto outside a temple, and this year, I’m at another temple just after New Year’s, the time when Japanese folks traditionally go to visit their many gorgeous temples.

Let’s chill here and go through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:52 Hany: Hi Brent, have you ever seen Table Partitioning cross the finish line for database performance problem? If no, then what could be the best case that suits Partitioning?
  • 02:45 DBA in the Mirror: What problem does Database Mirroring solve that AGs without clusters do not?
  • 03:42 Gonna be golden: How many DBAs do you typically see at a shop when running SQL AG? Can it be done with one DBA?
  • 05:23 SQLSerenader: Hi Brent, is there any benefit to using distributed availability groups (2 two-node AGs in different DCs) for DR over just adding an additional third asynchronous node to an already existing two-node AG? The third node would be in a different data center.
  • 06:55 AccidentalDBA: What causes SQL to hold a buffer latch? I’m working with devs to fix a query that causes a buffer latch on one table and holds it until we restart the instance. Trying to rewrite and add an index in hopes of fixing (MSSQL 2019).
  • 08:08 Dopinder: Do you like any of the AI engines for decoding the SQL Server xml deadlock graph and making remediation recommendations?
  • 08:58 MyRobotOverlordAsks: My company announced during some AI training that within the next 12 months we won’t be writing any of our own code. Instead, we’ll be babysitting agents. What’s your opinion on this from a DB dev / DBA POV? MSSQL Dev tends to lag, so I’d personally be surprised.
  • 09:55 Brave Sir Robin: What’s your opinion of the open source SQL monitoring system, SQL Watch?
  • 11:19 DBAGreg14: We have a sp that runs 6-8x per min, getting compile waits resulting in occasional rolling blocking from 10 secs to 2 mins in duration. How can I find the root cause about why the SP is getting compile waits? no specific stmt is blocking, getting the plan for the whole SP blks.

SQL ConstantCare® Population Report: Winter 2026

It’s time for our quarterly update of our SQL ConstantCare® population report, showing how quickly (or slowly) folks adopt new versions of SQL Server. In short, people are replacing SQL Server 2016 and 2017 with 2022!

  • SQL Server 2025: 0% (but it’s 10 servers)
  • SQL Server 2022: 29%, up from 25% last quarter
  • SQL Server 2019: 43%, no change
  • SQL Server 2017: 9%, was 10%
  • SQL Server 2016: 10%, was 13%
  • SQL Server 2014 & prior: 6%, no change
  • Azure SQL DB and Managed Instances: 2%, no change

So, is SQL Server 2022 looking better after all, like it’s going to take the throne from 2019? To understand, let’s jump back to 2023, when I wrote:

SQL Server 2017 is now the version that time forgot: folks are just skipping past that version, standardizing their new builds on 2019 rather than 2017. There wasn’t anything wrong with 2017, per se, but it just came out too quickly after 2016. These days, if you’re going to do a new build, I can’t think of a good reason to use 2017.

SQL Server 2017’s adoption rate had peaked at 24% in 2020, about 3 years after its release. Today, it’s 3 years after 2022’s release, and SQL Server 2022’s adoption rate looks like it’s still climbing – but it has a new competitor, 2025. I’m guessing we’ll see one more adoption rate bump for 2022, and then it’ll start falling again as 2017 did, unable to defeat the powerhouse that is SQL Server 2019.

I’ve grouped together 2014 & prior versions because they’re all unsupported, and 2016 will join them quickly in July when it goes out of extended support. (I can’t believe it’s been almost 10 years already!) Here’s how adoption is trending over time, with the most recent data at the right:

SQL ConstantCare Population Report Winter 2026

The new stuff continues its steady push from the top down, squeezing out the old versions that are out of support.


Announcing the 2026 Data Professional Salary Survey Results, And They’re Great!

Salary
11 Comments

The results are in! You can download the raw data in Excel for all 10 years and do some slicing and dicing to find out whether you’re underpaid, overpaid, or what it looks like for folks who are out there looking for work.

This year, I added a couple of new items to the survey asking about folks who are unemployed and currently looking for work. In hindsight, I wish I’d done this long ago so that we’d have a baseline to know whether things have gotten better or worse. Ah, well – the best time to plant a tree was 20 years ago, and the second-best time is now. Let’s dig into the data and see what we find.

First off, salaries for DBAs (of any type) employed in the United States are up over last year, big time:

Salaries for employed DBAs in the US

Woohoo! We’re in the money, and it’s the same for outside the United States – big jump here too:

DBA salaries outside the US

Inflation might be up, depending on your salary or your country’s currency purchasing power, but I’m not going into those economic details – I’ll leave that to the more ambitious readers. My job here is just to host the survey, gather the data, and hand it to those smart folks to do their analysis.

If we look at non-DBAs worldwide, the story is still in the black, but not quite as pretty:

Salaries worldwide, non-DBAs

The primary database for all job types, worldwide, is still overwhelmingly SQL Server:

Responses by Primary Database

Most of the responses are still DBAs, but there’s a good chunk of architects, developers, engineers, and managers too:

Responses by Job Title

About the New Unemployment Questions

So, who’s looking for work, and how long have they been looking?

Unemployed Responses

The response counts here are fairly low, so I don’t wanna read too much into this, but I’d encourage you to check out the max months that people have been looking for work. These numbers are big.

Granted, some job seekers have very specific requirements, like a specific geographic area or high salary requirements relative to their peers, but… so do some of you employed folks, dear reader. I just want you to be careful when you see these skyrocketing salary numbers that you don’t get too greedy when talking to management about the raise numbers you’d like to see. If your numbers get too high, you might paint a target on yourself – and if you end up looking, it could be an awful long time before you find another job this cushy.


SQLBits Session Voting is Open Now! Wanna See My Sessions at Bits?

SQLBits
2 Comments

SQLBits 2026

I’d love to come over and speak at SQLBits this April, but for that to happen, the organizers need to hear from you that you’d come see my sessions.

If you’re going to Bits, you’ll need a login, and then you can vote on these sessions – but only if you want to see them! I’m not asking for non-attendees to vote to skew the results – that’s not fair to the other speakers.

This year, the theme is cartoons, and you can see how some of my sessions were heavily influenced by that, hahaha. Here’s what I submitted this year:

All-Day Workshop – Dev-Prod Demon Hunters: Finding the Real Cause of Production Slowness – Brent Ozar loves K-Pop Demon Hunters, so this demo-driven class hunts down the real causes of production slowness. Watch dev and prod face off as Brent uncovers golden clues in plans, settings, and data that explain why the same query behaves so differently.

Calling AI from T-SQL: Real-Life Lessons from sp_BlitzCache – Brent Ozar cuts through AI hype with real-world lessons from calling LLMs inside sp_BlitzCache. Learn how to pick models, write effective prompts, pass the right context, handle failures, and use AI safely from your own stored procedures.

Pokémon Battle, Choose Your Index: You Can’t Have Them All – In this demo-heavy, interactive session, Brent Ozar turns index tuning into a Pokémon-style battle. Using the Stack Overflow Users table, the audience plays index “cards” against real queries to see which designs win, lose, or backfire – and why you can’t have them all.

The Big Red Button: How to Use sp_Kill – In this quick talk, Brent Ozar introduces sp_Kill, a safer alternative to restarting SQL Server during emergencies. Learn when to push the big red button, how to identify runaway sessions, and how to kill the right queries while logging everything for later analysis.

Watch Brent Tune a Query in SQL Server 2025 – Ever wonder how somebody else does it? In this all-demo session, watch over Brent Ozar’s shoulder while he takes a slow query, analyzes it, and iterates over several improvements. He’ll explain his thought process and get feedback from the audience.

Panel Discussion: 20 Years of the Cloud: What Changed, What Didn’t, and What’s Next – Brent Ozar leads a panel of experienced data professionals reflecting on 20 years of the cloud. With no vendor marketing, they discuss what actually changed, which problems never went away, and what they expect to face in the next 20 years based on real-world experience.

Panel Discussion: AI in Your Career: Sidekick, Hero, or Villain? – Brent Ozar leads a panel discussion about how AI is reshaping data careers. Panelists share how they decide what to delegate to AI, how impacts differ by role, warning signs of over-automation, and how to intentionally cast AI as a sidekick, hero, or villain in your career.

Then, go through the other 700+ sessions to vote on others you’d like to see too. I hope to see you at Bits!


Should There Be Ads in SSMS?

Some folks are seeing an ad at the top of their SSMS v22, like this one reported in the feedback site:

PUNCH THE MONKEY TO WIN A $200 DISCOUNT

Today, Microsoft’s using this to test ads for an upcoming conference. Interesting that they give deeper discounts to Reddit readers as opposed to SSMS users, but I digress. Tomorrow, they might be pushing SQL Server 2025 upgrades, or Microsoft Fabric, or Copilot, or whatever.

Your first reaction is probably to say, “We’re paying massive amounts of SQL Server licensing money to Microsoft – get the spam out of SSMS.” And that would be fair, and I would certainly understand. After all, SQL Server Management Studio can only be used with paid products: SQL Server and Azure SQL DB. It’s not like SSMS connects to MySQL, MariaDB, Oracle, etc. So for me, right there, that means it shouldn’t be showing ads.

However, in today’s economy, even paying customers see ads these days. Every now and then, I’m forced to install a consumer version of Windows on someone’s laptop or desktop, and I’m horrified by how many ads are there, and I’m reminded of why I switched to Macs two decades ago. So set that one aside, and keep considering the discussion.

You think the question is, “Should SSMS have ads?” and the answers are “Yes” and “No.” In that world, sure, duh, everybody would choose “No.” However, that’s not really the right set of answer choices. Ads can bring in revenue, and when there’s revenue, that money could (theoretically) be used to fund development on the application.

The answer choices aren’t yes and no.

When a company like Microsoft asks, “Should SSMS have ads?” their answer choices look more like:

  • Yes, put ads in SSMS, and every 3 months, Microsoft builds the current top-voted SSMS feature request, whatever it is
  • No ads – but also no new features

That changes the discussion, doesn’t it? There are some pretty doggone cool feature requests out there, like easily exporting query results to Excel, clicking on column headers to sort the data client-side, keeping actual plans enabled for all tabs, and more. Wouldn’t it be cool to start getting more of those features delivered?

But the devil is in the details. Once SSMS ads go in place, Microsoft can say things like:

  • “The SQLCon ad didn’t pay us much, but we’ve got a really good ad offer for boner pills, so we’re running that one.”
  • “We decided to grow the ad size to 25% of SSMS’s screen real estate, and it’s an animated banner now.”
  • “Sorry, this quarter’s ads didn’t perform well, so we can’t afford to dedicate enough dev time to build the top-voted feature because it’s too hard.”
  • “While we wait for your query results, we’re going to play a video.”
  • “We only get paid for the video completions, so we’re going to hold your query results until after the 15-second video completes.”

It’s a really slippery slope, and it goes downhill fast.

Once a vendor starts showing ads to users – especially paying users – they’ve already decided that they don’t value the user’s time or screen real estate. They will continue to make uglier and uglier decisions. They might justify the ads by saying they’ll need the money for feature development today, but never disclose what percentage of the revenue actually goes towards development – and they’ll revise that number down over time.

So should SSMS have ads? In a perfect world, where Microsoft discloses how much revenue those ads are bringing in, and makes a commitment to users about how much of the revenue will be spent on feature development – we could have a discussion.

But that ain’t the world we live in.

In this world, Microsoft as a company has long ago decided that even paying consumers should see ads. I think it’s probably a lost battle, but if you think there’s still a chance that we could keep ads out of SSMS, you can vote on the SSMS ad feedback item here.


New Year’s Task: Quick, Easy Prep for a Raise

You’re busy, so I’ll keep this short.

Several months from now, you’re gonna have a salary review. Your manager is going to ask what you’ve been up to, and you’re not going to have a lot of great answers. Copilot isn’t tracking your successes for you.

To help Future You™, take a moment to log sp_Blitz to a table in the master database right now:
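
Something like this works – the output table name here is just an example, so pick whatever you like:

    EXEC sp_Blitz
         @OutputDatabaseName = 'master',
         @OutputSchemaName   = 'dbo',
         @OutputTableName    = 'BlitzResults'; /* example table name */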

Then select the data back out just to see what you’re dealing with:
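
Assuming the example table name from above:

    SELECT *
      FROM master.dbo.BlitzResults
      ORDER BY CheckDate, Priority;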

Several months from now, when your manager asks what you’ve been doing, run sp_Blitz to table again, and compare the two sets of results. Any warning that appeared in January, that no longer appears later, means you’ve improved the server. Count up those results and present ’em to management to show how much better/safer/faster you’ve made the environment.

That’s It! Go Do It.

But there are going to be a few inevitable comments, so lemme head off some of them.

“But I heard tables in master are bad!” They are in the sense that if something goes wrong with the server, or you fail over somewhere else, you’re not going to recover the contents of those tables. In this case, I don’t care. If we lose the whole server, well, comparisons between the old one and new one aren’t really relevant.

“But I wanna put it in a different database!” Okay, do that. I don’t care where you stick it. Let your freak flag fly.

“But I wanna centralize the data.” You’ll notice that the output table results include a server name, and you can use that to put data from multiple servers into the same table. Doing that is left as an exercise for the reader.

“But I wanna automate this.” Sure, schedule a job to run every month or quarter, whatever corresponds best to your HR schedules, outages, uptime, whatever. Some people schedule it for SQL Server Agent startup.

“But I’m in Azure SQL DB, and I don’t have Agent jobs.” Well, if you’re a production DBA, and your data lives in Azure SQL DB, I’m gonna save you a little time: your annual review at this company isn’t going to go well for long. You probably wanna start shifting your career into a different type of DBA.


Database Development with AI in 2026

AI
14 Comments

This seems like the appropriate first BrentOzar.com blog post in the year 2026, eh?

In the PollGab question queue for Office Hours, MyRobotOverlordAsks asked a question that merited a full blog post answer:

My company announced during some AI training that within the next 12 months we won’t be writing any of our own code. Instead, we’ll be babysitting agents. What’s your opinion on this from a DB dev / DBA POV? MSSQL Dev tends to lag, so I’d personally be surprised.

If this sounds completely alien to you, check out this blog post by developer Armin Ronacher. In it, he discusses how 2025 was the year when he reluctantly shifted his development process to the point where now he spends most of his time doing exactly what MyRobotOverlordAsks’ company is proposing: rather than writing the code directly, he now asks AI tools to build and debug things for him, and he spends his time tweaking what they produce. (Update 2026/01/07: for another example, check out Eugene Meidinger’s post on his uses of AI.)

Inside BrentOzar.com, Richie Rump is the only developer/architect/builder, and he relies heavily on AI day to day. In our company Slack chat room, he’ll frequently describe something he asked AI to build for him, and how it worked out. It’s not a rarity anymore – it’s a regularity.

So, is this going to affect database work?

I think there are 4 major factors that influence my answer.

First, the SQL language is extremely stable and well-documented. This means AI should have a much easier time with database development than it does with application development, which relies on constantly-changing frameworks that don’t have a huge volume of train-worthy articles and samples out online. If this was the only factor involved with AI doing database development, we would see sweeping adoption of AI database tools overnight, game over.

However, second, your existing databases probably aren’t stable or well-documented. The old ones are a hot mess of bad table designs, cryptic column names, and joins across all kinds of disparate systems with completely different histories and naming conventions. The documentation has never kept up with the reality of what’s in the database, and even if it did, the documentation is scattered across all kinds of locations, in different formats. AI can do its best job trying to decipher what the hell is going on, but… it’s not going to be an easy or accurate process.

Third, some database development demands a high degree of security and precision. If AI builds a to-do list app for you, and the resulting app doesn’t have exactly the layout you want, or doesn’t function quite right on some browsers, or has a few unpredicted side effects where someone can see someone else’s tasks now and then, it’s not the end of the world. Similarly, AI-driven queries often don’t need precision either: after all, your managers were slapping together half-baked NOLOCK queries for decades and calling them a “data warehouse.” That’s fine, and AI stands a great chance of taking over that work. However, some database development requires exacting precision: calculating tax returns, assigning doctors to patients, shipping expensive products to customers, tracking customer balances, etc. Mistakes here are dangerous, and expensive to fix.

Finally, database development tooling is terrible. A lot of for-profit companies compete to build the best development tooling. They make money selling licenses to it, plus use it as a gateway to their cloud services. However, database dev tooling is mostly an afterthought. There’s no one kick-ass IDE that can simply add AI tooling, and revolutionize database work.

There’s an interesting sub-section of the tooling market, though: reporting tools. For-profit companies compete to build the best reporting tools, and those tools usually handle data from lots of different source systems (SQL Server, Oracle, MySQL, Postgres, etc.) The reporting market is cutthroat competitive, and those vendors will be the ones racing to integrate AI first.

What this means for AI in 2026

Because of those factors above, I think that in the year 2026, there’s a pretty good chance that people who work on reporting queries – and preparing data for reports, like data engineers and data warehouse developers – will be the ones at the forefront of AI usage in databases. The reporting and ETL tooling vendors will be the ones who can most quickly move their users into the role of agent steerers rather than query writers.

In addition, people who are building new apps from the ground up in 2026 will likely be using AI right from the start to generate the database schema they’re working with. If they do a good job of that, they’ll keep the relevant context in memory as they work, and it’ll be easy for AI to then generate any necessary queries that an ORM can’t handle by itself. Ground-up new apps built in 2026 won’t yet have had time to build up the kind of complexity that would force a human to write queries from scratch; the humans will only need to steer and correct the AI-generated queries.

Those folks who write the reports and build the ground-up apps are also close in proximity to management. Your company’s execs will see the success stories of how their reports get to market faster thanks to AI, and execs will read that as confirmation that AI is the way of the future, even for database jobs.

However, people who are doing mission-critical, secure, accurate database development on large existing databases (and pools of databases) will still struggle in 2026 due to undocumented databases and bad tooling. 2026 won’t be the year that these people can relax back in their chairs, write prompts, and switch to an advisory, steering role rather than a hands-on, typing-furiously role.

In addition, 2026’s newly-built apps will gain complexity and edge cases over time, into 2027 and 2028. The AI tooling used to generate the database schema will change, and the context of the original design will be lost to the sands of time – because even in 2026, people don’t document their databases worth a damn. I would love to see people use extended properties to save the context and meaning of each table, column, constraint, relationship, etc., but that hasn’t caught on yet. That means as the new apps age, the humans will gradually have to take the wheel again to write increasingly complex queries and nightly jobs.
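
If you’re curious what that looks like, here’s a hedged example – the table, column, and description are made up for illustration:

    EXEC sys.sp_addextendedproperty
         @name = N'Description',
         @value = N'Free-text location the user typed into their profile; not validated',
         @level0type = N'SCHEMA', @level0name = N'dbo',
         @level1type = N'TABLE',  @level1name = N'Users',
         @level2type = N'COLUMN', @level2name = N'Location';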

I do hope that steering-only future is coming! I would love for everyone’s existing databases to be easy to understand, clearly documented, secure, and to have easily accessible tooling that understood those databases, plus offered AI agents to take control. I have no idea if 2027 will be the year for that, or 2028. But it won’t be 2026.

And I shouldn’t have to mention this, but I feel like I do: none of these words were penned by AI. I do use AI from time to time, like answering Office Hours questions to test AI’s quality back in 2023, and I use AI to write queries. I just draw the line at using it to build content for the blog or training classes, because after all, why would you come here to read something you could generate yourself for free? You come here to get quality human insight (well, at least human insight) that you can’t get from an LLM, so I don’t see the point in using AI to generate content. (On the other hand, LinkedIn is rapidly becoming a cesspool of AI-generated “insight” that people are trying to pass off as their own thought leadership, which is a shame, because for a while there, I was actually enjoying reading LinkedIn content.)


[Video] Office Hours in Osaka, Japan

Videos
0

The Dotonbori area of Osaka is like a video game version of Japan: wild, over-the-top neon signs, awesome street food, cool street markets, Kabuki theater on a revolving stage, and more. It’s very touristy, but as touristy places go, it’s fantastic. Last year when I was here, I filmed Office Hours at the river, but this time around let’s go into one of the side streets where the food vendors gather.

Oh and hey, while we’re here, let’s cover your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:11 Elwood Blues: What’s your take on setting a user’s default database when creating a new login? The previous DBA always used master. Is this bad?
  • 01:57 retired_DBA: Hi Brent, what do you tell a client who is asking you to solve an issue but objects to the use of any/all diagnostic tools for fear they will ‘negatively impact the performance of the system’, more or less leaving you to shoot in the dark?
  • 04:58 MyFriendAsks: I recently got hired as a SQL Developer, which is a shift from being a .NET/Web Developer. What action points do you think I should focus on and subjects to brush up on? (Obviously I binge your YT shorts, and will be poring over your blog too)
  • 05:46 TokTik: In my Top 10 waits for the output of sp_BlitzFirst, the VDI_CLIENT_OTHER wait appears at the top. I couldn’t find much information online, but I understand it’s related to AGs. Is my secondary AG taking a long time to get updates, or can I skip this wait and move on to the next?
  • 06:44 Elwood Blues: Is batch requests per second or transactions per second a better metric for tracking how busy a SQL Server is?
  • 07:00 Ejae: Do you think blogging as a medium is on its last legs?
  • 09:53 Fr: Hi Brent, Who should own CDC tools in an organization DBAs or BI teams?

Are You Looking for Work? Underpaid? Overpaid? Let’s Find Out.

Salary
4 Comments

Every year, I run a salary survey to help folks have better discussions with their managers about salaries, benefits, and career progression.

This year, the survey is for unemployed folks, too! The first question has a new option for “Unemployed (and looking).” Note that if you’re retired, thanks, but you can skip this survey. Congrats on the retirement though!

Take the survey now here.

The anonymous survey closes Sunday, January 11th. On Wednesday the 14th, I’ll publish the overall responses on the blog in Excel format so you can do slicing & dicing. The results are completely open source, and shared with the community for your analysis. You can analyze ’em now mid-flight – do not email me asking for edit permissions, buddy – but I’d wait until the final results come in. I’ll combine them into a single spreadsheet with the past results, and publish those here.

Thanks for your help in giving everybody in the community a better chance to talk honestly with their managers about salary.


Your Favorite Posts From This Year

Company News
3 Comments

Reading is fundamental, and fundamentally, here are the 2025 blog posts that y’all read the most this year. (This list doesn’t include the most popular timeless posts overall.)

10. Free Spring Training Webcasts – I’ve noticed that people have near-zero interest these days in paying to attend live online classes, but they’re still really interested in free ones, and they still pay for recorded classes. Weird. I’m guessing that’s a Zoom fatigue thing.

9. The 6 Best Things Microsoft Ever Did to SQL Server – Hey, the worst things list didn’t make the top 10 cut! I love it. Y’all are glass-half-full people.

8. SQL Server 2025 Makes Memory Troubleshooting Easier – With a new sys.dm_os_memory_health_history DMV.

7. What’s Coming in SQL Server 2025: DMV Edition – Leading up to releases, it’s fun to go spelunking through the system tables and undocumented stuff to figure out what Microsoft hasn’t announced yet, including limitations to features.

6. SQL Server 2025 Is Out and Standard Goes Up to 32 Cores, 256GB RAM – I had most of this post written months in advance, but the 32 core 256GB RAM thing was a pleasant surprise.

5. Microsoft Introduced AI Integrations for SQL Server – Specifically, an MCP server that you can install. Looking back in the rear view mirror, I don’t think this is going to be really popular, and it’s a shame because it adds some pretty cool capabilities.

4. How I Configure SSMS v21 – Which is already outdated because v22 has a totally different tools-options menu. (sigh) I suppose that means I’ve got an opportunity for a popular post in 2026.

3. Fabric Is Just Plain Unreliable, and Microsoft’s Hiding It – And after publication, Microsoft did the right thing and put up a (mostly) honest status page.

2. What’s New in SQL Server 2025 – Except this isn’t the post you’re expecting. This is the April 1 version.

1. SQL Server Reporting Services is Dead. Is SSIS Next? – You know what’s really funny? After this went live, I got a lot of private off-the-record emails from Microsoft folks saying, “SSRS isn’t dead, it just basically got a name change.” But nobody, not one single soul, emailed me about the latter part of the blog post title. Hmm.

Most-Commented 2025 Posts

Here are the posts y’all commented on the most this year – and it’s a different list! It’s always hard to predict which ones are gonna light the comment section on fire, and I should be doing more of the contest posts.

10. SQL Server 2025 is Out (35) – lots of discussion about the new features, like Standard Edition going up to 256GB RAM.

9. How I Configure SSMS v21 (37) – which, like I said, is already outdated, because v22 is out with a totally different tools-options menu.

8. SSRS is Dead. Is SSIS Next? (37) – the writing is on the wall, folks.

7. The 6 Best Things Microsoft Ever Did to SQL Server (45) – let’s be honest: this is a pretty fun database platform to work with.

6. Query Exercise: Return Routes in the Right Order (51) – this one stemmed from a client problem that is so much harder than it looks at first glance.

5. The 8 Worst Things Microsoft Ever Did to SQL Server (71) – and of course y’all wanted to bring up more bad memories, heh.

4. Why Aren’t People Going to Local and Regional In-Person Events Anymore? (86) – I shared a few reasons and y’all brought up more good points.

3. Why I Mention My Sexuality and Gender (97) – just once a year (or less), I bring it up, and every year, the comment section gets rowdy immediately. I don’t expect that will change during my lifetime, and that’s why it won’t stop, ha ha ho ho.

2. Make the Comment Section Look Like a Junior DBA’s Search History (139) – lots of laughs.

1. We Hit 50,000 YouTube Subscribers (176) – another contest. I’m proud of that milestone because I never say stuff in the videos like “like and smash that subscribe button”, hahaha.


[Video] Office Hours in Zhengzhou, China

Videos
2 Comments

I really wanted to film Office Hours on the Great Wall of China, but the instant I got out there, I realized it was a lost cause. That place was SO overwhelming – tons of people, frigid winds, heights, totally overwhelming in every way. So, instead you get a nice peaceful park in downtown Zhengzhou, Yves’ hometown, and it’s one of those nice 360 degree videos where you can turn the camera around to see the goings-on. Let’s go through your top-voted questions from https://pollgab.com/room/brento:

Here’s what we discussed:

  • 00:00 Start
  • 01:37 CornFieldDBA: Hi Brent. We are considering installing SQL Server 2022 alongside SQL Server 2019. According to Microsoft this is supported. Have you run two versions of SQL Server on the same host? Anything to be worried about?
  • 03:10 AG Avoider: I’ve heard you mention synchronous SAN replication across DCs, such as what Pure Storage offers. Is this different from multi-subnet failover clustering?
  • 05:15 newbie dev dba: hi brent, my boss wants us to encrypt some columns in some of our transaction tables. My colleague brought up using the Column Master Key (CMK) method for this. Have you had any experience with it, or seen any of your clients use it? Just curious to hear your thoughts
  • 05:45 SQLHCDBA: Do you have documentation on how to configure SSRS with an Always On Availability Group database, and any best practices?
  • 07:51 Elwood Blues: Which is worse: installing Cumulative Updates as soon as they are released, or installing them months later? Which is safer for an Azure SQL VM?
  • 08:44 Land of the Lost: What’s your opinion of SQL Server 2025’s mirroring to Fabric OneLake feature?
  • 10:12 Wade Wilson: Will you be attending Microsoft SQLCon in Atlanta next year?
  • 13:07 MyTeaGotCold: Do you know of any monitoring tools that still get regular updates? Spotlight and Sentry look dead. I’m not sure about Idera.
  • 15:36 Elwood Blues: Our commercial SQL monitoring software caused corruption in our database. Have you seen or run into this?
  • 16:53 BrentIsTheMan: Hi Brent, with the new ADR functionality for tempdb in SQL Server 2025, I wonder if the version store for tempdb will clear out if I have long-running transactions in other DBs? If you don’t wanna answer this because I can test it myself, what is your favorite tequila drink?
  • 17:39 Captain Hurry-Up-and-Commit: My queries sometimes slow down due to large Version Store crawling, but I’ve noticed that forcing a clustered index scan on simpleton SELECTs during these times gets better query performance than the preferred covering index seek. What trivial knowledge am I missing?
  • 20:55 flyboy91901: We have multiple databases that now have to integrate with each other. The issue is that many of these databases at customer sites have different collations, which is problematic. What is your approach to changing collation for every aspect of the DB: tables/columns, functions, and views?

Known Issues So Far in SQL Server 2025

SQL Server 2025
16 Comments

Whenever a brand spankin’ new version of any software comes out, there are bugs, and SQL Server is no exception. This has led to a mentality where folks don’t wanna install a new version of SQL Server until the first couple of Cumulative Updates come out, hopefully fixing the first big round of bugs.

So… are there bugs this time around?

Microsoft maintains a list of SQL Server 2025 known issues, and honestly, they’re not bad! There’s stuff in here that would have sucked to be the first person to discover, but no showstoppers as far as I’m concerned. Some of the highlights:

On readable secondaries, you can get access violations if you enable Query Store without disabling PSPO. The fix is to disable PSPO.
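
If you hit that one, the setting involved is Parameter Sensitive Plan Optimization, which is a database scoped configuration. Here’s a minimal sketch of turning it off, assuming you run it in the affected database (check Microsoft’s known issues page for any extra steps on the secondary side):

-- Sketch: turn off Parameter Sensitive Plan Optimization for the current database
ALTER DATABASE SCOPED CONFIGURATION
    SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = OFF;

-- Confirm the change (look for the PSPO row)
SELECT name, value
FROM sys.database_scoped_configurations;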

Auditing events don’t write to the security log. The workaround is to write to a file instead, or like I’ve always told clients, if you need your auditing to be legally defensible, you need to use a third party appliance that sits in between SQL Server and the rest of the network, capturing all network packets.
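
If you just need the file workaround for the built-in auditing, here’s a minimal sketch; the audit name, path, and size limits are placeholders, not recommendations:

USE master;

-- Sketch: point a server audit at a file instead of the Windows security log
CREATE SERVER AUDIT ExampleAuditToFile
TO FILE (FILEPATH = N'D:\Audits\', MAXSIZE = 256 MB, MAX_ROLLOVER_FILES = 10)
WITH (ON_FAILURE = CONTINUE);

ALTER SERVER AUDIT ExampleAuditToFile WITH (STATE = ON);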

Full text search won’t fully index big plaintext documents larger than 25MB. The workaround is to edit the registry to remove the 25MB limit.
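
Before touching the registry, it might be worth checking whether you even have documents over the limit. A quick sketch (the table and column names here are made up, so swap in your own full-text indexed table):

-- Sketch: find documents bigger than 25MB (dbo.Documents and DocumentBody are hypothetical)
SELECT DocumentId,
       DATALENGTH(DocumentBody) / 1048576.0 AS SizeMB
FROM dbo.Documents
WHERE DATALENGTH(DocumentBody) > 25 * 1024 * 1024
ORDER BY SizeMB DESC;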

It won’t install without TLS 1.2. I’ve had a couple of clients whose sysadmins had a little too much time on their hands, and insisted on turning off TLS 1.2 everywhere because “it’s deprecated.” For now, the fix is… re-enable TLS 1.2, do the install, and then turn it back off again.

It won’t install if you have >64 cores per CPU. This has been a problem with 2022 as well, and I’m simplifying that for the sake of the headline: the technical details are a little more complicated. The most common fix I’ve seen is to use virtualization, and configure the VM’s socket/cores setup so that you have more sockets, but fewer cores per socket.
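
If you want to see what layout SQL Server currently perceives before and after reshaping the VM, here’s a quick check that works on recent versions:

-- Sketch: how many logical CPUs, sockets, and cores per socket SQL Server sees
SELECT cpu_count,
       socket_count,
       cores_per_socket,
       numa_node_count
FROM sys.dm_os_sys_info;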

PowerShell doesn’t work if you enforce strict encryption. The fix: turn off strict encryption. I find this amusing because the kinds of proactive people who use PowerShell are also the kinds of proactive people who would enforce strict encryption.

SQL auth logins are slower, although you probably won’t notice unless you’re skipping connection pooling and tracking login times at scale, as Aaron Bertrand notes.

There are others in the full list, and surely there are more that are currently being investigated and haven’t been fully solved/documented yet, but overall – you know what, this isn’t bad! Knock on wood, this is shaping up to be one of the better, more reliable releases so far. Have you hit any bugs that aren’t in the list above? Let your fellow readers know in the comments.


Thoughts On the Unemployed Salary Survey Responses

Salary
9 Comments

In this year’s data professional salary survey, I added a new response type for folks who are unemployed and looking for data work. The survey is still in progress, but you can view the data as it’s coming in, and I wanted to take a few minutes to read through the responses of folks who are unemployed.

First, how many unemployed folks have taken the survey:

Responses by employment

It’s only 32 responses out of 575 altogether – about 5% – so I’m hesitant to read too much into their individual responses. However, let’s filter the data for just the unemployed folks, and then look at their job titles to see if a particular job type stands out:

Unemployed job titles

Okay, deep calming breath: the number 8 does stand out head and shoulders above the rest, but remember, we’re only talking about 32 survey responses overall from people looking for jobs. Plus, remember that my audience is DBAs – here are the percentages of job titles across the EMPLOYED audience:

Employed job titles

See, the numbers line up – 25% of the unemployed responses are from general DBAs, but also 27% of the employed responses – so they’re right in line with the audience overall. It’s not like a higher percentage of DBAs are unemployed than the other job roles – but then again, keep in mind that it wouldn’t take much of a response turnout to skew these numbers.

Let’s ask a different question: amongst the unemployed responses (for all job titles), how many years of experience do they have?

Years of experience for unemployed responses

A good chunk of the unemployed responses have 10-15 years of experience, and they were making six figures. It’s not just junior folks who are looking for work. When these senior folks email me for job hunting advice, I say the same thing over and over: get back in touch with everyone you’ve ever worked with before, via their personal emails and phone numbers, and catch up. Tell them you’re in the market. You shouldn’t be ashamed – a lot of their companies may be hiring, and they’re faced with a deluge of unqualified applicants using AI garbage to get past interview screening. By offering yourself as a candidate, you’re doing them a favor! They know you and trust your work.

Moving on, let’s switch over to the employed folks and look at what their job plans are for 2026:

Job plans for 2026 from employed folks

A whopping 20% (7 + 13) plan to change employers! That’s a huge number because it affects the 5% of the audience that’s already looking for work – they’re all competing for the same jobs at new companies. That’s going to be a tough market.

One other note while I’ve got the survey data open – where are companies hosting their data these days?

Where the data lives

Most companies are using a hybrid approach of their own data centers, rented data center space, and the cloud. It’s a wide mix though, and hopefully someone from the community takes this raw data and visualizes it in a better way, heh.

Speaking of which – help contribute to the raw data! Fill in the annual Data Professional Salary Survey now.