Don’t ignore the warning signs (in execution plans)

Some things in life we ignore. For example, the “check engine” light. That’s just there as a suggestion, right?

But when you’re performance tuning, you can’t afford to ignore the warning signs. I can’t count the number of times that I’ve found the issue with a query by looking at the warnings.

Enough talk, let’s look at an example

Let’s create a table of Students, with just one column.

CREATE TABLE [dbo].[Students]([StudentName] [varchar](100) NULL)
GO
CREATE CLUSTERED INDEX ix_Students_Name 
ON Students(StudentName)
GO

I need some dummy data to really increase the numbers here, so I’ll load in the names from StackOverflow2010.

INSERT INTO Students(StudentName)
SELECT DisplayName 
FROM StackOverflow2010.dbo.Users

Great! About 300,000 rows were loaded into my new table. Now I need to write a query. How about this query?

SELECT StudentName 
FROM Students
WHERE StudentName = N'Arthur'

35 rows returned! Disclaimer: none of those users are actually me. This post is about the execution plan, so let’s retrieve the execution plan along with the I/O statistics by using SET STATISTICS IO ON.
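The setting itself, run in the same session before the query:

```sql
SET STATISTICS IO ON;
```
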

Table 'Students'. Scan count 1, logical reads 910, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


Ah, there’s the warning sign on the left there. Here’s the warning it’s giving us.

Type conversion in expression? What does that mean?

Well, because I used the “N'” prefix on my parameter of Arthur, I sent the value as nvarchar, while the table’s column is varchar. When SQL Server received my request, it had to convert the entire StudentName column to nvarchar before it could search for the name I sent.
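You can see this in the scan operator’s predicate in the plan. Roughly, it looks like the sketch below (the exact text varies by version and column length):

```sql
-- Predicate shown on the Clustered Index Scan (approximate shape):
-- CONVERT_IMPLICIT(nvarchar(100), [dbo].[Students].[StudentName], 0) = N'Arthur'
```
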

That’s called an Implicit Conversion, and this issue only gets worse as your data grows in size. Often, this issue doesn’t appear overnight. It can exist for weeks or months, just making each query run a little slower. Then in a few years, the query is slow and users start to complain. Since nothing has changed, it’s hard to track down.

Is it worth fixing?

I say absolutely yes. Let’s take a look at this example to see how much it will improve, and then I’ll talk more about the fix in the real world.

SELECT StudentName
FROM Students
WHERE StudentName = 'Arthur'

Table 'Students'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Ah, that’s better. Not only has the warning disappeared, but the index scan in the execution plan has changed into an index seek.

What happens in the real world?

The real world is never as easy or simple as a demo environment like this. If your company is international, I’d argue that the correct choice would be to change the data type of the column to nvarchar. That’s a lot harder than the easy fix in my demo.
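For the demo table above, the column-side fix would look something like this sketch. Treat it as an outline only: any index on the column has to be dropped and recreated, and a large production table needs a maintenance window.

```sql
-- Sketch only, using the demo table from above:
DROP INDEX ix_Students_Name ON dbo.Students;

ALTER TABLE dbo.Students
ALTER COLUMN StudentName nvarchar(100) NULL;

CREATE CLUSTERED INDEX ix_Students_Name
ON dbo.Students(StudentName);
```
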

At this point, when you have a slow query and there’s an implicit conversion that you believe is influencing the execution plan, it’s time to start asking questions. Start by finding your favorite application developer and ask them how they use the column.

More warning sign posts to come

I want to leave an honorable mention for the other warning signs that show up in execution plans, specifically the excessive memory grant warning. I’ll talk about that in a future post, so stay tuned!





Why NOLOCK could bite you

It’s no secret that a lot of DBAs don’t like the NOLOCK hint. Often, people will discuss the myriad of issues that can happen.

Introduction to NOLOCK

If you aren’t familiar with NOLOCK, it’s a query hint that can be applied to each table in a query. The hint tells SQL Server to read that table at the Read Uncommitted isolation level instead of Read Committed. In plainer words, the query can now read data while other transactions are still in the middle of writing it.
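For reference, the per-table hint and the session-level setting have the same effect on reads; a minimal sketch:

```sql
-- Per-table hint:
SELECT * FROM dbo.Comments WITH (NOLOCK);

-- Equivalent session-level setting, applied to every table in later queries:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Comments;
```
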

The main reason this is bad is that your query can return bad data: dirty reads of rows that may later be rolled back, rows counted twice, or rows missed entirely. That’s pretty bad.

But let me introduce you to another issue that can occur, which is blocking on the schema, AKA the wait type LCK_M_SCH_S.

What does LCK_M_SCH_S mean?

LCK_M_SCH_S means the session is waiting for a schema stability lock, which every query takes on the objects it reads. Its cousin, LCK_M_SCH_M, is the schema modification lock needed to actually change an object, and a pending modification makes everyone else queue up behind it. You might think that your environment doesn’t make changes to the schema too often, but there are a few scenarios where that kind of lock is required.
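If you suspect schema lock waits are happening right now, a sketch against the waiting-tasks DMV can confirm it:

```sql
-- Who is waiting on schema locks, and who is blocking them:
SELECT session_id,
       wait_type,
       wait_duration_ms,
       blocking_session_id
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'LCK_M_SCH%';
```
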

Let’s say we have a production incident, and we need to drop an index. Let’s try to do that while there’s a query running on this table (using StackOverflow 2010).

SELECT *
FROM Comments WITH (NOLOCK)

On my machine, this query takes almost a minute to run. While this query runs, let’s create an index on this table.

CREATE NONCLUSTERED INDEX 
ix_test ON Comments(Score)

Taking a look at activity using sp_WhoIsActive, we can see that creating the index and running the query are compatible.

But here’s the tricky part: even though we could create the index, we can’t get rid of it now! Let’s try to drop it, and I’ll re-run the same Comments query first.
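The drop statement itself is trivial; the schema modification lock it needs is the problem:

```sql
DROP INDEX ix_test ON Comments;
```
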

I also start a new “SELECT *” query, but now that an index drop is pending, that new query has to wait as well. Neither the drop nor the new query has done any work yet; both are waiting for session 59 to finish.

What’s the solution?

Well, honestly there’s no easy fix once your environment has queries with NOLOCK applied in Production. You can try to avoid this issue by never dropping an index in Production, but what’s to stop another DBA or someone with sysadmin rights from creating an index or two?



One reason for using the Estimated Execution Plan

SQL Server provides performance tuners with two main ways of reviewing query performance: the estimated and actual execution plans. The estimated plan can be retrieved quickly, since the query doesn’t have to be executed. The actual plan, on the other hand, requires you to execute the query in full, but it’s often worth the wait. The estimated plan is only as good as the estimates used to generate it, so as performance tuners, we want to review what actually happened.

So here’s the catch

In the real world, when you’re troubleshooting an issue, waiting for the entire query to execute is painful. If the query takes over 30 seconds, I know I’ll start to get impatient and I’ll get curious about what’s taking so long. This is where the estimated plan comes into play. While we’re waiting for our actual query to execute, let’s talk about one big reason we’d want to retrieve the estimated plan: parameters.

Parameters?

Yes, the parameters that were used to compile the plan. When SQL Server generates an execution plan, it records the parameters that were used. Here’s another tricky part: those might not be the same parameters that you’re using to test the query. Let’s take a look at an example using the Stack Overflow database.

10 second introduction to StackOverflow2010

(Download StackOverflow2010 here)

In the world of Stack Overflow, the Posts table has all the questions and answers from the site. There are a lot of those questions and answers, but just a few wiki-type posts.

Here’s my stored procedure to retrieve all the posts of a certain type:

CREATE PROCEDURE [dbo].[QueryPostType] (@PostType INT)
AS
BEGIN

SELECT PostTypeId,
       Tags,
       Body
FROM Posts
WHERE PostTypeId = @PostType

END

Now to actually test this, we need to run the stored procedure once.

EXEC QueryPostType @PostType = 1;

Alright. Executing the query with the parameter for the questions, post type of 1, should put that plan into the plan cache. Let’s take a look at the estimated plan for a different parameter, 3, which is the wiki post type.

EXEC QueryPostType @PostType = 3;

Just right click on that “SELECT” and look at the parameters. Here’s the useful part:
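If you’d rather script it than click through SSMS, here’s a sketch that pulls the cached plan XML; the compiled parameter values appear in its ParameterList element. (The LIKE filter on the procedure name is just a convenient way to find the plan.)

```sql
SELECT st.text,
       qp.query_plan  -- compiled values live inside <ParameterList> in this XML
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%QueryPostType%';
```
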

That’s all I have for this post.

I just wanted to provide the parameters in defense of estimated plans. This trick becomes really useful if you’re trying to determine if your query is a victim of parameter sniffing, since you can see what parameters were sniffed and placed into cache.

Where I’ve been and GroupBy.org thoughts

It’s been a while since I’ve posted a blog post! In the meantime, I gave a new presentation at GroupBy.org. Check it out here, and find the slides/demos here. The presentation is up on YouTube and the transcript is available at the link, so it’s pretty easy to access. The GroupBy conference was the biggest event I’ve ever been a part of, and it was an incredible experience.

Preparing for The Perfect Index

I spent a long time preparing to give this presentation. I had three months to prepare but I didn’t start preparing until about four weeks out. This was a mistake. I knew what I wanted to say, but I procrastinated so much that I had to start rushing about a week before the conference. That’s when I really kicked it into gear and I quickly found that mornings were the best times to work. I was on-call for work so by the time it was the evening, my brain was fairly worn out. I’d wake up at 5 AM, run to my work computer and write the demos/slides until I had to leave.

Practice #1

The first time I gave this session was Wednesday, September 5th. I had volunteered to present to my team, to test what worked and what didn’t. I’m very grateful for their patience on this run, since it didn’t go smoothly. ZoomIt wasn’t zooming correctly on the second monitor, and the presentation only lasted about 35 minutes. That was a big warning sign that I needed to beef up the material before Friday.

Final prep

I spent the last couple of days in a frenzy to complete the presentation. The hardest part was practicing the demos. I don’t mind practicing my presentation, but it can be pretty boring to run the same queries over and over, focusing on the same parts. When I practice talking through the presentation I don’t have a script, so the wording changes a little with each run.

Friday – the big day

It was finally Friday morning. I didn’t log into GroupBy or start up any distractions, I wanted to remain super focused on the goal. When I loaded into the conference and Brent started talking to me, I knew things were getting real. The first five minutes were scary. I have only done one remote presentation before, and it’s definitely a different type of challenge. Once I got into the flow of the presentation, I felt much more natural and the presentation flew by. Before I knew it, Daniel Hutmacher and I were talking about questions and the feedback in the SQL Community Slack Channel.

Post-session

I mentioned this during the presentation: the feedback I received from my session was incredible. I saw so many familiar names saying good things, and I was riding the adrenaline for another couple of hours after my session. I’m incredibly grateful for the feedback, and I want to thank several people who made this session and conference possible.

Thank you

First, thank you Daniel for spending the time with me during my session, relaying questions and talking to me. It made it feel like a conversation. Second, thank you Brent Ozar for hosting the entire GroupBy event, for giving me feedback on this session abstract, and for taking the time to give me feedback after my session. Third, thank you to everyone who contributed to this session. My fellow DBA co-workers, Brian and Gerard, for sharing their thoughts and helping me pick the right terms. Finally, thank you to everyone who attended my session, either while it was running or afterwards. All of you have helped me give this session and it means the world to me.


T-SQL Tuesday #104: Index column script


Introduction

Bert Wagner (blog) is hosting this month’s T-SQL Tuesday, and the topic is code that you’ve written that you’d hate to live without.

I’ve written scripts for myself that I keep in a folder of toolbox scripts, so I had a few choices. I decided to share a script that I find myself using almost daily. It was written to solve a specific problem: there’s no good way built into SQL Server to display indexes alongside their columns.


Requirements to run this script

You must be on SQL Server 2008 or above, since this script relies on the FOR XML PATH clause. You need access to the DMVs sys.indexes and sys.index_columns, and you must be in the right database context, e.g. USE [StackOverflow].

Here’s the script

SELECT i.name as Index_Name
    ,OBJECT_NAME(i.object_id) as Table_Name
    ,i.type_desc as Index_Type
    ,STUFF((
        SELECT ',' + COL_NAME(ic.object_id, ic.column_id)
        FROM sys.indexes as inc
        INNER JOIN sys.index_columns as ic on inc.object_id = ic.object_id
            and inc.index_id = ic.index_id
        WHERE is_included_column = 0
            and inc.object_id = i.object_id
            and inc.index_id = i.index_id
        ORDER BY key_ordinal
        FOR XML PATH('')
        ), 1, 1, '') AS Key_Values
    ,STUFF((
        SELECT ',' + COL_NAME(ic.object_id, ic.column_id)
        FROM sys.indexes as inc
        INNER JOIN sys.index_columns as ic on inc.object_id = ic.object_id
            and inc.index_id = ic.index_id
        WHERE is_included_column = 1
            and inc.object_id = i.object_id
            and inc.index_id = i.index_id
        ORDER BY key_ordinal
        FOR XML PATH('')
        ), 1, 1, '') AS Included_Columns
FROM sys.indexes as i
WHERE object_name(i.object_id) = 'Posts'

Results

Explanations and DMVs

So why is this script so useful to me? Well, I deal with tables that have wide indexes with many columns. In the picture above, I’m looking at the IX_ViewCount index. If I queried just the DMVs without the extra formatting, there would be five rows, one for each column.

Limitations

I use this script for Clustered and Nonclustered indexes. I haven’t tested it with more complex index types, like Columnstore or In-Memory, so there could be limitations there. I also didn’t include index size in pages or bytes, but it wouldn’t be too difficult to add.


T-SQL Tuesday #100: The Future of SQL Server


I’m happy to say this is my second T-SQL Tuesday post! The topic this week is looking forward many years into the future, at T-SQL Tuesday #200, or blogging on a new feature that might come out in June 2026. Thanks to Adam Machanic for starting the T-SQL Tuesday idea and for the topic this week. His round-up post can be found here, or simply click the logo on the right side. The post below takes the second option: I’ll be blogging on something that hasn’t happened. This post is all science fiction!

Query Optimizer on Github

Microsoft started to embrace open sourcing some of their applications way back in the 2010s. It took a while to get here, but finally the query optimizer has been open sourced. I’ve spent the past couple days immersed in the internals and I was surprised to find that Compatibility Mode 130 is still supported! If you enable trace flag 42, you can still run your SQL Server 2016 workloads on SQL Server 2024!

Reverting your Compatibility Level to 130

Ever since we’ve upgraded to SQL Server 2024, our queries just run slower so I’ve applied trace flag 42 to our production instances. I think I found the issue though. We were using the legacy Query Store which had forced plans, and the new 2024 Query Store is dynamically deciding which plans should be forced, at run time. With the changes I’m merging, we can exclude certain queries from the Query Store AI. At least these days, we get weekly cumulative updates, so we can just fix things in one week. It’s hard to believe, but we used to wait a month to apply a new SQL patch! And you had to reboot to apply changes! It all seems so draconian.



Query Store Internals: Indexes on the system views

Foreword

The Query Store is a great new feature available starting in SQL Server 2016. I started using it a few months ago, and learned a lot about it from Erin Stellato (blog | twitter)! I got a chance to speak with her at SQL Saturday Cleveland, where we discussed the indexes in this post. Thank you Erin for all your help with this post and understanding the Query Store! To anyone reading this post who wants to learn more, I highly recommend Erin’s blog posts and Pluralsight courses on Query Store.

Structure of the Query Store

The easiest way to see what’s stored in the Query Store is to query the catalog views. Behind the scenes of those catalog views, the data is stored in system tables. There are many benefits to this, starting with the fact that your performance data will survive a reboot. If we want to see these tables, we can search for the clustered indexes or heaps that store the data. But when we query for these tables or their indexes, nothing shows up:

(Query taken from Microsoft’s documentation on sys.indexes)

SELECT i.name AS index_name
    ,COL_NAME(ic.object_id, ic.column_id) AS column_name
    ,ic.index_column_id
    ,ic.key_ordinal
    ,ic.is_included_column
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic
    ON i.object_id = ic.object_id AND i.index_id = ic.index_id
WHERE i.name LIKE '%query_store%'
ORDER BY i.name

Another approach

Let’s try looking for just one view. Here’s a Query Store catalog view:

sys.query_store_runtime_stats  (and more from Microsoft)

Same query, different keyword:

SELECT i.name AS index_name
    ,COL_NAME(ic.object_id, ic.column_id) AS column_name
    ,ic.index_column_id
    ,ic.key_ordinal
    ,ic.is_included_column
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic
    ON i.object_id = ic.object_id AND i.index_id = ic.index_id
WHERE i.name LIKE '%runtime%'
ORDER BY i.name

Now we’re talking!

It looks like internally Query Store is referred to as plan_persist. That makes sense, thinking about how the Query Store persists query plans to your database’s storage. Let’s take a look at those catalog views vs their clustered and nonclustered indexes. I’ve modified the query a bit at this point, to group together the key columns.

SELECT i.name AS index_name,
    CASE WHEN i.index_id = 1 THEN 'Clustered'
         WHEN i.index_id = 0 THEN 'Heap'
         ELSE 'Nonclustered' END AS TypeOfIndex,
    STUFF((
        SELECT ',' + COL_NAME(ic.object_id, ic.column_id)
        FROM sys.index_columns AS ic
        WHERE i.object_id = ic.object_id AND i.index_id = ic.index_id
            AND is_included_column = 0
        FOR XML PATH('')), 1, 1, '') AS [KeyColumns]
FROM sys.indexes AS i
WHERE i.name LIKE '%plan_persist%'

Results:

Catalog View Name | Index Name | Key Columns | Type of Index
--- | --- | --- | ---
sys.query_store_query_text | plan_persist_query_text_cidx | query_text_id | Clustered
sys.query_store_query | plan_persist_query_cidx | query_id | Clustered
sys.query_store_plan | plan_persist_plan_cidx | plan_id | Clustered
sys.query_store_runtime_stats | plan_persist_runtime_stats_cidx | plan_id, runtime_stats_interval_id, execution_type | Clustered
sys.query_store_runtime_stats_interval | plan_persist_runtime_stats_interval_cidx | runtime_stats_interval_id | Clustered
sys.query_context_settings | plan_persist_context_settings_cidx | context_settings_id | Clustered
Undocumented | plan_persist_query_hints_cidx | query_hint_id | Clustered
Undocumented | plan_persist_query_template_parameterization_cidx | query_template_id | Clustered
sys.query_store_wait_stats | plan_persist_wait_stats_cidx | runtime_stats_interval_id, plan_id, wait_category, execution_type | Clustered
sys.query_store_query_text | plan_persist_query_text_idx1 | statement_sql_handle | Nonclustered
sys.query_store_query | plan_persist_query_idx1 | query_text_id, context_settings_id | Nonclustered
sys.query_store_plan | plan_persist_plan_idx1 | query_id | Nonclustered
sys.query_store_runtime_stats | plan_persist_runtime_stats_idx1 | runtime_stats_id | Nonclustered
sys.query_store_runtime_stats_interval | plan_persist_runtime_stats_interval_idx1 | end_time | Nonclustered
Undocumented | plan_persist_query_hints_idx1 | query_hint_id | Nonclustered
Undocumented | plan_persist_query_template_parameterization_idx1 | query_template_hash | Nonclustered
sys.query_store_wait_stats | plan_persist_wait_stats_idx1 | wait_stats_id | Nonclustered

Knowing this, we can query the Query Store using predicates supported by indexes! We can also see the Query Store indexes if we request actual Execution Plans.

Here’s my query:

SELECT *
FROM sys.query_store_query
WHERE query_id = 5

Here’s the execution plan:


And here’s the predicate that was applied:


Looking at this execution plan, the object is “plan_persist_query_cidx”, which is the clustered index from the table above. This shows that the Query Store uses these “plan_persist” indexes as the storage behind the catalog views.

So what’s the point?

Well, Query Store size is user defined, so we can control how much space is consumed by the data in total. As our Query Store gets bigger, queries against the Query Store tables will start running slower unless we use the indexes properly. It’s important to know these indexes if you’re going to be querying Query Store regularly.
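For example, based on the key columns in the table above, filtering runtime stats by plan_id lines up with the leading key of plan_persist_runtime_stats_cidx:

```sql
-- plan_id is the leading key column of the clustered index behind this view:
SELECT *
FROM sys.query_store_runtime_stats
WHERE plan_id = 5;
```
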

Final note (don’t do this in production)

As a cool side note, it’s possible to query the system tables directly if you’re connected to the SQL Server using the Dedicated Admin Connection (DAC). Please only use the DAC on a test or local server for this example:

The connection prefix, “ADMIN:” shows that I’m connected with the DAC.

SQL 101: Two places to start troubleshooting

Getting started

Someone calls you at 1 AM on Saturday and says, “Is there an issue with SQL?” First, breathe, and then take a look at your logs. This post covers two places you can look, quickly, to see if there’s an issue, so you can get back to sleep. When I was new to SQL Server, I thought finding an issue would involve querying the server to see what was there. However, both of these locations are available through the GUI and are easy to get to: the Windows Server Application Log and the SQL Server Error Log.

What to look for

In both of these locations, look for the last date and time when you knew SQL Server was working. Then look for error messages from SQL Server between then and now.

1: Windows Server Application Log

My favorite way to read this log is with the app Event Viewer. You can search for this app in Windows or you can go to the file directory: C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools

Start up this app. By default, it shows the logs of your local computer. To connect remotely, go to the top left of the app, click Action -> Connect to another computer

Once you’ve got the app running, go to Windows Logs and click on Application. It’s as easy as that. If you have multiple SQL Server instances running on the same server, you can look at the “Source” column to see what instance wrote to the log.

Here’s what you should see:

2: SQL Server Error Log

Another useful log. To find this one, you can go to the file location listed in the screenshot above. That’s for my local instance; yours will be slightly different, since my instance is named “FOURTEENTWO” (SQL Server 2014, instance #2 on this server).

Or

You can open SQL Server Management Studio (SSMS), connect to your server, and in Object Explorer go to Management -> SQL Server Logs
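If you prefer T-SQL over clicking through SSMS, a sketch using xp_readerrorlog (undocumented, but widely used) searches the current error log:

```sql
-- Parameters: log number (0 = current), log type (1 = SQL Server),
-- then an optional search string:
EXEC master.dbo.xp_readerrorlog 0, 1, N'error';
```
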

Guaranteed?

No, I can’t guarantee that this will help you fix your issue, but checking these logs first has saved me many times. Here’s a list of things that are logged in these locations that I’ve needed to find:

  • When SQL Server started
  • When SQL Server crashed and where the log dump is stored from that crash
  • Changes in compatibility level of databases


SQL Saturday Nashville #698

Why pick my session?

Hi, I’m Arthur Daniels. I’m presenting Reading Execution Plans Successfully at SQL Saturday Nashville. If you’re considering attending my session, I’d really like to see you there! I deal with execution plans daily. They can be really gnarly, and it takes a lot of time to work through the larger execution plans. I wrote this session as a foundation for performance tuning. Some of the things I’ll be talking about are:

  • What an execution plan is, behind the picture in management studio
  • The execution plan cache and what it can be used for
  • Finding where to start reading an execution plan
  • Interpreting a lot of the information provided on an Index Seek
  • When to use Query Store, and when the execution plan cache could be better

Ask questions

I think the best place to ask a question is in person. I might not have the answer you’re looking for, but I know a lot of great places and great people that we can ask. If you’re stumped (especially on performance), I encourage you to come ask.

Who am I?

My name is Arthur Daniels. I work primarily on performance for SQL Servers at SAP Fieldglass. I spend a lot of my time finding ways to improve and rewrite queries, and occasionally adjust or add indexes. If you want to talk about something I covered or didn’t cover in my session, please feel free to catch me after my session!

Read the slides

If you want the latest version I’ve used, here are the slides from SQL Saturday Minnesota, where I last gave this presentation. I also provide my slides on my website; here’s a copy of the slides from when I first gave this presentation. I’ve been spending the time between presentations improving them.

 

 

TSQL Tuesday #96: Folks Who Have Made a Difference

Well, this is my first T-SQL Tuesday! Here’s the original post: link. I’m very excited to participate. The host for this month is Ewald Cress (blog | twitter) and the subject is talking about people who’ve made a difference. I’ve been working with SQL Server for a few years, and I have many people to be grateful to! I thought I’d start by sharing the story that I use in my presentation, Reading Execution Plans Successfully.

My first SQL job was a Junior BI Developer, right out of college. I was learning from Treye Nekola, a Sr Database Administrator. At the time, I could write a SELECT statement, but I had trouble with JOINs and WHERE clauses. Sometimes my queries would run for minutes at a time; even with NOLOCK hints they took a while. Treye had the wisdom to call me out on it, and I said something very foolish. I said, “I don’t care how long my query takes as long as it gets results.” Treye just looked at me, knowing how stupid that sounded. I let it go, because I was embarrassed. He was right, and my queries needed work. Over the next months, Treye showed me why my queries were poorly written, and with a godly amount of patience, he set me on the right path. I will forever be grateful to Treye for being an awesome mentor and friend!

The next person I’d like to appreciate is Brent Ozar. I doubt Brent knows much about me, but his material and training have seriously improved my skills. Treye showed me his blog, and I started to learn about the SQL Server online resources. Brent’s also the person who inspired me to start presenting. At PASS Summit 2016, Brent had a session titled 500-Level Guide to Career Internals, where he encouraged the audience to start blogging and speaking. I haven’t blogged as much as I should, but I did take his advice on speaking. Speaking has genuinely improved my life: I get to teach people and meet tons of really excellent people in the database world. It is so rewarding to teach someone a new skill or fact.

And finally, I appreciate Andy Yun. I think I first met Andy at SQL Saturday Iowa City. We were both speakers and spent time talking in the speaker room. We parted ways for a while, but Andy mentioned he was planning a trip to the Iowa SQL user groups, so I was sure to see him again. We met up before his presentation in Des Moines, Iowa, where I told him that I was taking a job in Illinois and was looking for advice. By sheer luck, Andy lived in the exact suburb I was headed to, and I had no idea. Since that day, Andy has been a close friend and mentor to me. I’m incredibly grateful for the guidance and help he’s given me, in life and career. I say without hesitation that Andy has made a massive difference in my life.

I would like to give honorable mentions to some people I couldn’t fit in this post, and I’m sure there are many more.

 

Thank you for being great people:

Ted Schnieders

Bill Andersen

Nicole Loyd

Fred Botts (leader of the Des Moines SQL Server User Group)

 
