HiveBrain v1.2.0

SQL Server 2016 Enterprise poor performance

Submitted by: @import:stackexchange-dba

Problem

Sorry for the length, but I want to give you as much information as possible that might help the analysis.

I know there are several posts with similar issues, and I have already followed those posts and other information available on the web, but the problem remains.

I have a serious performance problem in SQL Server that is driving users crazy. The problem has dragged on for several years; until the end of 2016 the server was managed by another entity, and since 2017 it has been managed by me.

In the middle of 2017, I was able to resolve the problem by following the indexing hints indicated by the Microsoft SQL Server 2012 Performance Dashboard Reports. The effect was immediate; it seemed like magic. The processor, which in the preceding days had almost always been at 100%, became calm, and the feedback from the users was resoundingly positive. Even our ERP technician was delighted: certain listings that usually took 20 minutes could finally be produced in seconds.

Over time, however, performance slowly began to worsen again. I avoided creating more indexes, for fear that too many indexes would hurt performance. But at some point I had to drop the ones that were unused and create the new ones the Performance Dashboard suggested. It had no impact.

The slowness is felt essentially when saving and querying in the ERP.

I have a Windows Server 2012 R2 dedicated to SQL Server 2016 Enterprise (64-bit) with the following configuration:

  • CPU: Intel Xeon CPU E5-2650 v3 @ 2.30GHz

  • Memory: 84 GB

  • Storage: the server has one volume dedicated to the operating system, another dedicated to the data, and another dedicated to the logs.

  • 17 databases

  • Users:

  • Roughly 113 concurrent users connected to the biggest database

  • About 9 users in another

  • 3 + 3 users in two of them

  • The rest have only 1 user each

  • We also have a web application that writes to the larger database, where usage is much less regular; it should have about 20 users.

Solution

You gave us a long (and very detailed) question. Now you have to deal with a long answer. ;)

There are several things I would suggest changing on your server. But let's start with the most pressing issue.

One-time emergency measures:

The fact that performance was satisfying after deploying the indexes on your system, and then slowly degraded, is a very strong hint that you need to start maintaining your statistics and (to a lesser degree) take care of index fragmentation.

As an emergency measure I would suggest a one-time manual statistics update on all of your databases. You can generate the necessary T-SQL by executing this script:

-- Generates one "USE [db]; EXEC sp_updatestats" batch per database.
-- The statements are PRINTed, not executed: copy the output and run it.
DECLARE @SQL VARCHAR(1000)
DECLARE @DB sysname

-- Cursor over all databases except model and tempdb
DECLARE curDB CURSOR FORWARD_ONLY STATIC FOR
   SELECT [name]
   FROM master..sysdatabases
   WHERE [name] NOT IN ('model', 'tempdb')
   ORDER BY [name]

OPEN curDB
FETCH NEXT FROM curDB INTO @DB
WHILE @@FETCH_STATUS = 0
   BEGIN
       SELECT @SQL = 'USE [' + @DB + ']' + CHAR(13) + 'EXEC sp_updatestats' + CHAR(13)
       PRINT @SQL
       FETCH NEXT FROM curDB INTO @DB
   END

CLOSE curDB
DEALLOCATE curDB


It is provided by Tim Ford in his blog post on mssqltips.com, where he also explains why updating statistics matters.

Please note that this is a CPU- and IO-intensive task that should not be run during business hours.

If this solves your problem, please do not stop there!
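If you prefer to run the updates directly instead of copying the printed output, the same loop can execute each batch via EXEC(). This is my variant, not part of the original script; test it on a non-production system first:

```sql
-- Variant of the script above that executes each batch immediately
-- instead of printing it. Run outside business hours.
DECLARE @SQL VARCHAR(1000)
DECLARE @DB sysname

DECLARE curDB CURSOR FORWARD_ONLY STATIC FOR
   SELECT [name]
   FROM master..sysdatabases
   WHERE [name] NOT IN ('model', 'tempdb')
   ORDER BY [name]

OPEN curDB
FETCH NEXT FROM curDB INTO @DB
WHILE @@FETCH_STATUS = 0
   BEGIN
       SET @SQL = 'USE [' + @DB + ']; EXEC sp_updatestats;'
       -- Each EXEC() runs in its own batch, so the USE is scoped to it.
       EXEC (@SQL)
       FETCH NEXT FROM curDB INTO @DB
   END

CLOSE curDB
DEALLOCATE curDB
```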

Regular Maintenance:

Have a look at Ola Hallengren's Maintenance Solution and then set up at least these two jobs:

  • A statistics update job (if possible every night). You can use this CMD code in your agent job. This job has to be created from scratch.



sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d MSSYS -Q "EXECUTE dbo.IndexOptimize @Databases = 'USER_DATABASES', @FragmentationLow = NULL, @FragmentationMedium = NULL, @FragmentationHigh = NULL, @UpdateStatistics = 'ALL', @OnlyModifiedStatistics = 'Y', @MaxDOP = 0, @LogToTable = 'Y'" -b

  • An index maintenance job. I would suggest starting with a scheduled execution once a month. You can start with the defaults Ola provides for the IndexOptimize job.
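For reference, an IndexOptimize call with Ola's documented default fragmentation thresholds would look roughly like this (the database name MSSYS is taken from the statistics job above and may differ on your installation):

```sql
-- Monthly index maintenance with Ola's default thresholds:
-- reorganize between 5% and 30% fragmentation, rebuild above 30%.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @LogToTable = 'Y'
```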



There are several reasons I am suggesting a separate job to update stats first:

  • An index rebuild will only update the statistics of the columns covered by that index, while an index reorganization does not update statistics at all. Ola separates fragmentation into three categories; by default only indexes in the high category will be rebuilt.

  • Statistics for columns not covered by an index will only be updated by the IndexOptimize job.

  • To mitigate the Ascending Key Problem.
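To illustrate the first point, here is what the two operations look like for a single index (the table and index names are placeholders):

```sql
-- REORGANIZE: online and incremental, but does NOT update statistics.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

-- REBUILD: recreates the index and updates its statistics (with a
-- full scan), but only for the columns the index covers.
-- ONLINE = ON is available on Enterprise Edition.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);
```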



SQL Server will auto-update statistics if the default setting is left enabled. The problem with that is the thresholds (less of a problem with your SQL Server 2016). Statistics get updated when a certain number of rows has changed (20% in older versions of SQL Server). If you have large tables, that can be a lot of changes before statistics get updated. See more info on thresholds here.
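Under the dynamic threshold used by SQL Server 2016 at compatibility level 130, the trigger point for a large table is roughly SQRT(1000 × rows) modified rows rather than 20%. You can see how many modifications each statistic has accumulated with sys.dm_db_stats_properties; a sketch, run in the database you want to inspect:

```sql
-- For every user-table statistic in the current database, show when it
-- was last updated and how many row modifications accumulated since.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsMsShipped') = 0
ORDER BY sp.modification_counter DESC;
```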

Since you are doing CHECKDBs, as far as I can tell you can keep running them as before, or you can use the maintenance solution for that as well.

For more information on index fragmentation and maintenance have a look at:

SQL Server Index Fragmentation Overview

Stop Worrying About SQL Server Fragmentation

Considering your storage subsystem, I would suggest not fixating too much on "external fragmentation", because the data is not stored in order on your SAN anyway.

Optimize your settings

The sp_Blitz script gives you an excellent list to start.

Priority 20: File Configuration - TempDB on C Drive:
Talk to your storage admin. Ask them if your C drive is the fastest disk available for your SQL Server. If not, put your tempdb there... period. Then check how many tempdb files you have. If the answer is one, fix that. If they are not the same size, fix that too.
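To see how many tempdb data files there are and whether they are equally sized, you can query tempdb's file catalog (sizes are stored in 8 KB pages):

```sql
-- List tempdb files with their size in MB; the data files (ROWS)
-- should all be the same size so round-robin allocation stays balanced.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       physical_name
FROM tempdb.sys.database_files;
```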

Priority 50: Server Info - Instant File Initialization Not Enabled: Follow the link the sp_Blitz script gives you and enable IFI.
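On recent builds (SQL Server 2012 SP4 and 2016 SP1 onwards expose this column), you can verify whether IFI is active for the engine service account:

```sql
-- instant_file_initialization_enabled = 'Y' means data file growth
-- skips zero-initialization (log files are always zeroed regardless).
SELECT servicename,
       service_account,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```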

Priority 50: Reliability - Page Verification Not Optimal: You should set this back to the default (CHECKSUM). Follow the link the sp_Blitz script gives you and follow the instructions.
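Checking and fixing the page verify option is straightforward; note that after switching, checksums are only written as pages are modified (the database name below is a placeholder):

```sql
-- Find databases not using CHECKSUM page verification...
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

-- ...and set one back to the default:
ALTER DATABASE [YourDatabase] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;
```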

Priority 100: Performance - Fill Factor Changed:
Ask yourself why there are so many objects with fill factor set to 70. If you do not have an answer and no application vendor strictly demands it, set it back to 100%.

A fill factor of 70 basically means SQL Server will leave 30% empty space on these pages. So to store the same amount of data (compared to 100% full pages) your server has to read about 43% more pages (100/70 ≈ 1.43), and they take up correspondingly more space in memory. The reason it is often done is to prevent index fragmentation.

But again, your storage is saving those pages in different chunks anyway. So I would set it back to 100% and take it from there.
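To find the affected indexes and reset one, something like this works (table and index names in the ALTER are placeholders):

```sql
-- Find indexes whose fill factor was lowered to 70
-- (0 means "use the server default", normally 100).
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                  AS index_name,
       i.fill_factor
FROM sys.indexes AS i
WHERE i.fill_factor = 70;

-- Rebuilding an index applies the new fill factor:
ALTER INDEX IX_Example ON dbo.Example REBUILD WITH (FILLFACTOR = 100);
```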

What to do if everybody is happy:

  • See the rest of the output of sp_Blitz and decide whether to apply the suggested changes.

  • Execute sp_BlitzIndex and have a look at the indexes you created.


Context

StackExchange Database Administrators Q#200191, answer score: 7
