Optimize for ad-hoc workload
Problem
I know what this option does and how to enable it. My question is what will actually happen if I enable it.
Without giving too much away, our accounting system is a Microsoft Dynamics product running on a VM with 32 GB of RAM (28 GB available to SQL Server 2008 R2). The vendor came and looked at our configuration to address certain performance issues the accounting team has been seeing, and another DBA also reviewed it. One of his recommendations was to 'look at missing indexes'; we could say that about almost every SQL Server instance in existence. Each table has a unique non-clustered index on it. Not my choice, but I'm told that making any changes to the indexes on tables the software accesses could potentially cause issues, per the vendor. His second recommendation was to enable 'optimize for ad hoc workloads'. I've looked into this option a few times before: many say it should always be enabled, while others say it's not truly necessary and can go either way.
With optimize for ad hoc workloads enabled, I know that a single-use plan is stored as a stub and the full plan is not kept in cache until the query is run a second time. With a system such as this, are we REALLY going to see any sort of performance gain? Per Kimberly Tripp's article, I've run this query:
SELECT objtype AS [CacheType]
    ,COUNT_BIG(*) AS [Total Plans]
    ,SUM(CAST(size_in_bytes AS DECIMAL(18, 2))) / 1024 / 1024 AS [Total MBs]
    ,AVG(usecounts) AS [Avg Use Count]
    ,SUM(CAST((CASE WHEN usecounts = 1 THEN size_in_bytes ELSE 0 END) AS DECIMAL(18, 2))) / 1024 / 1024 AS [Total MBs - USE Count 1]
    ,SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC
Solution
Kris,
• Do we really think that we will see a measurable performance gain here? If so, how can we quantify it aside from looking at the space used in the query above?
That depends, but my gut instinct, given the data you've shown, is no. Sure, you'll potentially save some of that space; a plan stub still takes memory, just not much compared to your ~1 MB plans. So you'll net some memory back, we get that. However, we don't know how many of those plans were executed once and then, while still in cache, executed again later. That raises the question of compilations/recompilations and the CPU utilization that goes with them. If you have a good bit of headroom, it may be a trivial issue (pun intended).
If your server isn't under memory pressure, I would not expect much of an improvement in terms of "performance", however you want to classify that. If you're under some slight memory pressure, this could alleviate it for a while, though increasing the VM's memory would have the same effect, be faster to implement, and carry no negative side-effect cost.
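One way to put a number on the memory side of this, as a rough sketch: the CACHESTORE_SQLCP clerk holds ad hoc and prepared plans, so comparing its size against the object (stored procedure) plan store shows how much memory is actually at stake. The column names below are the SQL Server 2008 R2 ones; 2012 and later replaced them with a single pages_kb column.

```sql
-- Memory used by the ad hoc/prepared plan cache store (CACHESTORE_SQLCP)
-- vs. the object plan store (CACHESTORE_OBJCP).
-- single_pages_kb/multi_pages_kb are the 2008 R2 column names;
-- SQL Server 2012+ merged them into pages_kb.
SELECT type,
       SUM(single_pages_kb + multi_pages_kb) / 1024.0 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
GROUP BY type;
```

If CACHESTORE_SQLCP is a small fraction of your 28 GB, the memory you'd reclaim from stubbing single-use plans is probably not worth chasing.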
• Understandably, I hesitate throwing switches like this on production systems. What implications should I be cautious of? Will the procedure cache wipe itself and rebuild? Are there any issues that I should be made aware of when turning this on?
According to BOL, it will not affect anything currently in your plan cache: "Setting the optimize for ad hoc workloads to 1 affects only new plans; plans that are already in the plan cache are unaffected." http://msdn.microsoft.com/en-us/library/cc645587(v=sql.105).aspx
You may see CPU hits for plan compilation (depending on eventual reuse), since a reused query now has to be compiled twice: once for the original execution and again for the second execution, when the full plan is finally stored.
Some other "weirdness" involves monitoring tools, especially if they grab execution plans: plan stubs do not have an execution plan associated with them, so tools that expect one to always be present may return odd results.
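Once the option is on, you can watch the stubs accumulate in the same DMV your query above uses; a simple way (sketched below) is to filter on the cacheobjtype value 'Compiled Plan Stub':

```sql
-- After enabling the option, first-time ad hoc statements are cached as
-- small stubs rather than full plans. sys.dm_exec_cached_plans exposes
-- them with cacheobjtype = 'Compiled Plan Stub'.
SELECT cacheobjtype, objtype, usecounts, size_in_bytes
FROM sys.dm_exec_cached_plans
WHERE cacheobjtype = 'Compiled Plan Stub'
ORDER BY size_in_bytes DESC;
```

A large, growing stub count with usecounts stuck at 1 is the pattern the option is designed for; stubs that quickly become full plans suggest your "ad hoc" queries are actually being reused.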
I'm not extremely familiar with Dynamics, but IIRC it has a specific Microsoft setup guide, like SharePoint does. I'd double-check that this change won't invalidate your supportability for the product.
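For reference, if you do decide to turn it on: it's an advanced sp_configure option and takes effect with RECONFIGURE, no service restart required.

```sql
-- 'optimize for ad hoc workloads' is an advanced option, so
-- 'show advanced options' must be enabled before it can be set.
-- The change takes effect immediately via RECONFIGURE.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

Setting it back to 0 reverses the behavior the same way, again only for plans cached from that point forward.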
Context
StackExchange Database Administrators Q#74091, answer score: 6