HiveBrain v1.2.0
Get Started
← Back to all entries

Memory utilization by SQL Server

Submitted by: @import:stackexchange-dba
sql · utilization · memory · server

Problem

I'm using SQL Server 2012. I have a stored procedure that will eventually be tuned, but first I have one question. My server has 32 GB of memory, and max server memory in SQL Server is set to 20 GB. When I execute this SP, the logical reads are around 4,000,000. Multiplying 4M data pages by 8 KB and dividing by 1024 twice gives about 30 GB worth of data in memory.

My question is: how can SQL Server keep 30 GB of data in memory if I have only assigned 20 GB of memory? My question might look silly, but I need your help. Thanks.
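The question's arithmetic can be checked quickly (a minimal sketch; the 4,000,000 figure and the 8 KB page size are taken from the question, since SQL Server data pages are 8 KB):

```python
# Back-of-envelope check of the question's math: 4,000,000 logical reads
# means 4,000,000 touches of 8 KB data pages in the buffer pool.
pages = 4_000_000
page_kb = 8

gb = pages * page_kb / 1024 / 1024  # KB -> MB -> GB
print(round(gb, 1))  # about 30.5 GB
```

So the ~30 GB figure in the question is the volume of page *accesses*, not the amount of distinct data that has to fit in memory at once.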

Solution

First, you have a scan count of 17, i.e., it re-reads the same set of data over and over again, and the total for those passes is 4,616,999 logical reads. Assuming the exact same set of data is re-read 17 times, you would divide your 30 GB by 17 to get the amount of data read from disk. So on the first pass it reads some 1.8 GB of data, and then it re-reads that same set of data 16 more times. 1.8 GB of data is no problem to fit in memory.
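The step above as arithmetic (a sketch using the answer's own rounded 30 GB total; with the exact 4,616,999 reads the per-pass figure comes out slightly higher, around 2 GB):

```python
# With a scan count of 17, the same data set is scanned 17 times, so only
# total / 17 has to come from disk on the first pass; the rest is re-reads
# of pages that are already in the buffer pool.
total_gb = 30       # ~30 GB of total logical reads, from the question
scan_count = 17

per_pass_gb = total_gb / scan_count
print(round(per_pass_gb, 1))  # about 1.8 GB per pass
```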

Second, even if you did have a scan count of 1 and the data wouldn't fit in memory, where would the problem be? As the operation reads data, pages read earlier can be removed from memory while the read gets further along.

Finally, don't get fooled by "physical reads" in STATISTICS IO output. Physical I/O can also happen as read-aheads, LOB reads, etc. Make sure you read the whole line: all the physical-read numbers after the logical reads should be summed to get the total physical reads. Also, the perfmon Buffer cache hit ratio counter doesn't include read-aheads, which makes it very misleading.
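To make the "sum the whole line" advice concrete, here is a sketch that totals the disk-read counters from a STATISTICS IO message. The table name and the physical/read-ahead numbers are made up for illustration; the message format itself is what SQL Server emits:

```python
import re

# A sample STATISTICS IO message; the figures are hypothetical.
line = ("Table 'Orders'. Scan count 17, logical reads 4616999, "
        "physical reads 1200, read-ahead reads 240000, "
        "lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.")

# Pull every "<counter name> reads <n>" pair out of the message.
pairs = re.findall(r"([a-z- ]+reads) (\d+)", line)
counters = {name.strip(): int(value) for name, value in pairs}

# Pages actually fetched from disk: "physical reads" alone undercounts,
# because read-ahead and LOB physical I/O are reported separately.
total_physical = (counters["physical reads"]
                  + counters["read-ahead reads"]
                  + counters["lob physical reads"]
                  + counters["lob read-ahead reads"])
print(total_physical)  # 241200
```

In this made-up example, looking only at "physical reads 1200" would hide the 240,000 pages brought in by read-ahead.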

Context

StackExchange Database Administrators Q#267121, answer score: 9

Revisions (0)

No revisions yet.