
Recommended memory configuration for many SQL Server instances per server

Submitted by: @import:stackexchange-dba

Problem

I am currently managing a server cluster for a SaaS system. We have six SQL Server machines running SQL Server 2008 R2. Each server is a quad-core virtual machine with 16 GB of RAM provisioned on a server rack. We populate each server with the maximum of 50 instances allowed by SQL Server. Each instance is accessed sporadically by at most 10 clients simultaneously, and each database is generally only around 100 MB to 500 MB in size.

Each instance was installed without a memory limit, and what we have found is that the first instances to start tend to use too much memory, leaving later instances (in start order) with barely 200 MB to work with, and the server OS with less than 1% of physical memory available. This seems to cause excessive disk swapping and latency issues.

What is the recommended way to split memory allocation in this case? Is there a formula to estimate the minimum memory one instance needs, based on the number of clients and the database size? Could I simply set a maximum of 300 MB per instance and be done with it?
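As a back-of-the-envelope starting point, an even split of whatever RAM is left after an OS reserve can be sketched in Python. The 4 GB reserve below is my assumption for illustration, not a recommendation from this thread:

```python
# Even memory split for the scenario above (assumed numbers):
# 16 GB total RAM, ~4 GB reserved for the OS and non-buffer-pool overhead,
# remainder divided evenly across 50 instances.

def per_instance_cap_mb(total_ram_mb: int, os_reserve_mb: int, instances: int) -> int:
    """Evenly split the RAM left after the OS reserve across all instances."""
    return (total_ram_mb - os_reserve_mb) // instances

cap = per_instance_cap_mb(total_ram_mb=16 * 1024, os_reserve_mb=4 * 1024, instances=50)
print(cap)  # 245 MB per instance
```

Note that this lands close to the 300 MB guess in the question, and below it: with 50 instances on 16 GB there simply is not room for 300 MB each plus a healthy OS reserve.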

2018-12-13 Final solution

I have finally figured out a configuration that seems very stable and is completely symmetric across all instances, and since it took some digging I thought I should share the solution I arrived at. It is probably not for everyone, but it seems like the best solution to me.

The key is in this API: https://learn.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-setprocessworkingsetsizeex

This allows you to set a hard limit on the minimum and maximum working set size on a per-process basis. This seems like the cleanest approach to me: I can tell SQL Server to use as much memory as it wants WITHIN this working set size, and it seems to optimize adequately by itself instead of running at capacity and trying to hog more memory all the time. Each instance swaps more overall, but since usage is sporadic for each instance, it seems to work.
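A minimal sketch of that API call from Python via ctypes is below. The PID, the 64 MB floor, and the 245 MB cap are my illustrative assumptions (the thread does not give exact values), and setting hard limits may additionally require the SeIncreaseBasePriorityPrivilege; the actual call is guarded so the sizing helper runs anywhere:

```python
import ctypes
import sys

# Flags documented for SetProcessWorkingSetSizeEx: enforce min/max as HARD limits.
QUOTA_LIMITS_HARDWS_MIN_ENABLE = 0x1
QUOTA_LIMITS_HARDWS_MAX_ENABLE = 0x4
PROCESS_SET_QUOTA = 0x0100  # access right the API requires on the target process

def working_set_bounds(cap_mb: int, floor_mb: int = 64) -> tuple:
    """Return (min, max) working-set sizes in bytes for one instance."""
    return floor_mb * 1024 * 1024, cap_mb * 1024 * 1024

def cap_process(pid: int, cap_mb: int) -> bool:
    """Hard-cap the working set of one sqlservr.exe process (Windows only)."""
    lo, hi = working_set_bounds(cap_mb)
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.OpenProcess(PROCESS_SET_QUOTA, False, pid)
    if not handle:
        return False
    try:
        flags = QUOTA_LIMITS_HARDWS_MIN_ENABLE | QUOTA_LIMITS_HARDWS_MAX_ENABLE
        return bool(kernel32.SetProcessWorkingSetSizeEx(
            handle, ctypes.c_size_t(lo), ctypes.c_size_t(hi), flags))
    finally:
        kernel32.CloseHandle(handle)

if sys.platform == "win32":
    cap_process(pid=1234, cap_mb=245)  # hypothetical sqlservr.exe PID
```

In practice you would enumerate the sqlservr.exe processes (one per instance) and apply the same bounds to each, which is what makes the configuration symmetric.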

Solution

Each server is a quad core virtual machine with 16GB of RAM provisioned on a server rack.

We populate each server with the maximum of 50 instances allowed by SQL Server. Each instance will be accessed sporadically by a maximum of 10 clients simultaneously, and each database will in general only be around 100MB to 500MB in size.

IMHO, your total RAM is too low. Please read my answer (with relevant links) to "SQL Server Maximum and Minimum memory configuration"; the recommendations change when you have multiple instances of SQL Server running on a given host.

Capping SQL Server max memory on a multi-instance server is a balancing act, and max server memory applies only to the buffer pool. If SQL Server needs more memory outside the buffer pool, it is going to use it.
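The max server memory cap itself is set per instance with sp_configure. Purely as an illustration (the 245 MB value and the generator function are my assumptions, not part of the answer), a small helper can emit the identical T-SQL batch to run against each of the 50 instances:

```python
# Illustrative generator for the per-instance 'max server memory' cap.
# The cap value is an assumption; 'max server memory (MB)' is an advanced
# option, so 'show advanced options' must be enabled first.
def max_memory_script(cap_mb: int) -> str:
    """Build the sp_configure batch that caps buffer-pool memory at cap_mb."""
    return "\n".join([
        "EXEC sp_configure 'show advanced options', 1;",
        "RECONFIGURE;",
        f"EXEC sp_configure 'max server memory (MB)', {cap_mb};",
        "RECONFIGURE;",
    ])

print(max_memory_script(245))
```

Because the cap governs only the buffer pool, budget some headroom per instance for threads, plans, and other non-buffer-pool allocations when picking the value.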

You can even use Lock Pages in Memory (I would still opt for more memory before enabling LPM in your case).

As a starting point,

- Baseline your instances. This will help you gauge what is good / acceptable for your workload.

- Use the OptimizeInstanceMemory script from Aaron's blog to help you get started. The blog post covers how to rebalance max memory dynamically when a failover happens.

As a side note, you should monitor CPU, memory, and disk utilization, and based on the usage per client, you should charge them accordingly. Alternatively, you can move to Azure :-)

Context

StackExchange Database Administrators Q#142578, answer score: 13
