HiveBrain v1.2.0

MaxScale memory requirements

Submitted by: @import:stackexchange-dba

Problem

What are the expected production memory/CPU requirements for MaxScale?

We have a server configured with 4 GB of memory to run MaxScale as a read/write query router, with a replication manager running on the same server.

I have found a problem when doing a large number of inserts in a single transaction, in the millions of rows: a 5 GB file with 10 million rows, loaded using LOAD DATA INFILE. Running the same LOAD DATA INFILE against the backend server directly works without any issues.
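The kind of statement involved looks roughly like this (a hypothetical sketch; the actual file path, table name, and delimiters were not given in the post):

```sql
-- Hypothetical example; real names and paths were not part of the original post
LOAD DATA LOCAL INFILE '/data/bigfile.csv'
INTO TABLE mydb.big_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```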

This is the max_allowed_packet on the backend server.

```
MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet'\G
*************************** 1. row ***************************
Variable_name: max_allowed_packet
        Value: 16777216
```
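For reference, if the 16 MiB packet limit were ever a factor, it could be raised on the backend like so (an assumption for illustration only; the post does not indicate this was needed):

```sql
-- Illustrative only: raise the limit to 1 GiB for the running server
SET GLOBAL max_allowed_packet = 1073741824;
-- To persist it, also set it in the server config:
-- [mysqld]
-- max_allowed_packet = 1G
```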


The backend server has 32 GB of RAM, with no services currently using it, as we are still testing it out and tuning the configuration.

The MaxScale server runs out of memory: this large set of inserts causes MaxScale to crash.

I see the following errors on my client side.

```
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server at 'reading initial communication packet', system error: 111
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server at 'reading initial communication packet', system error: 111
```


And on the server side I see the following in the MaxScale logs.

```
2017-10-16 19:06:32 notice : Started MaxScale log flusher.
2017-10-16 19:06:32 notice : MaxScale started with 7 server threads.
2017-10-16 19:15:14 notice : Waiting for housekeeper to shut down.
2017-10-16 19:15:15 notice : Finished MaxScale log flusher.
2017-10-16 19:15:15 notice : Housekeeper shutting down.
2017-10-16 19:15:15 notice : Housekeeper has shut down.
2017-10-16 19:15:15 notice : MaxScale received signal SIGTERM. Exiting.
2017-10-16 19:15:15 notice : MaxScale is shutting down.
2017-10-16 19:15:15 notice : MaxScale shutdown completed.
2017-10-16 19:1
```

Solution

Update:

The amount of memory MaxScale uses for data buffering can be limited with the writeq_high_water and writeq_low_water parameters. This is the recommended way of dealing with excessive memory use in MaxScale.
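A sketch of how these parameters might be set in maxscale.cnf (the limits and size suffixes below are illustrative, not recommendations from the original answer):

```ini
[maxscale]
threads=auto

# Stop reading from the client once this much data is buffered
# for a backend (illustrative value)
writeq_high_water=16Mi

# Resume reading once the buffer drains below this (illustrative value)
writeq_low_water=8Mi
```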

MaxScale should stream the LOAD DATA LOCAL INFILE directly to the server without buffering it.

As MaxScale uses non-blocking IO, some buffering can occur if the client side network has higher throughput than the backend side network. If this happens, it could be that MaxScale is forced to buffer the data until the network buffers on the backend side are emptied.
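The buffering dynamic described above can be illustrated with a minimal sketch of high/low water-mark backpressure (hypothetical Python, not MaxScale code; the class and method names are my own):

```python
# Illustrative sketch of water-mark backpressure: when the client-side
# network outpaces the backend, buffered bytes grow until a high-water
# mark is hit, at which point the proxy stops reading from the client.
HIGH_WATER = 16 * 1024 * 1024   # stop reading above this many buffered bytes
LOW_WATER = 8 * 1024 * 1024     # resume reading once the buffer drains below this

class WriteQueue:
    def __init__(self):
        self.buffered = 0        # bytes queued for the slower backend
        self.reading = True      # whether we accept more client data

    def on_client_data(self, nbytes):
        """Client sent nbytes; queue them for the backend."""
        self.buffered += nbytes
        if self.buffered >= HIGH_WATER:
            self.reading = False  # throttle: stop polling the client socket

    def on_backend_drained(self, nbytes):
        """Backend accepted nbytes from the queue."""
        self.buffered -= nbytes
        if not self.reading and self.buffered <= LOW_WATER:
            self.reading = True   # resume reading from the client

q = WriteQueue()
q.on_client_data(HIGH_WATER)                   # fast client fills the buffer
assert not q.reading                           # throttled at the high-water mark
q.on_backend_drained(HIGH_WATER - LOW_WATER)   # backend catches up
assert q.reading                               # resumed below the low-water mark
```

Without such limits, the buffer simply grows with the throughput gap, which matches the out-of-memory behaviour seen here.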

I did a quick test with a 1.5GB CSV file and a VM with 1GB memory. I was running MaxScale with the readconnroute router. Loading the file from the same machine caused a peak memory usage of around 90% for the MaxScale process. This leads me to believe that this is either a bug in MaxScale or an inherent limitation of the way MaxScale buffers data.

I would recommend opening a bug report on the MariaDB Jira under the MaxScale project to track this issue: https://jira.mariadb.org/browse/MXS

For the time being, I would say that adding more memory seems like an acceptable workaround for this.

Context

StackExchange Database Administrators Q#188589, answer score: 3
