HiveBrain v1.2.0

PostgreSQL out of memory while trying to allocate ~500MB

Submitted by: @import:stackexchange-dba

Problem

I started getting out-of-memory errors on a PostgreSQL 9.1 server running on x64 Debian with 4GB of RAM, dedicated to PostgreSQL.

Top returns something like this, which suggests that the problem is not a lack of available memory on the machine itself.

Mem:   4113160k total,  3235968k used,   877192k free,    71028k buffers
Swap:   379896k total,       20k used,   379876k free,  2865136k cached
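A quick sanity check on those top figures (plain arithmetic on the numbers above; "cached" is page cache the kernel can reclaim, so it effectively counts as available):

```python
# Figures from the top output above, in kB.
free_kb = 877192
cached_kb = 2865136

# Page cache is reclaimable, so free + cached approximates what is
# actually available to PostgreSQL.
available_kb = free_kb + cached_kb
print(round(available_kb / 1024 / 1024, 2), "GB effectively available")
```

So roughly 3.5GB of the 4GB is available, which is why the machine itself does not look memory-starved.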


And the config:

max_connections = 100           # (change requires restart)
ssl = true              # (change requires restart)
shared_buffers = 1500MB         # min 128kB
work_mem = 2MB
maintenance_work_mem = 128MB
effective_cache_size = 3GB
log_min_duration_statement = 2000
log_lock_waits = on
track_counts = on
autovacuum = on
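As a rough sanity check on these settings (a simplistic model of my own, assuming one work_mem allocation per backend; a complex query can in fact use several work_mem-sized allocations at once):

```python
# Settings from the config above.
max_connections = 100
shared_buffers_mb = 1500
work_mem_mb = 2
maintenance_work_mem_mb = 128

# Crude worst case: shared buffers, plus one sort/hash per backend,
# plus one maintenance operation (e.g. autovacuum) at the same time.
peak_mb = (shared_buffers_mb
           + max_connections * work_mem_mb
           + maintenance_work_mem_mb)
print(peak_mb, "MB")  # 1828 MB -- comfortably under 4 GB by this model
```

By this crude model the configuration alone should fit in 4GB, which points at the failing 512MB allocation being something other than ordinary work_mem usage.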


The query logged here is made by OpenFire, but the interesting part is that the OpenFire database is almost empty. Still, this could be part of a bigger transaction that takes a lot of memory (like trying to populate the list of users from LDAP). Right now, no table has more than 50 rows.

```
2013-07-19 13:29:46 UTC ERROR: out of memory
2013-07-19 13:29:46 UTC DETAIL: Failed on request of size 536870912.
2013-07-19 13:29:46 UTC STATEMENT: SELECT bytes from ofRRDs where id = $1
TopMemoryContext: 68688 total in 10 blocks; 2184 free (7 chunks); 66504 used
TopTransactionContext: 8192 total in 1 blocks; 7328 free (0 chunks); 864 used
TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
MessageContext: 8192 total in 1 blocks; 6976 free (0 chunks); 1216 used
Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
smgr relation table: 24576 total in 2 blocks; 9808 free (4 chunks); 14768 used
TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PortalMemory: 8192 total in 1 blocks; 7888 free (0 c
```

Solution

A possible explanation is that the requested ofRRDs.bytes column value is very big and the system fails to find a contiguous piece of about 512MB of RAM to generate its textual representation.
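The request size in the error detail matches this exactly (plain arithmetic on the logged figure, no other assumptions):

```python
# From the log: "Failed on request of size 536870912."
request_bytes = 536870912

# Express it in MiB.
mib = 1024 * 1024
print(request_bytes / mib, "MiB")  # 512.0 MiB
```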

A similar problem is explained in this message from pgsql-general. Excerpt:


The textual representation of arbitrary bytea data is normally several
times the size of the raw bits (worst case is 5x bigger, typical case
perhaps half that). In addition to that you have to consider that
there are likely to be several copies of the string floating around in
your process' memory space.

This was written before PostgreSQL 9.x introduced the default hex output format for bytea, but the size multiplier is still 2x on PostgreSQL 9.1.
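A minimal illustration of that 2x multiplier (a sketch in Python; PostgreSQL's hex bytea output is a `\x` prefix followed by two hex digits per byte):

```python
raw = bytes(range(16))       # 16 bytes of raw bytea data

# PostgreSQL 9.x hex output format: '\x' prefix, then 2 hex chars per byte.
text = "\\x" + raw.hex()

print(len(raw), len(text))   # 16 34 -- roughly 2x, plus a 2-byte prefix
```

So a ~256MB stored value already needs a ~512MB text buffer just for its output representation, before counting any extra copies in the backend.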

With only 4GB of RAM and 1.5GB of it taken by shared_buffers, it's plausible to end up in a legitimate out-of-memory condition.

But see the result of:

SELECT id,octet_length(bytes) FROM ofRRDs ORDER BY 2;


to check the actual content sizes in this column.
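If the values do turn out to be hundreds of megabytes, one possible workaround (a sketch, not part of the original answer; the `fetch_chunk` helper and the `substring(...)` query shown in the comments are illustrative assumptions) is to fetch the bytea in slices instead of one huge allocation:

```python
# Sketch: retrieve a large bytea value in fixed-size slices rather than in
# one piece. Server-side, each slice would come from something like:
#   SELECT substring(bytes FROM <offset> + 1 FOR <length>) FROM ofRRDs WHERE id = ...
# Here the database round-trip is simulated with a plain bytes object.

CHUNK = 1024 * 1024  # 1 MiB per slice


def fetch_chunk(value: bytes, offset: int, length: int) -> bytes:
    """Stand-in for the substring() query: return one slice of the value."""
    return value[offset:offset + length]


def fetch_in_chunks(value: bytes, chunk_size: int = CHUNK) -> bytes:
    """Reassemble the value from slices, never requesting it whole."""
    parts = []
    offset = 0
    while True:
        piece = fetch_chunk(value, offset, chunk_size)
        if not piece:
            break
        parts.append(piece)
        offset += len(piece)
    return b"".join(parts)
```

The client still holds the whole value at the end, but the server never has to build a single ~512MB textual representation in one contiguous allocation.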

Code Snippets

SELECT id,octet_length(bytes) FROM ofRRDs ORDER BY 2;

Context

StackExchange Database Administrators Q#46675, answer score: 3
