PostgreSQL Error: out of memory
Tags: postgresql, error, out-of-memory
Problem
I'm trying to run a query that should return around 2,000 rows, but my RDS-hosted PostgreSQL 9.3 database is giving me the error `out of memory DETAIL: Failed on request of size 2048.`
What does that mean? My instance has 3GB of memory, so what would be limiting it enough to run out of memory with such a small query?
Edit:

```
SHOW work_mem;
"1024GB"
```

I can't show the full SQL, but it's attempting to perform a pivot. I have two primary tables, library and book, the latter of which points back to a library record. My query attempts to find the most popular book for each of the last 12 months for each library record, and join them in as separate columns in the result queryset, to have something like:

```
library_id, month_1_book_id, month_2_book_id, month_3_book_id, ...
```

Explain shows this results in quite a few loops:

```
explain
select * from myapp_library_get_monthly_popular
where id in (5495060, 5495059, 5495048)

Nested Loop Left Join  (cost=3645798.54..3750412.91 rows=3 width=2980)
  ->  Nested Loop Left Join  (cost=3645798.10..3750388.98 rows=3 width=2994)
    ->  Nested Loop Left Join  (cost=3645797.66..3750365.05 rows=3 width=2976)
      ->  Nested Loop Left Join  (cost=3645797.23..3750341.13 rows=3 width=2958)
        ->  Nested Loop Left Join  (cost=3645796.79..3750317.20 rows=3 width=2940)
          ->  Nested Loop Left Join  (cost=3645796.35..3750293.27 rows=3 width=2922)
            ->  Nested Loop Left Join  (cost=3645795.91..3750269.35 rows=3 width=2904)
              ->  Nested Loop Left Join  (cost=3645795.48..3750245.42 rows=3 width=2886)
                ->  Nested Loop Left Join  (cost=3645795.04..3750221.49 rows=3 width=2868)
                  ->  Nested Loop Left Join  (cost=3645794.60..3750197.57 rows=3 width=2850)
                    ->  Nested Loop Left Join
```
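The real view definition isn't shown, but a pivot of this shape (purely illustrative; the table name `monthly_popular` and its columns are assumptions, not from the question) would produce exactly one Nested Loop Left Join node per month in the plan:

```sql
-- Hypothetical sketch: one LEFT JOIN per month yields a stack of
-- twelve Nested Loop Left Join nodes like the plan above.
SELECT l.id       AS library_id,
       m1.book_id AS month_1_book_id,
       m2.book_id AS month_2_book_id
       -- ... and so on through month_12_book_id
FROM library l
LEFT JOIN monthly_popular m1 ON m1.library_id = l.id AND m1.month = 1
LEFT JOIN monthly_popular m2 ON m2.library_id = l.id AND m2.month = 2;
-- ... ten more such joins
```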
Solution
First, let's assume that work_mem is at 1024MB, and not the impossible 1024GB reported (impossible with a total of 3GB on the machine). Anyway, it's much too high. As said in Resource Consumption in the PostgreSQL documentation, with some emphasis added:

> work_mem (integer)
>
> Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The value defaults to one megabyte (1MB). **Note that for a complex query, several sort or hash operations might be running in parallel**; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. **Therefore, the total memory used could be many times the value of work_mem**; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.

The query whose plan is shown is complex and requires several levels of hashing, so you're clearly in the case the doc is warning against.
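As a quick way to test a saner value, work_mem can be lowered for a single session before re-running the query (a sketch; the 64MB figure is just a moderate choice for a 3GB instance, not a prescription):

```sql
-- Per-session override; no server restart needed. On RDS, the
-- instance-wide default is set in the DB parameter group instead.
SET work_mem = '64MB';
SHOW work_mem;   -- confirm the new value

-- shared_buffers cannot be changed per session: it is allocated at
-- startup, so changing it requires a restart (on RDS: edit the
-- parameter group and reboot the instance).
```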
The default of 1MB is conservative, but I wouldn't raise work_mem above 128MB for a 3GB instance. shared_buffers, on the other hand, could be set to 1024MB: it is allocated only once and kept for the entire instance's lifetime.

Context
StackExchange Database Administrators Q#64570, answer score: 15