Postgres: understanding amount of WAL files/size generated
Problem
I'm constantly caught by surprise at how certain operations generate a huge amount of WAL files for me.
I want these WAL files for point-in-time recovery (I also perform a nightly full dump in addition), so the basic functionality provided is wanted and I don't want to change that (i.e. I'm not searching for a way to turn WAL archiving off, etc.).
Using Postgres 9.5 with these settings:
wal_level = archive
checkpoint_timeout = 20min
max_wal_size = 1GB
min_wal_size = 80MB
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
I had run this statement today:
WITH table2_only_names AS (
SELECT id , name FROM table2
)
UPDATE table1
SET table2_name = table2_only_names.name
FROM table2_only_names
WHERE table1.table2_id = table2_only_names.id;
Table1
CREATE TABLE public.table1 (
id BIGINT PRIMARY KEY NOT NULL DEFAULT nextval('table1_id_seq'::regclass),
table2_id BIGINT,
table3_id BIGINT,
positive_count INTEGER NOT NULL DEFAULT 0,
neutral_count INTEGER NOT NULL DEFAULT 0,
negative_count INTEGER NOT NULL DEFAULT 0,
is_blocked BOOLEAN NOT NULL DEFAULT false,
blocker_id BIGINT,
"group" BIGINT,
created TIMESTAMP WITH TIME ZONE,
modified TIMESTAMP WITH TIME ZONE,
table4_id BIGINT,
name CHARACTER VARYING(255)
);
CREATE UNIQUE INDEX idx1 ON table1 USING BTREE (table3_id, table2_id);
CREATE INDEX idx2 ON table1 USING BTREE (table3_id);
CREATE INDEX idx3 ON table1 USING BTREE (table2_id);
CREATE INDEX idx4 ON table1 USING BTREE ("group");
CREATE INDEX idx5 ON table1 USING BTREE (blocker_id);
CREATE INDEX idx6 ON table1 USING BTREE (table4_id);
15 million rows, table size ~3.4 GB, index size 7 GB
Table2
CREATE TABLE public.table2 (
id BIGINT PRIMARY KEY NOT NULL DEFAULT nextval('table2_id_seq'::regclass),
name CHARACTER VARYING(255)
);
10 million rows, table size ~2 GB, index size 3.4 GB
The runtime was about 55 minutes (not complaining here).
The amount of generated WAL files was unexpectedly huge.
Solution
When you update a row in PostgreSQL, it generally makes a copy of the entire row (not just the column that was updated) and marks the old row as deleted. The new copy is going to need to get WAL logged in its entirety. The old row is probably also going to be WAL logged in its entirety, on average, if you have full_page_writes turned on and you are checkpointing too closely together.
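You can measure this effect directly. On 9.5 the current WAL insert position is exposed via pg_current_xlog_insert_location(), and pg_xlog_location_diff() converts two positions into a byte count (these are the 9.x function names; they were renamed to pg_current_wal_insert_lsn() and pg_wal_lsn_diff() in version 10). The LSN shown below is only an illustrative placeholder:

```sql
-- Before the big UPDATE: note the current WAL insert position.
SELECT pg_current_xlog_insert_location();   -- returns an LSN, e.g. '2F/8A2C3D10'

-- ... run the UPDATE ...

-- Afterwards: how many WAL bytes were written in between?
-- (substitute the LSN captured above)
SELECT pg_size_pretty(
         pg_xlog_location_diff(pg_current_xlog_insert_location(),
                               '2F/8A2C3D10'::pg_lsn));
```

This counts all WAL generated on the server during the interval, so run it on a quiet system if you want to attribute the volume to a single statement.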
Almost all of the updated rows are probably going to need to update all of the indexes for it, as well. That is because the new version of the row won't fit on the same page as the old version, so the indexes have to know where to find the new version.
So you are logging the entire table twice (once for the old rows, once for the new ones) and all of its indexes as well. And WAL records have quite a bit of overhead. And if you have full_page_writes turned on and checkpoint frequently, that will make it even worse.
So what are your options to reduce the volume?
1) If many of your updates are degenerate (updated to the value they already have) you can suppress those updates with an additional where clause:
WITH table2_only_names AS (
SELECT id , name FROM table2
)
UPDATE table1
SET table2_name = table2_only_names.name
FROM table2_only_names
WHERE table1.table2_id = table2_only_names.id
AND table2_name IS DISTINCT FROM table2_only_names.name;
2) Most WAL files are extremely compressible. You can include a compression command in your archive_command, something like
archive_command = 'set -C -o pipefail; xz -2 -c %p > /backup/wal/%f.xz'
Of course you will have to make your restore_command do the reverse.
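The restore side would then decompress each segment back into the path the server asks for. A minimal sketch, assuming the same /backup/wal/ archive location as above:

```
# recovery.conf -- counterpart to the compressing archive_command
restore_command = 'xz -dc /backup/wal/%f.xz > %p'
```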
3) Since you are using 9.5, you can try turning wal_compression on.
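wal_compression (new in 9.5) compresses the full-page images inside the WAL stream itself, which is exactly the component inflated by full_page_writes. It can be changed without a restart, for example:

```sql
ALTER SYSTEM SET wal_compression = on;
SELECT pg_reload_conf();
```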
4) You could try turning off full_page_writes, although on most storage hardware this puts your data at risk of corruption in the case of a crash. Or, if you have frequent checkpoints during this operation, you could make checkpoints occur much less frequently, which will lessen the impact of having full_page_writes turned on.
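With your max_wal_size = 1GB, a multi-gigabyte UPDATE forces many size-triggered checkpoints, and each checkpoint means every page touched afterwards gets a fresh full-page image. Spacing checkpoints out for the duration of the bulk update reduces that; the values below are illustrative, not tuned recommendations:

```
# postgresql.conf -- temporary settings for a bulk-update window
max_wal_size = 16GB        # fewer checkpoints triggered by WAL volume
checkpoint_timeout = 1h    # fewer checkpoints triggered by the clock
```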
Context
StackExchange Database Administrators Q#154188, answer score: 9