HiveBrain v1.2.0

Select columns inside json_agg

Submitted by: @import:stackexchange-dba

Problem

I have a query like:

SELECT a.id, a.name, json_agg(b.*) as "item"
  FROM a
  JOIN b ON b.item_id = a.id
 GROUP BY a.id, a.name;


How can I select the columns in b so I don't have b.item_id in the JSON object?

I have read about ROW, but it returns a JSON object like:

{"f1": "Foo", "f2": "Bar"}


I would need to remap the JSON object once it is fetched to match the proper column keys. I'd like to avoid that and keep original column names.

Solution

Assuming a.id is the PRIMARY KEY: grouping by it then covers all columns of table a, so a.name need not be repeated in GROUP BY.
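The question doesn't include table definitions. A minimal schema consistent with all the examples below could look like this (the names col1 and col2 and their data types are assumptions, not from the original post):

```sql
-- Hypothetical schema; col1/col2 names and types are assumed
CREATE TABLE a (
  id   int PRIMARY KEY,
  name text
);

CREATE TABLE b (
  id      int PRIMARY KEY,
  item_id int REFERENCES a (id),  -- the FK we want to omit from the JSON
  col1    int,
  col2    text
);
```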

There is no provision in SQL syntax to say "all columns except this one".
Postprocessing

But since Postgres 9.5 we can say "all keys except this one" for jsonb objects, using the minus operator - with text as its second operand. This happens after converting the row to jsonb, but before aggregating. We need jsonb for this, not json!

SELECT a.id, a.name
     , jsonb_agg(to_jsonb(b) - 'item_id') AS item
FROM   a JOIN b ON b.item_id = a.id
GROUP  BY a.id;
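As a self-contained illustration of the key subtraction (sample values are made up, not from the original answer):

```sql
-- to_jsonb() keeps column names; "- 'item_id'" drops that one key
SELECT to_jsonb(b) - 'item_id' AS item
FROM  (SELECT 1 AS id, 7 AS item_id, 'Foo'::text AS col1) AS b;
-- → {"id": 1, "col1": "Foo"}
```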


Since Postgres 10 "except several keys" is implemented with the - operator taking text[]:

SELECT a.id, a.name
     , jsonb_agg(to_jsonb(b) - '{item_id, col2}'::text[]) AS item
FROM   a JOIN b ON b.item_id = a.id
GROUP  BY a.id;



Note the explicit cast ::text[].
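To see why the cast matters (a standalone sketch, not part of the original answer): without it, the unknown string literal resolves to the single-key jsonb - text operator, which then looks for one key literally named {item_id, col2}:

```sql
-- jsonb - text: removes a single key
SELECT '{"a":1,"b":2,"c":3}'::jsonb - 'a';              -- {"b": 2, "c": 3}

-- jsonb - text[]: removes several keys (Postgres 10+)
SELECT '{"a":1,"b":2,"c":3}'::jsonb - '{a,b}'::text[];  -- {"c": 3}

-- without the cast, the literal is taken as ONE key named "{a,b}":
SELECT '{"a":1,"b":2,"c":3}'::jsonb - '{a,b}';          -- object unchanged
```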

(We could use json_agg() instead of jsonb_agg() to output the type json, but I don't see the point after converting to jsonb.)

Related:

  • How to select sub-object with given keys from JSONB?



Preprocessing

Or we can achieve the goal by spelling out the list of remaining columns in a row-type expression, before converting to JSON:

SELECT a.id, a.name
     , json_agg((b.id, b.col1, b.col2)) AS item
FROM   a JOIN b ON b.item_id = a.id
GROUP  BY a.id;


That's short for the more explicit form: ROW(b.id, b.col1, b.col2).

However, column names are not preserved in a row-type expression. We get generic key names in the JSON object; that's what you observed in your attempt.
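A minimal, standalone demonstration of those generic key names (this example also appears in the Postgres documentation for row_to_json()):

```sql
-- Field names of an anonymous row type degrade to f1, f2, ...
SELECT row_to_json(ROW(1, 'foo'));
-- → {"f1":1,"f2":"foo"}
```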

I see 3 options to preserve original column names:
  1. Cast to a registered type



Cast to a registered row type. A type is registered implicitly for every existing table(-like object), or with an explicit CREATE TYPE statement. We might create a temporary type as an ad-hoc solution for the current session:

CREATE TYPE pg_temp.tmp_x AS (id int, col1 int, col2 text);  -- adequate data types!


Then:

SELECT a.id, a.name
     , json_agg((b.id, b.col1, b.col2)::tmp_x) AS item
FROM   a JOIN b ON b.item_id = a.id
GROUP  BY a.id;


  2. Use a subselect



Use a subselect to construct a derived table and reference that as a whole. This also carries column names. More verbose, but we don't need a registered type:

SELECT a.id, a.name
     , json_agg((SELECT x FROM (SELECT b.id, b.col1, b.col2) AS x)) AS item
FROM   a JOIN b ON b.item_id = a.id
GROUP  BY a.id;
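The subselect trick can be checked in isolation (a minimal sketch with made-up values, not from the original answer):

```sql
-- The derived table x carries its column names into the row value
SELECT row_to_json((SELECT x FROM (SELECT 1 AS id, 'Foo' AS col1) AS x));
-- → {"id":1,"col1":"Foo"}
```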


  3. json_build_object() in Postgres 9.4 or later



SELECT a.id, a.name
     , json_agg(json_build_object('id', b.id, 'col1', b.col1, 'col2', b.col2)) AS item
FROM   a JOIN b ON b.item_id = a.id
GROUP  BY a.id;



All the same for jsonb with the respective functions jsonb_agg() and jsonb_build_object().
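For instance, the jsonb variant of option 3 in isolation (values are made up; note that jsonb reorders keys by length, then byte order):

```sql
SELECT jsonb_build_object('id', 1, 'col1', 'Foo');
-- → {"id": 1, "col1": "Foo"}
```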

Related:

  • Return as array of JSON objects in SQL (Postgres)


Context

StackExchange Database Administrators Q#69655, answer score: 109
