HiveBrain v1.2.0
Tags: gotcha, sql · Difficulty: Moderate

Why does this derived table improve performance?

Submitted by: @import:stackexchange-dba

Problem

I have a query which takes a JSON string as a parameter. The JSON is an array of latitude/longitude pairs. An example input might be the following:

declare @json nvarchar(max)= N'[[40.7592024,-73.9771259],[40.7126492,-74.0120867]
,[41.8662374,-87.6908788],[37.784873,-122.4056546]]';
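
For reference, openjson over this array emits one row per element, with [key] holding the zero-based array index and value holding the element text; the queries below rely on both columns. A minimal sketch (column aliases lat/lon are illustrative, not from the original):

```sql
declare @json nvarchar(max) = N'[[40.7592024,-73.9771259],[40.7126492,-74.0120867]]';

select [key]                               -- zero-based array index: 0, 1, ...
      ,lat = json_value(value, '$[0]')     -- first element of each inner array
      ,lon = json_value(value, '$[1]')     -- second element
from openjson(@json);
```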


It calls a TVF that calculates the number of POIs around a geographical point, at 1, 3, 5, and 10 mile distances.

create or alter function [dbo].[fn_poi_in_dist](@geo geography)
returns table
with schemabinding as
return
select count_1  = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 1,  1, 0e))
      ,count_3  = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 3,  1, 0e))
      ,count_5  = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 5,  1, 0e))
      ,count_10 = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 10, 1, 0e))
from dbo.point_of_interest

The intent of the JSON query is to bulk call this function. If I call it like this, the performance is very poor, taking nearly 10 seconds for just 4 points:

select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from openjson(@json)
cross apply dbo.fn_poi_in_dist(
               geography::Point(
                          convert(float,json_value(value,'$[0]'))
                         ,convert(float,json_value(value,'$[1]'))
                         ,4326))


plan = https://www.brentozar.com/pastetheplan/?id=HJDCYd_o4

However, moving the construction of the geography inside a derived table causes the performance to improve dramatically, completing the query in about 1 second.

select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from (
select [key]
      ,geo = geography::Point(
                convert(float,json_value(value,'$[0]'))
               ,convert(float,json_value(value,'$[1]'))
               ,4326)
from openjson(@json)
) a
cross apply dbo.fn_poi_in_dist(geo)


plan = https://www.brentozar.com/pastetheplan/?id=HkSS5_OoE

The plans look virtually identical. Neither uses parallelism and both use the spatial index. There is an additional lazy spool in the slow plan that I can eliminate with the hint option(no_performance_spool), but the query performance does not change; it remains much slower.

Running both queries with the added hint in a single batch weights the two equally in the estimated plan cost.
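
For clarity, the hint is appended at the end of the statement; a sketch of the slow query with the hint added (same query as above, nothing else changed):

```sql
select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from openjson(@json)
cross apply dbo.fn_poi_in_dist(
               geography::Point(
                          convert(float,json_value(value,'$[0]'))
                         ,convert(float,json_value(value,'$[1]'))
                         ,4326))
option(no_performance_spool);  -- removes the lazy spool, but runtime stays slow
```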


Solution

I can give you a partial answer that explains why you are seeing the performance difference, though it still leaves some open questions (such as: can SQL Server produce the more optimal plan without introducing an intermediate table expression that projects the expression as a column?).

The difference is that in the fast plan the work needed to parse the JSON array elements and create the geography instance is done only 4 times (once for each row emitted by the openjson function), whereas in the slow plan it is done more than 100,000 times.

In the fast plan...

geography::Point(
                convert(float,json_value(value,'$[0]'))
               ,convert(float,json_value(value,'$[1]'))
               ,4326)


is assigned to Expr1000 in the compute scalar to the left of the openjson function. This corresponds to geo in your derived table definition.

In the fast plan the filter and stream aggregate reference Expr1000. In the slow plan they reference the full underlying expression.

[Image: stream aggregate properties]

The filter is executed 116,995 times, with each execution requiring an expression evaluation. The stream aggregate has 110,520 rows flowing into it for aggregation and creates three separate aggregates using this expression. 110,520 * 3 + 116,995 = 448,555. Even if each individual evaluation takes only 18 microseconds, this adds up to about 8 seconds of additional time for the query as a whole.

You can see the effect of this in the actual time statistics in the plan XML (annotated in red below for the slow plan and blue for the fast plan; times are in ms).

The stream aggregate has an elapsed time 6.209 seconds greater than its immediate child, and the bulk of the child's time was taken up by the filter. This corresponds to the extra expression evaluations.

As an aside: in general it is not guaranteed that expressions labelled like Expr1000 are calculated only once rather than re-evaluated, but the execution timing discrepancy makes clear that is what happens here.
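
If the goal is simply to project the expression as a named column, a cross apply (values ...) is an equivalent, slightly more compact pattern than the derived table. This is a sketch of the same idea, not something from the original answer, and it still introduces an intermediate table expression, so it does not resolve the open question above:

```sql
select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from openjson(@json)
cross apply (values (geography::Point(
                        convert(float,json_value(value,'$[0]'))
                       ,convert(float,json_value(value,'$[1]'))
                       ,4326))) v(geo)   -- projects the expression as column geo
cross apply dbo.fn_poi_in_dist(v.geo);
```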


Context

StackExchange Database Administrators Q#237217, answer score: 15
