Performance of PolyBase
Problem
We've been experimenting with PolyBase on a SQL Server 2019 (CU2) box, using SQL Server external data sources, and the performance has not been good: in most cases a roughly 1400% increase in elapsed time. In each case, all of the tables/views we're querying are external tables pointing at the same external data source. We've tried both running the query broken out on the local box and pulling in the same query as a view exposed through an external table. We've also scripted every statistic from the remote server onto the external tables, with no change. You can see the performance difference below, using a sample query.
The servers are set up identically resource-wise: 32 GB of RAM, 8 vCPUs, SSD disks, and no other running queries. I've tried against two different remote servers, one running SQL Server 2016 with the latest SP/CU, and a separate 2019 box running CU2. The servers are VMs running on the same host, and we've ruled out any type of host contention.
Sample Query:
SELECT
StockItem_StockNumber, BlanktypeId, NameHTML, BackgroundStrainName, IsExact, IsConditional
,ROW_NUMBER() Over(Partition By StockItem_StockNumber, BlanktypeId Order By pt.Name, p.Name, gptr.Text) as row_num
,pt.Name as Level1, p.Name as Level2, gptr.Text as Level3, MGIReference_JNumber
,gptr.Type as Level3Type
FROM
1 sig
INNER JOIN 2 g on g.BlanktypeId = sig.Blanktype_BlanktypeId
INNER JOIN 3 gpt on gpt.Blanktype_BlanktypeId = g.BlanktypeId
INNER JOIN 4 p on p.StocktypeTermId = gpt.StocktypeTerm_StocktypeTermId
INNER JOIN 5 gptr on gptr.BlanktypeStockTerm_BlanktypeStockTermId = gpt.BlanktypeStockTermId
INNER JOIN 6 ptc on ptc.ChildStockTerm_StocktypeTermId = p.StocktypeTermId
INNER JOIN 7 pt on pt.StocktypeTermId = ptc.ParentStockTerm_StocktypeTermId
WHERE
ptc.ParentHeaderKey = 3

Running on the remote 2016/2019 box directly:
SQL Server Execution Times:
CPU time = 3486 ms, elapsed time = 5035 ms.

Running on the 2019 box with PolyBase:
Solution
I'll work from the assumption that all of the tables listed in the query above are external tables pointing to the same external data source. With that in mind, here are a few considerations.
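For reference, a minimal sketch of the kind of setup being assumed here: a SQL Server 2019 external data source with external tables defined over it. All object names (RemoteSql, the credential, the host, and the column list) are illustrative, not taken from the question:

```sql
-- Hypothetical names throughout; requires an existing database master key.
CREATE DATABASE SCOPED CREDENTIAL RemoteSqlCred
WITH IDENTITY = 'polybase_user', SECRET = '...';

CREATE EXTERNAL DATA SOURCE RemoteSql
WITH (
    LOCATION = 'sqlserver://remote-host',
    CREDENTIAL = RemoteSqlCred,
    PUSHDOWN = ON  -- allow predicate pushdown to the remote instance
);

-- One external table per remote table, all pointing at the same data source:
CREATE EXTERNAL TABLE dbo.StocktypeTerms
(
    StocktypeTermId INT,
    Name NVARCHAR(255)
)
WITH (
    LOCATION = 'SourceDb.dbo.StocktypeTerms',
    DATA_SOURCE = RemoteSql
);
```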
Distributed Requests
There are two DMVs which will provide a bit more information than what you have currently:
sys.dm_exec_distributed_requests and sys.dm_exec_distributed_request_steps. Try running your slow query and see what shows up in the distributed requests DMV. Here is a sample query I like to use for this purpose:

SELECT TOP(100)
r.execution_id,
r.status,
r.error_id,
r.start_time,
r.end_time,
r.total_elapsed_time,
t.text
FROM sys.dm_exec_distributed_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
ORDER BY
r.end_time DESC;
GO

You might end up seeing multiple results for a single query execution.
For each one of these, you can get the set of steps involved (changing the query to use whatever your execution IDs are):
SELECT
rs.execution_id,
rs.step_index,
rs.operation_type,
rs.distribution_type,
rs.location_type,
rs.[status],
rs.error_id,
rs.start_time,
rs.end_time,
rs.total_elapsed_time,
rs.row_count,
rs.command
FROM sys.dm_exec_distributed_request_steps rs
WHERE rs.execution_id IN ('QID573', 'QID574')
ORDER BY
rs.execution_id DESC,
rs.step_index ASC;
GO

What I tend to look for is "excessive" row counts. For example, if I'm expecting a small number of rows back but the row count is much higher for a given step, then the PolyBase data movement service is sending over many more rows than I'd ideally like and forcing the PolyBase engine to do the dirty work of combining tables together. That leads to the next consideration.
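To spot those excessive row counts quickly, a small variation on the step query above can aggregate the rows moved per execution (the execution IDs here are placeholders, as before):

```sql
SELECT
    rs.execution_id,
    SUM(rs.row_count) AS total_rows_moved,
    MAX(rs.row_count) AS largest_step
FROM sys.dm_exec_distributed_request_steps rs
WHERE rs.execution_id IN ('QID573', 'QID574')  -- placeholder IDs
GROUP BY rs.execution_id
ORDER BY total_rows_moved DESC;
GO
```

If total_rows_moved dwarfs the final result set, the joins and filters are happening locally rather than being pushed down.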
Filters and Predicates
In your query, I don't see any explicit filters or predicates, but I wonder if there are implicit filters. For example, in the StocktypeTerms table, I see ParentHeaderKey = 3 in the join criteria. If this is a highly selective join criterion, it is likely that PolyBase is streaming all of the rows over and then performing the filters on the local side rather than performing the filter operation remotely and pulling in just the rows it needs.

There are several reasons why this might be the case, including complex filters (which this might be), a predicate which PolyBase cannot push down (there are some stringent limitations on what is available for pushdown), or a predicate formed from two different external data sources (which is a scenario I'd love to see but does not work well today). Given the lack of a
WHERE clause, I won't dive into more detail on this section.

Network Performance
If you have network issues between the external data source and your local SQL Server instance, that can result in slowdowns. In your second update, you mention creating an external table against a single table and streaming all of its data down, with only minor timing differences. This is an indication that network speed is not a significant issue in your case.
Specific Guidance
With the above in mind, here's what I would recommend for your specific scenario as a way of figuring out what might be the issue.
If you see that more rows are coming to the local SQL Server instance than the 313,585 indicated in your execution plan, that means there is extra work happening on your local SQL Server instance which could be handled remotely. There are two things which might help.
First, try to run your query with OPTION (FORCE EXTERNALPUSHDOWN). You mentioned enabling and disabling predicate pushdown (by setting PUSHDOWN = ON and PUSHDOWN = OFF on the external data source definition), but it wasn't clear if you specified this hint. There is a non-zero chance that this will trigger the PolyBase engine to behave differently than what you see simply by writing the query. Given your specific query, my guess is that this query hint will not make a difference.

Second, create a view with the above SQL query on your remote data source. Then, create an external table locally that references that remote view. What this does is force the external data source to do the work of joining together all of these tables before sending anything to the local SQL Server.
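To make both suggestions concrete, here is a hedged sketch; the view name, external table name, column list, and data source name (RemoteSql) are all illustrative, not from the question:

```sql
-- 1) Query hint, appended to the existing query against the external tables:
-- SELECT ... FROM ... WHERE ... OPTION (FORCE EXTERNALPUSHDOWN);

-- 2) On the REMOTE server, wrap the sample query in a view (hypothetical name):
CREATE VIEW dbo.vStockItemLevels AS
SELECT ...;  -- the sample query from the question goes here
GO

-- 3) On the LOCAL server, expose that view as an external table so the
--    remote instance performs all the joins before anything crosses the wire:
CREATE EXTERNAL TABLE dbo.StockItemLevels
(
    StockItem_StockNumber INT,
    Level1 NVARCHAR(255),
    Level2 NVARCHAR(255),
    Level3 NVARCHAR(MAX)
    -- ...remaining columns, matching the view's output exactly
)
WITH (
    LOCATION = 'RemoteDb.dbo.vStockItemLevels',
    DATA_SOURCE = RemoteSql  -- the existing external data source
);
```

The trade-off is that the remote view is opaque to the local optimizer, so this helps most when the query shape is stable and the joined result is much smaller than the underlying tables.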
In my simple case, creating a view for my query resulted in fewer distributed requests and less time retrieving data.
Context
StackExchange Database Administrators Q#259551, answer score: 6