
Thursday, October 17, 2013

Forensic Query Pathology 201: Exceptionally Bad Query Plans

This one is rather difficult to explain without a diagram, so here it is: 


Here is a SELECT query. You can see that it pulls a fair number of rows from the base tables/clustered indexes -- approximately 6 million. But then something odd happens: the Filter removes approximately 5 million rows and leaves the query with an estimate of only 1 row. You are probably saying to yourself, "That's perfectly normal. You've clearly written a WHERE clause that filters out all the rows."

That's reasonable.
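For concreteness, the query was shaped roughly like the sketch below. The table and column names (dbo.FactSale, dbo.DimCustomer, and so on) are invented for illustration; the point is a join over large base tables followed by a very selective predicate, which is what surfaces as the Filter in the plan.

    DECLARE @BatchStart datetime2 = '20131016',
            @BatchEnd   datetime2 = '20131017';

    SELECT f.SaleID, f.CustomerID, f.ProductID, f.Amount
    FROM dbo.FactSale AS f
    JOIN dbo.DimCustomer AS c ON c.CustomerID = f.CustomerID
    JOIN dbo.DimProduct  AS p ON p.ProductID  = f.ProductID
    WHERE f.LoadDate >= @BatchStart      -- the selective predicate behind the Filter
      AND f.LoadDate <  @BatchEnd;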

Next, here is the same SELECT query wrapped in an INSERT DML statement:


Interesting. The query optimizer believes we are inserting a single row into this table, so it chooses nested loop joins to check the foreign keys -- and that makes a lot of sense: with only one row, a nested loop is the best join.
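A sketch of that statement, using the same invented names as above. The target table is assumed to carry foreign keys to the dimension tables, and each of those constraint checks is what the optimizer turns into a nested loop join when it expects a single row:

    DECLARE @BatchStart datetime2 = '20131016',
            @BatchEnd   datetime2 = '20131017';

    -- dbo.FactSaleHistory is assumed to have foreign keys to dbo.DimCustomer
    -- and dbo.DimProduct; validating them adds a join per constraint to the plan
    INSERT INTO dbo.FactSaleHistory (SaleID, CustomerID, ProductID, Amount)
    SELECT f.SaleID, f.CustomerID, f.ProductID, f.Amount
    FROM dbo.FactSale AS f
    JOIN dbo.DimCustomer AS c ON c.CustomerID = f.CustomerID
    JOIN dbo.DimProduct  AS p ON p.ProductID  = f.ProductID
    WHERE f.LoadDate >= @BatchStart
      AND f.LoadDate <  @BatchEnd;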

All's well that ends well, right? This INSERT statement takes only four minutes to run, because the SELECT grabs 6 million rows, filters them down, and inserts only a single row.

No.

Here's why. The original query plan was bunk: the Filter completely botched its estimate. It should have estimated 6 million rows, because that's how many actually passed through it. And since 6 million rows driving a nested loop join get multiplied by the number of rows on the inner side of the join (at least in this case), we actually had something on the order of 3.6 trillion rows to loop through. So in fact, the query was taking dozens of hours to insert those records.
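To put that in perspective: for 3.6 trillion to come out of 6 million outer rows, the inner side of the loop must have been on the order of 600,000 rows, since 6,000,000 × 600,000 = 3.6 × 10^12. One way to catch this kind of estimate-versus-actual mismatch is to run the statement with the profile option on, which returns the plan as a result set with the actual Rows column right next to EstimateRows for every operator. A minimal sketch, reusing the invented query from above:

    SET STATISTICS PROFILE ON;

    -- run the suspect statement; SQL Server returns an extra result set with one
    -- row per plan operator, including both Rows (actual) and EstimateRows
    SELECT f.SaleID, f.Amount
    FROM dbo.FactSale AS f
    JOIN dbo.DimCustomer AS c ON c.CustomerID = f.CustomerID
    WHERE f.LoadDate >= '20131016';

    SET STATISTICS PROFILE OFF;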

I am really beginning to hate the SQL Server query optimizer.

So now you're asking yourself whether updating the statistics helped, and the answer is no. We have so many hundreds of millions of rows in the base tables that the fraction of a percent we add per day during ETL gets completely underestimated by the stats engine, to the point where the estimates are effectively zero. This is a big problem for Microsoft's query planner and an even bigger problem for us. The actual solution was to materialize the intermediate result (in this case I used a temp table): once the insert ran against the temp table, the row estimate was 100% accurate and the query planner selected the proper merge/hash joins.
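A minimal sketch of that rewrite, still using the invented names from above: materialize the filtered rows into a temp table first, then run the INSERT against the temp table, whose row count the optimizer knows exactly when it compiles the insert.

    DECLARE @BatchStart datetime2 = '20131016',
            @BatchEnd   datetime2 = '20131017';

    -- Step 1: materialize the filtered result; the temp table now has a
    -- concrete row count (and its own statistics) for the optimizer to use
    SELECT f.SaleID, f.CustomerID, f.ProductID, f.Amount
    INTO #Staging
    FROM dbo.FactSale AS f
    JOIN dbo.DimCustomer AS c ON c.CustomerID = f.CustomerID
    JOIN dbo.DimProduct  AS p ON p.ProductID  = f.ProductID
    WHERE f.LoadDate >= @BatchStart
      AND f.LoadDate <  @BatchEnd;

    -- Step 2: with an accurate estimate for the source rows, the foreign key
    -- checks come out as merge/hash joins instead of 6-million-iteration loops
    INSERT INTO dbo.FactSaleHistory (SaleID, CustomerID, ProductID, Amount)
    SELECT SaleID, CustomerID, ProductID, Amount
    FROM #Staging;

    DROP TABLE #Staging;

The cost is one extra pass of the 6 million rows through tempdb, which is trivial next to the runaway nested loops it replaces.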



