• I have experimented with the approach above for a couple of years in a production system.

    While it seems to simplify development at first, I personally no longer recommend the use of dynamic SQL in heavy OLTP environments, for the following reasons:

    1) The execution plans are not easily predictable and there are many possible code paths. Developers will usually test 2 or 3 use cases, not a hundred. It is easy to miss a combination of parameters that is very slow, contains an index scan, or does something else you have not thought about. This lack of predictability has to be compensated by a lot of tests or by very close monitoring of the production machines. But those are a shared resource, most of the time. Even a single slow corner-case combination of parameters executed once every 5 seconds can choke the whole system.

    2) Considering that an application has to stay stable over time and through modifications, the consequence of the problem above is that it gets worse over time, taking into account developer turnover. The natural future of such an application is to have so many "IFs" that you no longer really know which ones are used.

    3) Since there is some SQL within quotes, there is always a probability that, over time and modifications, it will not compile in some corner case you have not seen in tests.

    4) It gets complicated to design a good index structure to handle all those use cases. The tendency is then to have too many or too large indexes.

    5) It should not be a deciding reason, but I have seen production SQL Servers crash because they were not able to compile an execution plan for some corner case of such dynamic queries. I have opened cases with Microsoft SQL Server customer support for those.

    6) The only reason people use it is that they were unsuccessful with the former approach, without really understanding why the SQL compiler was not giving them what they wanted. But if they really understood what was problematic, I bet they would prefer to solve it in a more stable way.

    Let's take the original query as an example:

    SELECT [Name], [Color], [ListPrice], [ModifiedDate]
    FROM [Production].[Product]
    WHERE [Name] LIKE coalesce(@Name, [Name])
    AND [Color] LIKE coalesce(@Color, [Color])
    AND [ListPrice] >= coalesce(@ListPriceMin, [ListPrice])
    AND [ListPrice] <= coalesce(@ListPriceMax, [ListPrice])
    AND [ModifiedDate] >= coalesce(@ModifiedDateMin, [ModifiedDate])
    AND [ModifiedDate] <= coalesce(@ModifiedDateMax, [ModifiedDate])

    If you look at the execution plan, you will see a clustered index scan on this table, and you may be a little puzzled as to why. The reason is that SQL Server cannot commit to an index, because a query optimized for one criterion would perform very poorly with another one. In addition, in this example, the COALESCE function forces a computation for every row, thus preventing the use of indexes. As a SQL developer, you have to understand all this.
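
    If you want to see this for yourself, here is a minimal sketch, assuming the AdventureWorks sample database; the parameter values are purely illustrative. STATISTICS IO shows the reads, and the actual execution plan should show the scan no matter which parameters you supply:

    SET STATISTICS IO ON;   -- report logical reads per table

    DECLARE @Name            nvarchar(50) = NULL,
            @Color           nvarchar(15) = N'Black',   -- illustrative test value
            @ListPriceMin    money        = NULL,
            @ListPriceMax    money        = NULL,
            @ModifiedDateMin datetime     = NULL,
            @ModifiedDateMax datetime     = NULL;

    SELECT [Name], [Color], [ListPrice], [ModifiedDate]
    FROM [Production].[Product]
    WHERE [Name] LIKE coalesce(@Name, [Name])
    AND [Color] LIKE coalesce(@Color, [Color])
    AND [ListPrice] >= coalesce(@ListPriceMin, [ListPrice])
    AND [ListPrice] <= coalesce(@ListPriceMax, [ListPrice])
    AND [ModifiedDate] >= coalesce(@ModifiedDateMin, [ModifiedDate])
    AND [ModifiedDate] <= coalesce(@ModifiedDateMax, [ModifiedDate]);
    -- The actual plan shows a Clustered Index Scan on Production.Product,
    -- regardless of which parameters are NULL and which are supplied.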

    You can help SQL Server choose its indexes by adding the CTE below:

    WITH Filtering AS
    (
        SELECT ProductID
        FROM [Production].[Product]
        WHERE @Name IS NOT NULL AND @Name = [Name]

        UNION ALL

        SELECT ProductID
        FROM [Production].[Product]
        WHERE @Color IS NOT NULL AND @Color = [Color]

        UNION ALL

        SELECT ProductID
        FROM [Production].[Product]
        WHERE NOT (@ListPriceMin IS NULL AND @ListPriceMax IS NULL)
        AND (@ListPriceMin IS NULL OR [ListPrice] >= @ListPriceMin)
        AND (@ListPriceMax IS NULL OR [ListPrice] <= @ListPriceMax)

        -- ...
    )
    SELECT [Name], [Color], [ListPrice], [ModifiedDate]
    FROM [Production].[Product]
    WHERE ProductID IN (SELECT ProductID FROM Filtering)
    AND [Name] LIKE coalesce(@Name, [Name])
    AND [Color] LIKE coalesce(@Color, [Color])
    AND [ListPrice] >= coalesce(@ListPriceMin, [ListPrice])
    AND [ListPrice] <= coalesce(@ListPriceMax, [ListPrice])
    AND [ModifiedDate] >= coalesce(@ModifiedDateMin, [ModifiedDate])
    AND [ModifiedDate] <= coalesce(@ModifiedDateMax, [ModifiedDate])

    As an outcome, you get all the possible index access paths combined in one single execution plan, and it becomes far more convenient to work out which indexes you want, which you do not want, which are still useful, and so on.
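
    Each UNION ALL branch can then be backed by a narrow nonclustered index. A minimal sketch, assuming AdventureWorks (the index names are illustrative, and the Name branch is already covered by an index shipped with the sample database):

    -- One narrow index per filtering branch; names are illustrative.
    CREATE NONCLUSTERED INDEX IX_Product_Color        ON [Production].[Product] ([Color]);
    CREATE NONCLUSTERED INDEX IX_Product_ListPrice    ON [Production].[Product] ([ListPrice]);
    CREATE NONCLUSTERED INDEX IX_Product_ModifiedDate ON [Production].[Product] ([ModifiedDate]);
    -- The existing unique index on [Name] (AK_Product_Name) already serves the @Name branch.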

    In other environments, where compilation time is not really a problem and the query is not directly exposed to users, I advise keeping the former statement and using the RECOMPILE option; this is usually more accessible to developers, though it forces DBAs to look at slow queries more often.
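
    Concretely, that just means appending the hint to the original statement; a minimal sketch:

    SELECT [Name], [Color], [ListPrice], [ModifiedDate]
    FROM [Production].[Product]
    WHERE [Name] LIKE coalesce(@Name, [Name])
    AND [Color] LIKE coalesce(@Color, [Color])
    AND [ListPrice] >= coalesce(@ListPriceMin, [ListPrice])
    AND [ListPrice] <= coalesce(@ListPriceMax, [ListPrice])
    AND [ModifiedDate] >= coalesce(@ModifiedDateMin, [ModifiedDate])
    AND [ModifiedDate] <= coalesce(@ModifiedDateMax, [ModifiedDate])
    OPTION (RECOMPILE);   -- the plan is rebuilt with the actual parameter values on every execution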

    The key tricks to understand here are:

    - Having one statement per optimized case (in the CTE, separated by UNION ALL) allows SQL Server to choose the proper index for each one. A quick look at the execution plan is enough to see it.

    - It is important to have the "WHERE @Name IS NOT NULL" guards in the Filtering CTE, because if @Name happens to be NULL, SQL Server will not perform any read on that part of the union (the scalar expression is evaluated before any fetching). So even if the execution plan looks bigger, the number of reads actually stays economical. It is written as a static query, but it is executed as a dynamic one; see the sketch after this list.
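
    To check that behaviour in isolation, here is a minimal sketch, assuming AdventureWorks. When the parameter is NULL, STATISTICS IO should report zero logical reads on the table for that branch, because the guard is evaluated before the table is touched:

    SET STATISTICS IO ON;

    DECLARE @Name nvarchar(50) = NULL;   -- guard is false, so this branch is skipped at run time

    SELECT ProductID
    FROM [Production].[Product]
    WHERE @Name IS NOT NULL AND [Name] = @Name;
    -- The plan still contains the access to [Name], but it typically sits under a Filter
    -- with a startup expression on "@Name IS NOT NULL", so no pages of the table are read.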

    Having used this approach for two years with a team, I saw a serious improvement in the quality of the developers' deliverables. After some training, the first attempt was usually good enough to go to production.