Thanks!

    Remember, to SQL Server, the two queries are not the same. Without the IF, the query asks for the actual count of rows from the view that meet the WHERE clause criteria. With the IF...>0, you're only asking whether at least one such row exists, and you don't need the exact count. The second question should be easier to answer than the first, but sometimes the optimizer's assumptions are very, very wide of the mark.
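    To make that transformation concrete, here's a sketch of the two forms the optimizer treats as equivalent (dbo.SomeView, SomeCol, and @value are placeholder names for illustration, not from your actual query):

    ```sql
    --What you write: count everything, then compare to zero
    IF ( SELECT COUNT(*)
         FROM   dbo.SomeView
         WHERE  SomeCol = @value ) > 0
        PRINT 'At least one matching row';

    --What the optimizer effectively runs: an existence check
    --with an implied row goal of 1, so it can stop at the first match
    IF EXISTS ( SELECT 1
                FROM   dbo.SomeView
                WHERE  SomeCol = @value )
        PRINT 'At least one matching row';
    ```

    That row goal of 1 is exactly what skews the estimates discussed below.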

    I know the actual plan for the bad query would be rough to acquire if it's taking 2.5 hours to run, but it would be nice to have. The estimated plans lack information that could be helpful (for example, the discrepancies between estimated and actual rows, and between estimated and actual executions).

    In this case, I suspect we would see exactly those discrepancies at work. Specifically, I would guess that in the actual plan for the long-running query, the outer input of the nested loops operator joining the Sort output to the scan of InvTransactions produces well more than one row, and that the scan reads more than 76,000 rows on each execution (note that in the "good" plan the estimated row count for that scan is 2.6 million).

    The optimizer is probably assuming that the implied row goal of 1 from the EXISTS (remember, it transforms IF (SELECT COUNT...)>0 into an EXISTS) will reduce the rows coming out of some of the operators much more than it actually does in this case. The Sort operator reducing the estimated row count is a hint that the row goal is at play; you wouldn't otherwise see that.

    There's been some investigation of this sort of problem (see http://blogs.msdn.com/b/bartd/archive/2012/03/14/row-goals-gone-rogue.aspx for one such discussion), where introducing a row goal via TOP or EXISTS causes the optimizer to make a poor decision that drastically slows down an otherwise acceptable query. 

    Without digging in much more deeply (looking at the exact data, definitions of the base tables, potential rewrites of the view, actual execution plans and such), the sort of workaround you're using now may be the best short-term fix.

    Here's some code using the sample data from the above article that illustrates all this (step through the queries and comments, don't just run it all in one batch):

    --Bart Duncan's sample data
    USE tempdb;
    GO

    IF OBJECT_ID('even') IS NOT NULL
        DROP TABLE even;
    IF OBJECT_ID('odd') IS NOT NULL
        DROP TABLE odd;
    GO

    CREATE TABLE even ( c1 INT, c2 CHAR(30) );
    CREATE TABLE odd ( c1 INT, c2 CHAR(30) );
    GO

    SET NOCOUNT ON;
    DECLARE @x INT;
    SET @x = 1;
    BEGIN TRAN;
    WHILE ( @x <= 10000 )
        BEGIN
            INSERT INTO even ( c1, c2 )
            VALUES ( @x * 2, '' );
            INSERT INTO odd ( c1, c2 )
            VALUES ( @x * 2 - 1, '' );
            IF @x % 1000 = 0
                BEGIN
                    RAISERROR ('Inserted %d rows...', 0, 1, @x) WITH NOWAIT;
                    COMMIT TRAN;
                    BEGIN TRAN;
                END;
            SET @x = @x + 1;
        END;
    WHILE @@TRANCOUNT > 0
        COMMIT TRAN;
    GO

    --This will complete quite speedily
    SELECT COUNT(*)
    FROM   even t1
           INNER JOIN odd t2 ON t1.c1 = t2.c1;

    --You'll want to go get a coffee, because this next one will take a while.
    --If you run this one with Include Actual Execution Plan enabled,
    --notice the difference between estimated/actual rows and executions
    --for the scan operators.
    IF ( SELECT COUNT(*)
         FROM   even t1
                INNER JOIN odd t2 ON t1.c1 = t2.c1
       ) > 0
        PRINT 'Aha!';

    --To illustrate the effect of the row goal on a Sort operator,
    --look at the estimated plans for the queries below.
    --Notice how the sorts in IF (SELECT COUNT...)>0
    --are estimated to reduce the row counts. That's the row
    --goal in action, affecting our estimates.
    --If you also run the queries below, you'll notice that
    --forcing SQL Server to use a different join type
    --brings the duration back down, since it avoids
    --the costly 10000 executions of one of the scans.
    --The estimates for the rows coming into the join
    --are still way off, but this plan's a bit
    --more resilient here.
    SELECT COUNT(*)
    FROM   even t1
           INNER MERGE JOIN odd t2 ON t1.c1 = t2.c1;

    IF ( SELECT COUNT(*)
         FROM   even t1
                INNER MERGE JOIN odd t2 ON t1.c1 = t2.c1
       ) > 0
        PRINT 'Aha!';

    --Just for fun, here's the slow query again, but
    --this time fooling SQL Server by using > 0.5.
    --It doesn't assume this is an EXISTS anymore.
    IF ( SELECT COUNT(*)
         FROM   even t1
                INNER JOIN odd t2 ON t1.c1 = t2.c1
       ) > 0.5
        PRINT 'Aha!';

    --Now let's see what happens if we help the optimizer
    --out by making its assumptions correct, by putting a
    --matching row where it will be quickly found.
    INSERT INTO odd ( c1, c2 ) VALUES ( 2, '' );

    IF ( SELECT COUNT(*)
         FROM   even t1
                INNER JOIN odd t2 ON t1.c1 = t2.c1
       ) > 0
        PRINT 'Aha!';

    --Clean up after ourselves
    DROP TABLE odd, even;

    Cheers!