Execution Times vs Logical Reads/Scan Count

  • Hello everyone,

    I'd like to hear from your experience which of these is better.

    For example, I have a procedure with a lot of logical reads, which I then modified.

    Before

    Table 'Worktable'. Scan count 2, logical reads 1477,

    Table 'TB_MASTER_MFD01'. Scan count 2, logical reads 40

    Table 'TB_REINPUT'. Scan count 2, logical reads 114,

    Table 'TB_WORK_ORDER_MFD01'. Scan count 2, logical reads 951,

    Table 'TB_MASTER_MFD01'. Scan count 2, logical reads 39

    Table 'TB_REINPUT'. Scan count 1, logical reads 33

    Table 'TB_WORK_ORDER_MFD01'. Scan count 1, logical reads 948,

    SQL Server Execution Times:

    CPU time = 31 ms, elapsed time = 43 ms.

    After optimization it was better:

    Table 'Worktable'. Scan count 2, logical reads 876,

    Table 'TB_MASTER_MFD01'. Scan count 2, logical reads 26

    Table 'TB_REINPUT'. Scan count 2, logical reads 37,

    Table 'TB_WORK_ORDER_MFD01'. Scan count 2, logical reads 19

    Table 'TB_MASTER_MFD01'. Scan count 2, logical reads 24,

    Table 'TB_REINPUT'. Scan count 1, logical reads 34,

    Table 'TB_WORK_ORDER_MFD01'. Scan count 1, logical reads 14,

    SQL Server Execution Times:

    CPU time = 0 ms, elapsed time = 8 ms.

    But I am still investigating the worktable, because there is a nested select inside (SELECT * FROM (SELECT ...)), so for that query I used OPTION(RECOMPILE), and the result is here:

    Table 'TB_MASTER_MFD01'. Scan count 7, logical reads 84,

    Table 'TB_REINPUT'. Scan count 7, logical reads 14,

    Table 'TB_WORK_ORDER_MFD01'. Scan count 2, logical reads 26,

    Table 'TB_MASTER_MFD01'. Scan count 2, logical reads 25,

    Table 'TB_REINPUT'. Scan count 1, logical reads 34,

    Table 'TB_WORK_ORDER_MFD01'. Scan count 1, logical reads 15,

    SQL Server Execution Times:

    CPU time = 78 ms, elapsed time = 79 ms.

    This procedure runs on a 1-second timer.

    So my point is: which is better from another point of view? I would prefer a longer execution time (because I think it is still a short time) over a lot of logical reads. What do you think?

    Thank you for sharing your experience and for the discussion.
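    For anyone who wants to reproduce measurements like the ones above, they come from the session-scoped STATISTICS settings. A minimal sketch; the parameter values here are placeholders, not real line/station codes:

    ```sql
    -- Both settings apply only to the current session.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    EXEC dbo.UP_CS_MAT_DISPLAY_LIST_MFD01
         @P_LINE_CODE  = 'L01',    -- placeholder values for illustration
         @P_STATION_ID = 'ST01';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```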



    How to post data/code on a forum to get the best help: Option 1 / Option 2

  • Not seeing the query, I can't say what's going on, but, on a guess, you're probably seeing some sort of bad parameter sniffing issue where the plan you generated after tuning is different than the one generated using the RECOMPILE option. The added time you're seeing is more than likely the time it takes to recompile. So... are you more likely to be IO bound or CPU bound? That's the trade-off you're presented with. RECOMPILE is creating a new plan, each time, possibly a better plan (as shown by the reads improvement), but it comes at the cost of additional CPU power, each and every time the query is called.

    Which is better? It depends on your situation and the load on the system.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Grant Fritchey (10/30/2013)


    Not seeing the query, I can't say what's going on, but, on a guess, you're probably seeing some sort of bad parameter sniffing issue where the plan you generated after tuning is different than the one generated using the RECOMPILE option. The added time you're seeing is more than likely the time it takes to recompile. So... are you more likely to be IO bound or CPU bound? That's the trade-off you're presented with. RECOMPILE is creating a new plan, each time, possibly a better plan (as shown by the reads improvement), but it comes at the cost of additional CPU power, each and every time the query is called.

    Which is better? It depends on your situation and the load on the system.

    I know what RECOMPILE does.

    Yes, of course there is a problem with parameters; with this change I also got the logical reads down to smaller numbers.

    And we have plenty of memory and no problem with CPU, so for us that trade-off should be better.

    The problem is that there are a lot of procedures I am only now optimizing; they were written a few years ago, and almost all of them are very poor.

    ALTER PROCEDURE [dbo].[UP_CS_MAT_DISPLAY_LIST_MFD01]

    @P_LINE_CODE VARCHAR(5)

    , @P_STATION_ID VARCHAR(8)

    AS

    BEGIN

    SET NOCOUNT ON

    DECLARE @ORDER_DATE VARCHAR(8) -- ORDER DATE

    DECLARE @COMMIT_NO VARCHAR(4) -- COMMIT NO

    DECLARE @LAST_ORDER_DATE VARCHAR(8) -- LAST ORDER DATE

    DECLARE @LAST_COMMIT_NO VARCHAR(4) -- LAST COMMIT NO

    DECLARE @NEXT_ORDER_DATE VARCHAR(8) -- NEXT ORDER DATE

    DECLARE @NEXT_COMMIT_NO VARCHAR(4) -- NEXT COMMIT NO

    SELECT @ORDER_DATE = ORDER_DATE, @COMMIT_NO = COMMIT_NO, @LAST_ORDER_DATE = LAST_ORDER_DATE, @LAST_COMMIT_NO = LAST_COMMIT_NO

    FROM TB_TRACKING_MFD01 (NOLOCK) WHERE LINE_CODE = @P_LINE_CODE AND STATION_ID = @P_STATION_ID

    SET @ORDER_DATE = ISNULL(@ORDER_DATE,'')

    SET @COMMIT_NO = ISNULL(@COMMIT_NO,'')

    SET @LAST_ORDER_DATE = ISNULL(@LAST_ORDER_DATE,'')

    SET @LAST_COMMIT_NO = ISNULL(@LAST_COMMIT_NO,'')

    --SELECT @ORDER_DATE, @COMMIT_NO, @LAST_ORDER_DATE, @LAST_COMMIT_NO

    --WORK ORDER WORK LIST SEARCH

    IF @ORDER_DATE = '' AND @COMMIT_NO = ''

    BEGIN

    --WORK ORDER SEARCH

    SELECT * FROM (

    SELECT TOP 2 ORDER_DATE, COMMIT_NO, BODY_NO, CAR_CODE, PROD_FLAG "@@PROD_FLAG" FROM TB_WORK_ORDER_MFD01 (NOLOCK)

    WHERE ORDER_DATE<=@LAST_ORDER_DATE AND ORDER_DATE + COMMIT_NO <= @LAST_ORDER_DATE + @LAST_COMMIT_NO AND DATA_TYPE <> 'SD'

    AND ORDER_DATE+COMMIT_NO NOT IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_REINPUT (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID)

    AND ORDER_DATE+COMMIT_NO IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_MASTER_MFD01 (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID )

    ORDER BY ORDER_DATE DESC, COMMIT_NO DESC

    ) T

    UNION ALL

    SELECT * FROM (

    SELECT TOP 5 ORDER_DATE, COMMIT_NO, BODY_NO, CAR_CODE, PROD_FLAG "@@PROD_FLAG" FROM TB_WORK_ORDER_MFD01 (NOLOCK)

    WHERE ORDER_DATE>=@LAST_ORDER_DATE AND ORDER_DATE + COMMIT_NO > @LAST_ORDER_DATE + @LAST_COMMIT_NO AND DATA_TYPE <> 'SD'

    AND ORDER_DATE+COMMIT_NO NOT IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_REINPUT (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID)

    AND ORDER_DATE+COMMIT_NO IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_MASTER_MFD01 (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID )

    ORDER BY ORDER_DATE, COMMIT_NO

    ) A

    ORDER BY ORDER_DATE, COMMIT_NO OPTION(RECOMPILE)

    SELECT TOP 1 @NEXT_ORDER_DATE = ORDER_DATE, @NEXT_COMMIT_NO = COMMIT_NO FROM TB_WORK_ORDER_MFD01 (NOLOCK)

    WHERE ORDER_DATE>=@LAST_ORDER_DATE AND ORDER_DATE + COMMIT_NO > @LAST_ORDER_DATE + @LAST_COMMIT_NO AND DATA_TYPE <> 'SD'

    AND ORDER_DATE+COMMIT_NO NOT IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_REINPUT (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID)

    AND ORDER_DATE+COMMIT_NO IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_MASTER_MFD01 (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID )

    ORDER BY ORDER_DATE, COMMIT_NO

    SET @NEXT_ORDER_DATE = ISNULL(@NEXT_ORDER_DATE,'')

    SET @NEXT_COMMIT_NO = ISNULL(@NEXT_COMMIT_NO,'')

    SELECT @NEXT_ORDER_DATE "ORDER_DATE", @NEXT_COMMIT_NO "COMMIT_NO", 'B' "WORK_STATUS"

    FROM TB_TRACKING_MFD01 WHERE LINE_CODE = @P_LINE_CODE AND STATION_ID = @P_STATION_ID

    --WORK LIST SEARCH(MASTER)

    SELECT MT.ITEM_CODE "ITEM_CODE"

    , CI.ITEM_NAME "ITEM_NAME"

    , MT.WORK_SEQ "WORK_SEQ"

    , MT.WORK_POS "WORK_POS"

    , CASE WHEN ISNULL(MT.OPTION_VALUE,'') = '' THEN MT.ALC_CODE ELSE MT.OPTION_VALUE END "ALC_CODE"

    , '' "@@WORK_VALUE"

    , '' "@@WORK_RESULT"

    FROM TB_MASTER_MFD01 MT (NOLOCK)

    LEFT JOIN TB_CODE_ITEM CI (NOLOCK)

    ON MT.ITEM_CODE = CI.ITEM_CODE

    WHERE MT.LINE_CODE= @P_LINE_CODE AND MT.STATION_ID = @P_STATION_ID AND ORDER_DATE = @NEXT_ORDER_DATE AND COMMIT_NO = @NEXT_COMMIT_NO

    ORDER BY MT.WORK_SEQ, MT.WORK_TYPE

    END

    ELSE

    BEGIN

    --WORK ORDER SEARCH

    SELECT * FROM (

    SELECT TOP 2 ORDER_DATE, COMMIT_NO, BODY_NO, CAR_CODE, PROD_FLAG "@@PROD_FLAG" FROM TB_WORK_ORDER_MFD01 (NOLOCK)

    WHERE ORDER_DATE<=@LAST_ORDER_DATE AND ORDER_DATE + COMMIT_NO <= @LAST_ORDER_DATE + @LAST_COMMIT_NO AND DATA_TYPE <> 'SD'

    AND ORDER_DATE+COMMIT_NO NOT IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_REINPUT (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID)

    AND ORDER_DATE+COMMIT_NO IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_MASTER_MFD01 (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID )

    ORDER BY ORDER_DATE DESC, COMMIT_NO DESC

    ) A

    UNION ALL

    SELECT * FROM (

    SELECT TOP 5 ORDER_DATE, COMMIT_NO, BODY_NO, CAR_CODE, PROD_FLAG "@@PROD_FLAG" FROM TB_WORK_ORDER_MFD01 (NOLOCK)

    WHERE ORDER_DATE>=@LAST_ORDER_DATE AND ORDER_DATE + COMMIT_NO >= @ORDER_DATE + @COMMIT_NO AND DATA_TYPE <> 'SD'

    AND ORDER_DATE+COMMIT_NO NOT IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_REINPUT (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID)

    AND ORDER_DATE+COMMIT_NO IN (SELECT ORDER_DATE+COMMIT_NO FROM TB_MASTER_MFD01 (NOLOCK) WHERE ORDER_DATE>=@LAST_ORDER_DATE AND STATION_ID=@P_STATION_ID )

    ORDER BY ORDER_DATE, COMMIT_NO

    ) B

    ORDER BY ORDER_DATE, COMMIT_NO

    SELECT ORDER_DATE, COMMIT_NO, CASE WHEN WORK_COMPLETE = 'C' THEN 'G' ELSE 'Y' END "WORK_STATUS"

    FROM TB_TRACKING_MFD01 WHERE LINE_CODE = @P_LINE_CODE AND STATION_ID = @P_STATION_ID

    --WORK LIST SEARCH(MASTER)

    SELECT TRD.ITEM_CODE "ITEM_CODE"

    , CI.ITEM_NAME "ITEM_NAME"

    , TRD.WORK_SEQ "WORK_SEQ"

    , TRD.WORK_POS "WORK_POS"

    , CASE WHEN ISNULL(MT.OPTION_VALUE,'') = '' THEN MT.ALC_CODE ELSE MT.OPTION_VALUE END "ALC_CODE"

    , TRD.WORK_VALUE "@@WORK_VALUE"

    , CASE WHEN ISNULL(WORK_RESULT,'') = '' THEN 'NONE' ELSE TRD.WORK_RESULT END "@@WORK_RESULT"

    FROM TB_TRACKINGDTL_MFD01 TRD (NOLOCK) LEFT JOIN TB_MASTER_MFD01 MT (NOLOCK)

    ON MT.ORDER_DATE = @ORDER_DATE AND MT.COMMIT_NO = @COMMIT_NO AND MT.LINE_CODE = @P_LINE_CODE AND MT.STATION_ID = @P_STATION_ID AND MT.WORK_SEQ = TRD.WORK_SEQ

    LEFT JOIN TB_CODE_ITEM CI (NOLOCK)

    ON TRD.ITEM_CODE = CI.ITEM_CODE

    WHERE TRD.STATION_ID = @P_STATION_ID

    ORDER BY TRD.WORK_SEQ, TRD.WORK_TYPE

    END

    SET NOCOUNT OFF

    END




  • Lots of NOLOCK in there. That's scary.

    One thing you can do that will immediately help performance, whether you keep the recompile in place or not. The IF statement puts the query down two different paths. When you run the query, you're going to get both paths compiled at the same time (and every time if you put RECOMPILE on it). I would suggest keeping this query as a wrapper and then put each of the individual paths within the IF statement into separate procedures that you then call from this query. That will allow each of them to either generate a specific plan when called, or, if you still want to keep the RECOMPILE in place, each will only be recompiled as called.
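    A sketch of that wrapper pattern, assuming the two branches are moved into new sub-procedures (the _EMPTY/_ACTIVE names are made up for illustration):

    ```sql
    -- Thin wrapper: each IF branch lives in its own procedure, so each
    -- branch compiles (or recompiles) only when that path is executed.
    ALTER PROCEDURE dbo.UP_CS_MAT_DISPLAY_LIST_MFD01
        @P_LINE_CODE  VARCHAR(5),
        @P_STATION_ID VARCHAR(8)
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @ORDER_DATE VARCHAR(8), @COMMIT_NO VARCHAR(4);

        SELECT @ORDER_DATE = ISNULL(ORDER_DATE, ''),
               @COMMIT_NO  = ISNULL(COMMIT_NO, '')
        FROM dbo.TB_TRACKING_MFD01
        WHERE LINE_CODE = @P_LINE_CODE
          AND STATION_ID = @P_STATION_ID;

        IF @ORDER_DATE = '' AND @COMMIT_NO = ''
            EXEC dbo.UP_CS_MAT_DISPLAY_LIST_EMPTY  @P_LINE_CODE, @P_STATION_ID;
        ELSE
            EXEC dbo.UP_CS_MAT_DISPLAY_LIST_ACTIVE @P_LINE_CODE, @P_STATION_ID;
    END
    ```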

    Other than that, I don't see any other immediately obvious tuning opportunities. I'd need to see the execution plans, more specifically the actual plans, to make any other suggestions.

    And I would strongly recommend getting rid of the NOLOCK hints all over the place. Heck, even if you're willing to put up with the potential for extra rows or missing rows, just set the transaction isolation level to READ UNCOMMITTED. It'll make your code cleaner and it'll make it much easier to back out if you start to see bad data due to the lack of locking.
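    A sketch of what that looks like, with the procedure body abbreviated for illustration:

    ```sql
    ALTER PROCEDURE dbo.UP_CS_MAT_DISPLAY_LIST_MFD01
        @P_LINE_CODE  VARCHAR(5),
        @P_STATION_ID VARCHAR(8)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- One statement instead of a (NOLOCK) hint on every table reference.
        -- Carries the same dirty-read risks as NOLOCK, but it is one line
        -- to remove if bad data starts showing up.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        SELECT ORDER_DATE, COMMIT_NO     -- body otherwise unchanged,
        FROM dbo.TB_TRACKING_MFD01       -- with all (NOLOCK) hints removed
        WHERE LINE_CODE = @P_LINE_CODE
          AND STATION_ID = @P_STATION_ID;
    END
    ```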

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Don't look at scan count. It is not the number of times a table has been scanned, nor is it consistently the number of times a table has been accessed.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Grant Fritchey (10/30/2013)


    Lots of NOLOCK in there. That's scary.

    One thing you can do that will immediately help performance, whether you keep the recompile in place or not. The IF statement puts the query down two different paths. When you run the query, you're going to get both paths compiled at the same time (and every time if you put RECOMPILE on it). I would suggest keeping this query as a wrapper and then put each of the individual paths within the IF statement into separate procedures that you then call from this query. That will allow each of them to either generate a specific plan when called, or, if you still want to keep the RECOMPILE in place, each will only be recompiled as called.

    Other than that, I don't see any other immediately obvious tuning opportunities. I'd need to see the execution plans, more specifically the actual plans, to make any other suggestions.

    And I would strongly recommend getting rid of the NOLOCK hints all over the place. Heck, even if you're willing to put up with the potential for extra rows or missing rows, just set the transaction isolation level to READ UNCOMMITTED. It'll make your code cleaner and it'll make it much easier to back out if you start to see bad data due to the lack of locking.

    I know about the problem with NOLOCK, but so far I haven't seen it happen. The NOLOCK is there because these tables get a lot of selects, for example 200, while at the same time perhaps 300 inserts are running; with that many selects and inserts I think it is OK for now. Like I said, I haven't seen a problem with NOLOCK yet.

    The execution plans from inside the procedure are in the attachment. Thanks for the response.




  • tony28 (10/30/2013)


    Like I said, I haven't seen a problem with NOLOCK yet.

    Doesn't mean it didn't happen. If you have locking problems, consider one of the snapshot isolation levels. Readers don't block writers and you don't have the intermittent incorrect results problem.
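    For example, read-committed snapshot isolation is a database-level switch; the database name below is a placeholder:

    ```sql
    -- Readers get the last committed version of a row instead of blocking
    -- on writers. WITH ROLLBACK IMMEDIATE kills open transactions, so
    -- schedule this for a quiet period; row versioning also adds load
    -- to tempdb, which should be tested first on a busy system.
    ALTER DATABASE YourDatabase
        SET READ_COMMITTED_SNAPSHOT ON
        WITH ROLLBACK IMMEDIATE;
    ```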

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (10/30/2013)


    tony28 (10/30/2013)


    Like I said, I haven't seen a problem with NOLOCK yet.

    Doesn't mean it didn't happen. If you have locking problems, consider one of the snapshot isolation levels. Readers don't block writers and you don't have the intermittent incorrect results problem.

    Yes, without dirty reads. I will have to read a few posts to refresh my knowledge on these things. Thanks.




  • Your second query, the UNION, is getting a timeout from the optimizer. That means the plan is likely to be suboptimal. I'd suggest looking at ways to simplify the query. All those nested sub-selects within the queries with NOT IN might be better done as derived tables in an outer join looking for NULL values, but you'd need to test it to be sure. You also have an index scan in that query, but it looks like a pretty light part of the overall estimated cost, so it may not be anything to worry about. You also have some slight disparities on estimated rows, 7000+ in some cases where 900 are returned. Even taking into account the estimated number of executions (2), that still puts the number of rows off by ~ double. I don't think that's indicative of out of date or inaccurate statistics, but it might not hurt to look.
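    For example, the TB_REINPUT NOT IN sub-select could be tried as an outer join tested for NULL. Note the original compares the concatenation ORDER_DATE+COMMIT_NO, while this joins the columns separately, which also avoids false matches from concatenated values:

    ```sql
    -- Anti-join form of the NOT IN sub-select against TB_REINPUT.
    -- Equivalent here only if ORDER_DATE and COMMIT_NO are non-nullable;
    -- with NULLs, NOT IN behaves differently. Test both forms.
    SELECT TOP 5 W.ORDER_DATE, W.COMMIT_NO, W.BODY_NO, W.CAR_CODE, W.PROD_FLAG
    FROM dbo.TB_WORK_ORDER_MFD01 AS W
    LEFT JOIN dbo.TB_REINPUT AS R
           ON  R.ORDER_DATE = W.ORDER_DATE
           AND R.COMMIT_NO  = W.COMMIT_NO
           AND R.STATION_ID = @P_STATION_ID
    WHERE W.ORDER_DATE >= @LAST_ORDER_DATE
      AND W.DATA_TYPE <> 'SD'
      AND R.ORDER_DATE IS NULL      -- keep rows with no TB_REINPUT match
    ORDER BY W.ORDER_DATE, W.COMMIT_NO;
    ```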

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Grant Fritchey (10/30/2013)


    Your second query, the UNION, is getting a timeout from the optimizer. That means the plan is likely to be suboptimal. I'd suggest looking at ways to simplify the query. All those nested sub-selects within the queries with NOT IN might be better done as derived tables in an outer join looking for NULL values, but you'd need to test it to be sure. You also have an index scan in that query, but it looks like a pretty light part of the overall estimated cost, so it may not be anything to worry about. You also have some slight disparities on estimated rows, 7000+ in some cases where 900 are returned. Even taking into account the estimated number of executions (2), that still puts the number of rows off by ~ double. I don't think that's indicative of out of date or inaccurate statistics, but it might not hurt to look.

    Hello, thanks for the reply.

    The UNION is there because of sequential feeding: rows 1-2 are before (done), row 3 is in progress at the position, and rows 4-7 will arrive. I will try to think of a slightly different solution; I tried before, but now I have a bit more experience, so maybe it will be possible.

    I think we can rule out a statistics problem, because I update all statistics every Saturday. But maybe the old execution plan was created when this table had a lot of data, and now I have optimized it with the PK behind the WHERE clause, etc. I will also try deleting the old execution plan.

    201310301300

    201310301301

    201310301302

    201310301303

    201310301304

    201310301305

    201310301306




  • GilaMonster (10/30/2013)


    Don't look at scan count. It is not the number of times a table has been scanned, nor is it consistently the number of times a table has been accessed.

    Does that mean, for example,

    if I have scan count 1 and 100 logical reads,

    and scan count 2 and 50 logical reads,

    that they are the same, or are the second statistics better? And the 50 is the final value, not 50 x 2, yes?




  • tony28 (10/30/2013)


    GilaMonster (10/30/2013)


    Don't look at scan count. It is not the number of times a table has been scanned, nor is it consistently the number of times a table has been accessed.

    Does that mean, for example,

    if I have scan count 1 and 100 logical reads,

    and scan count 2 and 50 logical reads,

    that they are the same, or are the second statistics better? And the 50 is the final value, not 50 x 2, yes?

    Query 1 has 100 logical reads, query 2 has 50 logical reads. Ignore the scan count. If you're just looking at logical reads and nothing else (time, CPU), then query 2 does half the reads that query 1 does.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (10/31/2013)


    tony28 (10/30/2013)


    GilaMonster (10/30/2013)


    Don't look at scan count. It is not the number of times a table has been scanned, nor is it consistently the number of times a table has been accessed.

    Does that mean, for example,

    if I have scan count 1 and 100 logical reads,

    and scan count 2 and 50 logical reads,

    that they are the same, or are the second statistics better? And the 50 is the final value, not 50 x 2, yes?

    Query 1 has 100 logical reads, query 2 has 50 logical reads. Ignore the scan count. If you're just looking at logical reads and nothing else (time, CPU), then query 2 does half the reads that query 1 does.

    OK, one last question for my understanding: it means the query loaded the data into memory and then just reads within memory, so it doesn't matter much; maybe it takes a little more time, but for me that is not important, right?




  • Logical reads are reads done from memory. Not necessarily scans.

    Slower queries aren't important to you? Really?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (10/31/2013)


    Logical reads are reads done from memory. Not necessarily scans.

    Slower queries isn't important to you? Really?

    Yes, thanks.

    Slower queries are important, but I think CPU time = 78 ms, elapsed time = 79 ms is OK, no? Or do you think that is a problem?

    I also read up on the snapshot isolation solution (I will go through some examples later). I am not sure about it: we have a lot of procedures, a few programs run ad hoc queries or I don't have their source code, and I am not the only one with access to the DB. I try to optimize the most frequently run queries (logical reads etc.), but it is risky, because we run 24 hours a day, 5 days a week.

    Right now the RAM is nearly full, 30 GB used out of 32 GB, but in the coming months we will increase it to 64 or 128 GB, depending on a few issues. You know, I work for a Korean company, and buying anything takes a looong time... a very long time.



