Identifying SQL injection possibilities with Dapper and sp_executesql: is this stored procedure open to SQL injection?

  • Hi, can anyone offer input on the code below? Assuming the stored procedure is accessed via Dapper.NET (Dapper calling the SP with parameters), and given the use of "sp_executesql", is this still prone to SQL injection? If so, are there any recommendations for targeting different tables (based on a tenant ID that identifies the tenant's table)? In all honesty I am open to any type of discussion surrounding this, not just a solution. Thanks.

    CREATE PROCEDURE [tenant].[Add_InboundMessage]
        -- Add the parameters for the stored procedure here
        (@id uniqueidentifier = NULL,
        @sender            varchar(400) ,
        @content        varchar(max) = NULL,
        @comments        varchar(max) = NULL,
        @inNumberId        uniqueidentifier = NULL,
        @email            varchar(max) = NULL,
        @tenantId uniqueidentifier ,
        @creditCost decimal(10, 2),
        @forwardToEmailAddresses varchar(max),
        @autoResponseMessage varchar(400)
    )
    AS
    BEGIN
        -- SET NOCOUNT ON added to prevent extra result sets from
        -- interfering with SELECT statements.
        SET NOCOUNT ON;

        DECLARE @DynamicSQL nvarchar(1000);
      -- Insert statements for procedure here
        SET @DynamicSQL =
            CONCAT('insert into [tenant].[', @tenantId, '-InboundSMS] ',
            '(',
            '[id],',
            '[sender],',
            '[content],',
            '[comments],',
            '[inNumberId],',
            '[forwardToEmailAddresses],',
            '[autoResponseMessage],',
            '[email],',
            '[creditCost]',
            ') values (',
            '@id,',
            '@sender,',
            '@content,',
            '@comments,',
            '@inNumberId,',
            '@forwardToEmailAddresses,',
            '@autoResponseMessage,',
            '@email,',
            'isnull(@creditCost, 0)',
            ') OPTION (RECOMPILE)');
        -- dynamic SQL needs params.. also recompile to remove param sniffing.
        EXEC sp_executesql @DynamicSQL,
            N'@id uniqueidentifier,
            @sender varchar(400),
            @content varchar(max),
            @comments varchar(max),
            @inNumberId uniqueidentifier,
            @email varchar(max),
            @forwardToEmailAddresses varchar(max),
            @autoResponseMessage varchar(400),
            @creditCost decimal(10, 2)',
            @id = @id,
            @sender = @sender,
            @content = @content,
            @comments = @comments,
            @inNumberId = @inNumberId,
            @email = @email,
            @forwardToEmailAddresses = @forwardToEmailAddresses,
            @autoResponseMessage = @autoResponseMessage,
            @creditCost = @creditCost;
    END
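    Something like this QUOTENAME() variant is what I had in mind as an alternative (sketch only; the variable name and the elided column/value lists are illustrative):

    ```sql
    -- Sketch: QUOTENAME() escapes any ] inside the identifier, so the
    -- generated name cannot break out of the brackets. @tenantId is a
    -- uniqueidentifier, so the CONVERT cannot carry injected text anyway.
    DECLARE @tableName sysname =
        QUOTENAME(CONCAT(CONVERT(nvarchar(36), @tenantId), N'-InboundSMS'));

    SET @DynamicSQL = CONCAT(N'insert into [tenant].', @tableName,
                             N' ( ... ) values ( ... )');
    ```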

  • lazynewt - Friday, February 22, 2019 6:20 AM


    I would say that it's not open to SQL injection, because the only thing that you're concatenating is a UNIQUEIDENTIFIER. That said, I would make sure that the object exists with a simple check on whether this returns NULL or not.

    OBJECT_ID(CONCAT('[tenant].[' , @tenantId , '-InboundSMS]'))
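    Something like this, for example (the error message and variable names are just illustrative):

    ```sql
    -- Guard sketch: bail out before building any dynamic SQL if the
    -- tenant-specific table does not exist.
    DECLARE @tenantStr nvarchar(36) = CONVERT(nvarchar(36), @tenantId);

    IF OBJECT_ID(CONCAT('[tenant].[', @tenantStr, '-InboundSMS]')) IS NULL
    BEGIN
        RAISERROR('No inbound table found for tenant %s.', 16, 1, @tenantStr);
        RETURN;
    END;
    ```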

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • By the way, an INSERT INTO ... VALUES won't have problems with parameter sniffing, so you don't need to use RECOMPILE.
    Have you considered not having lots of tables with uniqueidentifiers as part of the name? That could get messy really fast. The common design is to have a column with the identifier and simply use that in queries, which avoids a huge amount of dynamic code.
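    A rough sketch of what I mean (columns trimmed, names assumed from your procedure):

    ```sql
    -- One shared table keyed by tenantId instead of one table per tenant.
    CREATE TABLE [tenant].[InboundSMS]
    (
        id         uniqueidentifier NOT NULL CONSTRAINT PK_InboundSMS PRIMARY KEY,
        tenantId   uniqueidentifier NOT NULL,
        sender     varchar(400)     NOT NULL,
        content    varchar(max)     NULL,
        creditCost decimal(10, 2)   NOT NULL CONSTRAINT DF_InboundSMS_creditCost DEFAULT (0)
    );

    -- The insert becomes static SQL: no dynamic table name, no injection surface.
    INSERT INTO [tenant].[InboundSMS] (id, tenantId, sender, content, creditCost)
    VALUES (@id, @tenantId, @sender, @content, ISNULL(@creditCost, 0));
    ```

    At very large row counts, table partitioning (on tenantId or a date column) is the usual alternative to per-tenant tables.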

  • Thanks Luis. Appreciate the feedback, and noted regarding the RECOMPILE. I was under the impression that a recompile was advised to avoid potential performance issues when passing params into sp_executesql.

    This is a multi-tenant application, so the GUIDs are relevant to the tenants. They are all segmented in the [tenant] schema to avoid any messiness. Other than these 3 tenant-specific transaction tables, the rest of the db uses GUIDs as tenantId columns, as you suggest. I am constantly back and forth on switching to a db-per-tenant scenario, which would increase overall development time/effort (trying to avoid that at the moment).

    The reason for multiple tables (3 per tenant) is that this will potentially reach the billion-record mark (logged records) and beyond across all tenants relatively quickly (months). Segmenting would most likely reduce this to millions per table over the next year or two, giving me time/resource to alleviate any potential table locking/blocking on high-rate inserts. There is also scope for a heavy write requirement on the inbound tables. I need to avoid any table locks/blocks/delays if I get a big number of concurrent inserts (though I will be looking at a caching solution to feed transactions in, or NoSQL, but just not yet). I realise that db throughput will be the bottleneck should I use multiple tenant tables, and am confident that I will have additional dev resources once I reach that point. This is just a case of "build it so it can scale as well as possible in the short term, while ensuring that we do not need to alter the main structure of the database any time soon".

    This is relatively new ground for me, with such a high concurrent transaction rate coming in, and I am just trying to ensure that this is not an issue. Although I have reasonable experience with SQL/.NET/JS/SaaS, I am going off my own research/experience, and as a self-taught dev, other people's experience/suggestions are always appreciated.

    Also, I agree that checking existence is a good idea, but I was wondering about any slight performance reduction in doing so, even millisecond-wise. Can you confirm, then, that as long as we pass params to sp_executesql, SQL injection should not be much of a concern? Also, would you advise the use of QUOTENAME()?

    Thanks

  • lazynewt - Friday, February 22, 2019 7:58 AM


    I do use OPTION (RECOMPILE) if each run of a procedure produces a new table name in the query. My theory is that if this isn't done, the plan cache will accumulate all those unique incantations of query text if the server isn't set up to optimize for "ad hoc queries".
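    For reference, the server setting alluded to can be enabled like this (standard sp_configure options; it is instance-wide, so test before enabling in production):

    ```sql
    -- Cache only a small plan stub the first time an ad hoc statement runs,
    -- so one-off query texts don't bloat the plan cache.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;
    ```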
