Is there a safe way to use SqlBulkCopy across multiple threads?

  • We've introduced SqlBulkCopy in a batch job that executes across multiple threads, but we're now getting intermittent deadlocks.

    - We're not using the table lock option (a sketch of this option appears after the table definition below).
    - The table being inserted has a clustered index.
    - Each thread represents a 'compute + persist' component of the job; it completes whenever its work is done and inserts the data for a unique PortfolioSnapshotId (with a batch of AssetIds).

    Is there any way we can safely use SqlBulkCopy in this scenario? Thanks

    Deadlock graph XML below:

      <event name="xml_deadlock_report" package="sqlserver" timestamp="2017-12-06T02:38:31.382Z">
      <data name="xml_report">
       <type name="xml" package="package0" />
       <value>
        <deadlock>
        <victim-list>
         <victimProcess id="process248e1c96108" />
        </victim-list>
        <process-list>
         <process id="process248e1c96108" taskpriority="0" logused="63904" waitresource="PAGE: 14:1:289932632 " waittime="4523" ownerId="2409980792" transactionname="user_transaction" lasttranstarted="2017-12-06T02:38:26.657" XDES="0x2453d39ce58" lockMode="IX" schedulerid="14" kpid="24604" status="suspended" spid="151" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2017-12-06T02:38:26.657" lastbatchcompleted="2017-12-06T02:38:26.657" lastattention="2017-12-06T02:38:23.760" clientapp="XXXX" hostname="XXXX" hostpid="137000" loginname="XXXX" isolationlevel="read committed (2)" xactid="2409980792" currentdb="14" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
          <executionStack>
          <frame procname="adhoc" line="1" stmtend="1206" sqlhandle="0x020000000f2bea1e535f1cbf04ef8318984df8965399be020000000000000000000000000000000000000000">
      unknown  </frame>
          </executionStack>
          <inputbuf>
      insert bulk TableName ([AssetId] NVarChar(255) COLLATE Latin1_General_CI_AS, [ParentAssetId] NVarChar(255) COLLATE Latin1_General_CI_AS, [PortfolioSnapshotId] UniqueIdentifier, [Quantity] Decimal(28,8), [CompressedProperties] VarBinary(max)) with (CHECK_CONSTRAINTS) </inputbuf>
         </process>
         <process id="process248e1c8cca8" taskpriority="0" logused="67244" waitresource="PAGE: 14:1:289933280 " waittime="4523" ownerId="2409980337" transactionname="user_transaction" lasttranstarted="2017-12-06T02:38:26.220" XDES="0x25e82396e58" lockMode="IX" schedulerid="2" kpid="33292" status="suspended" spid="158" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2017-12-06T02:38:26.220" lastbatchcompleted="2017-12-06T02:38:26.220" lastattention="2017-12-06T02:38:24.507" clientapp="XXXX" hostname="XXXX" hostpid="137000" loginname="XXXX" isolationlevel="read committed (2)" xactid="2409980337" currentdb="14" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
          <executionStack>
          <frame procname="adhoc" line="1" stmtend="1206" sqlhandle="0x020000000f2bea1e535f1cbf04ef8318984df8965399be020000000000000000000000000000000000000000">
      unknown  </frame>
          </executionStack>
          <inputbuf>
      insert bulk TableName ([AssetId] NVarChar(255) COLLATE Latin1_General_CI_AS, [ParentAssetId] NVarChar(255) COLLATE Latin1_General_CI_AS, [PortfolioSnapshotId] UniqueIdentifier, [Quantity] Decimal(28,8), [CompressedProperties] VarBinary(max)) with (CHECK_CONSTRAINTS) </inputbuf>
         </process>
        </process-list>
        <resource-list>
         <pagelock fileid="1" pageid="289932632" dbid="14" subresource="FULL" objectname="XXXX.dbo.TableName" id="lock247f3004d00" mode="X" associatedObjectId="72057594048872448">
          <owner-list>
          <owner id="process248e1c8cca8" mode="X" />
          </owner-list>
          <waiter-list>
          <waiter id="process248e1c96108" mode="IX" requestType="wait" />
          </waiter-list>
         </pagelock>
         <pagelock fileid="1" pageid="289933280" dbid="14" subresource="FULL" objectname="XXXX.dbo.TableName" id="lock247f4f97c80" mode="X" associatedObjectId="72057594045661184">
          <owner-list>
          <owner id="process248e1c96108" mode="X" />
          </owner-list>
          <waiter-list>
          <waiter id="process248e1c8cca8" mode="IX" requestType="wait" />
          </waiter-list>
         </pagelock>
        </resource-list>
        </deadlock>
       </value>
      </data>
      </event>

    Table:

      CREATE TABLE [dbo].[TableName](
      [AssetId] [nvarchar](255) NOT NULL,
      [ParentAssetId] [nvarchar](255) NULL,
      [PortfolioSnapshotId] [uniqueidentifier] NOT NULL,
      [Quantity] [decimal](28, 8) NULL,
      [CompressedProperties] [varbinary](max) NULL,
      CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED
      (
      [PortfolioSnapshotId] ASC,
      [AssetId] ASC
      )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
      GO

      ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_TableName_PortfolioSnapshot] FOREIGN KEY([PortfolioSnapshotId])
      REFERENCES [dbo].[PortfolioStore_PortfolioSnapshot] ([Id])

      CREATE NONCLUSTERED INDEX [IX_TableName_PortfolioSnapshotId] ON [dbo].[TableName]
      (
      [PortfolioSnapshotId] ASC
      )
      INCLUDE ( [AssetId],
      [ParentAssetId],
      [Quantity],
      [Price]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      GO
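
    For reference, the 'table lock option' mentioned in the first bullet is the SqlBulkCopyOptions.TableLock flag on SqlBulkCopy. A minimal per-thread sketch with that flag enabled follows; the connection string, the rows DataTable, and the class/method names are placeholders rather than code from the actual job, and CheckConstraints is included only because the captured inputbuf shows the load running WITH (CHECK_CONSTRAINTS):

      // Sketch only: connection string, DataTable and names are placeholders.
      // With TableLock, each bulk load takes a table-level lock for its duration,
      // so concurrent loads into dbo.TableName queue up rather than competing
      // for page locks (at the cost of concurrency).
      using System.Data;
      using System.Data.SqlClient;

      static class SnapshotBulkLoader
      {
          public static void BulkInsertSnapshot(DataTable rows, string connectionString)
          {
              using (var bulkCopy = new SqlBulkCopy(
                  connectionString,
                  SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.CheckConstraints))
              {
                  bulkCopy.DestinationTableName = "dbo.TableName";
                  bulkCopy.WriteToServer(rows);
              }
          }
      }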

  • We generally adopt an ETL approach and copy our data into 'worktables' created by our multithreaded app. Each thread creates a worktable (named with a GUID) in a work database, the data are bulk copied into that worktable, and a procedure that loads the worktable into the target table is then executed to finish the load. Worktables are dropped at the end of the load. The load procedure also gives you somewhere to code against deadlock situations.
    Obviously this is a big change in method and performance from loading directly into the target table, but we don't encounter any deadlocks this way, and we often have 15 threads running at a time. (A sketch of this pattern appears below.)

    Regards
    Dave
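
    A minimal sketch of the worktable pattern Dave describes, assuming a dbo.LoadWorkTable procedure that copies rows from the named worktable into dbo.TableName; the procedure, its parameter name, and the worktable column list are illustrative assumptions, not part of the original post:

      using System;
      using System.Data;
      using System.Data.SqlClient;

      static class WorkTableLoader
      {
          // Sketch only: dbo.LoadWorkTable and the column list are hypothetical.
          // Each thread: 1) creates a GUID-named worktable, 2) bulk copies into it,
          // 3) runs a load procedure that moves the rows into the target table,
          // 4) drops the worktable. (Dave creates the worktables in a separate
          // work database; the current database is used here for simplicity.)
          public static void LoadViaWorkTable(DataTable rows, string connectionString)
          {
              string workTable = "[dbo].[Work_" + Guid.NewGuid().ToString("N") + "]";

              using (var connection = new SqlConnection(connectionString))
              {
                  connection.Open();

                  // 1) Create the per-thread worktable (same shape as the target table).
                  using (var create = new SqlCommand(
                      "CREATE TABLE " + workTable + " (" +
                      "[AssetId] nvarchar(255) NOT NULL, " +
                      "[ParentAssetId] nvarchar(255) NULL, " +
                      "[PortfolioSnapshotId] uniqueidentifier NOT NULL, " +
                      "[Quantity] decimal(28,8) NULL, " +
                      "[CompressedProperties] varbinary(max) NULL)", connection))
                  {
                      create.ExecuteNonQuery();
                  }

                  // 2) Bulk copy this thread's rows into its private worktable;
                  //    no other thread touches it, so there is no lock contention here.
                  using (var bulkCopy = new SqlBulkCopy(connection))
                  {
                      bulkCopy.DestinationTableName = workTable;
                      bulkCopy.WriteToServer(rows);
                  }

                  // 3) Hand off to the load procedure, which inserts from the worktable
                  //    into the target table and is where deadlock/retry handling can live.
                  using (var load = new SqlCommand("dbo.LoadWorkTable", connection))
                  {
                      load.CommandType = CommandType.StoredProcedure;
                      load.Parameters.AddWithValue("@WorkTableName", workTable);
                      load.ExecuteNonQuery();
                  }

                  // 4) Drop the worktable once the load has completed.
                  using (var drop = new SqlCommand("DROP TABLE " + workTable, connection))
                  {
                      drop.ExecuteNonQuery();
                  }
              }
          }
      }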
