• SQL Server handles concurrent operations in whatever manner the developers tell it to.  It can enforce strict table locking during some operations, which speeds up a single transaction at the cost of blocking all other concurrent transactions.  This is common for large bulk inserts, but it is not mandatory.  Inserting one row at a time allows plenty of concurrent operations, but the load might take forever.
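As a rough sketch, a bulk load can request a table-level lock up front (the table name and file path here are hypothetical):

```sql
-- Minimal sketch: a bulk load that takes one table-level lock.
-- dbo.StagingSales and the file path are hypothetical names.
BULK INSERT dbo.StagingSales
FROM 'C:\loads\sales.csv'
WITH (
    TABLOCK,               -- one table lock: fast load, but blocks other writers
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
```

With TABLOCK the load can also qualify for minimal logging, which is much of why the strict-locking approach is popular for big imports.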

    There are more imaginative ways to load tables while still allowing concurrency.  You can enable snapshot isolation, which lets concurrent sessions read from a snapshot of the table taken when the load transaction began.  There are partitioning tricks: create another table with the same structure, load it, and then swap it into the original table with a partition-switch metadata operation.  Or you might keep two identical tables and a synonym that points to one of them.  While readers go through the synonym, you import data into the other table; when the load is finished, you just redefine the synonym to point to the table with the new data.
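The synonym trick might look something like this sketch (all object names are hypothetical):

```sql
-- Two identical tables; readers always query through the synonym.
CREATE TABLE dbo.Sales_A (Id int PRIMARY KEY, Amount money);
CREATE TABLE dbo.Sales_B (Id int PRIMARY KEY, Amount money);
CREATE SYNONYM dbo.Sales FOR dbo.Sales_A;

-- Load the idle table while readers keep querying dbo.Sales (= Sales_A).
INSERT INTO dbo.Sales_B (Id, Amount) VALUES (1, 10.00);

-- When the load finishes, repoint the synonym; this is a quick metadata change.
DROP SYNONYM dbo.Sales;
CREATE SYNONYM dbo.Sales FOR dbo.Sales_B;
```

Queries in flight against the old table finish normally; new queries against dbo.Sales pick up the freshly loaded table.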

    Wrapping the read API calls in error-recovery code that waits a bit and tries again can also help.  A page that displays "Data refresh in progress, try again in a few minutes" is better than one that says "Error 500: you're screwed".
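The retry would normally live in the application's data-access layer, but the shape of it can be sketched in T-SQL as well (dbo.Sales is a hypothetical table; three attempts and a five-second wait are arbitrary choices):

```sql
-- Hedged sketch of a wait-and-retry loop around a read.
DECLARE @attempts int = 0;
WHILE @attempts < 3
BEGIN
    BEGIN TRY
        SELECT COUNT(*) FROM dbo.Sales;   -- the read that may fail mid-refresh
        BREAK;                            -- success: stop retrying
    END TRY
    BEGIN CATCH
        SET @attempts += 1;
        WAITFOR DELAY '00:00:05';         -- wait a bit, then try again
    END CATCH
END
```

If all attempts fail, that is the point to show the friendly "refresh in progress" page instead of a raw error.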