November 4, 2018 at 11:40 pm
Comments posted to this topic are about the item Import flat files to SQL Server on Linux using Azure Data Studio
Carlos Robles
DBA Mastery
Data Platform MVP | MCSE, MCSA, MCTS, MCP | ITIL v3
w: www.dbamastery.com
e: crobles@dbamastery.com
June 20, 2019 at 3:45 pm
I've had success with the wizard when using smaller files. However, when I try to import a big file (700+ MB, 255 columns, ~450k rows), I'm stuck with the loading circle on step two, before I can preview the data. I have even left the file to import overnight and was still stuck with the loading circle the next day.

I was wondering: if I increased the CPUs allocated to Docker (currently set at 2) along with the runtime memory (currently 4 GB), would that speed up the loading time for that file? I'm not too technical when it comes to hardware components such as CPUs and RAM, but I have a general understanding (new to Docker as well). I also have another laptop that has a GPU, and I'm wondering if I can put that to use within Docker for this container?

I know this extension is still somewhat new, so is this something on their end, or is there anything else I can try?
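For what it's worth, Docker Desktop's own settings cap what any container can use, and limits can also be set per container when it is created. Below is a minimal sketch of recreating the SQL Server container with more resources; the container name, SA password, image tag, and the 4-CPU / 8 GB figures are all placeholders, not a tested fix. (As far as I know, the SQL Server engine doesn't use a GPU, so the other laptop is unlikely to help with an import.)

    # Stop and remove the existing container first (its data is lost unless a
    # volume was mounted), then recreate it with explicit CPU/memory limits.
    # Container name, password, image tag, and limits are placeholders.
    docker stop sql1 && docker rm sql1
    docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
        -p 1433:1433 --name sql1 \
        --cpus="4" --memory="8g" \
        -d mcr.microsoft.com/mssql/server:2017-latest

Note that --cpus and --memory only raise the ceiling for this one container; on Docker Desktop they are still bounded by the VM's overall CPU and memory settings.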
Next step: I'm going to try to upload in chunks, starting big with 75,000-100,000 records per chunk to see if that will work, or try converting the CSV to TXT or JSON. There are also a lot of NULLs in the file; I'm not sure if that can throw off the import steps. I also know that my file is wide (255 columns), and that this is usually the maximum for some applications and RDBMSs.
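If the wizard keeps hanging, one way to do the chunked load outside of it is bcp, which ships with the mssql-tools package on Linux and commits in batches. A rough sketch, assuming a comma-delimited file with a header row and a target table that already exists; the server, database, table, file path, password, and the 75,000 batch size are all placeholders:

    # -c = character mode, -t ',' = comma field terminator,
    # -F 2 = skip the header row, -b 75000 = commit every 75,000 rows.
    # All names and the password below are placeholders.
    bcp StageDB.dbo.WideImport in /data/bigfile.csv \
        -S localhost,1433 -U sa -P 'YourStrong!Passw0rd' \
        -c -t ',' -F 2 -b 75000

With -b, each batch commits on its own, so a failure partway through only rolls back the current batch rather than the whole load. One caveat: bcp in character mode doesn't strip quotes around fields, so a quoted CSV may need cleanup first.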
Thanks