I have to import CSV files with hundreds of columns and only a few thousand records. The Import Flat File wizard does not do full sampling and may identify a column whose longest value is 60 characters as nvarchar(50), or one whose longest value is 1950 characters as nvarchar(1800). When the import fails, it does not tell me which column's data was truncated, so I have to run formulas in Excel to determine the maximum length of each column, which is quite annoying. Is there a setting that tells the Import Flat File wizard to do full sampling of the file so the data types and lengths are determined correctly?
Ok... so let's say that you had such a thing... What would you do when the data grows bigger than the columns in your permanent tables?
The real key here is that you're getting the data from someone or something. There should be some knowledge as to what the maximum size will ever be for any column, just as there should be an assigned datatype.
If no such knowledge or documentation is available, then set everything in the staging table to as big as possible and go from there, with the understanding that someday a file will show up that breaks the columns in your permanent tables.
And this is just as fast as doing anything else because a scan of the data is a scan of the data. You might as well do it all in one scan and call it a day.
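Just as a minimal sketch of what I mean (the table and column names here are made up, and the nvarchar(4000) sizing is only an example; make the columns as wide as you need):

```sql
-- Hypothetical staging table with everything sized "as big as possible".
CREATE TABLE dbo.CsvStaging
(
    Col001 nvarchar(4000) NULL,
    Col002 nvarchar(4000) NULL,
    Col003 nvarchar(4000) NULL
    -- ...repeat for the remaining columns...
);

-- After the load, one scan reports the actual maximum length of every column,
-- which is the same thing you've been working out with Excel formulas.
SELECT  MAX(LEN(Col001)) AS MaxLen_Col001,
        MAX(LEN(Col002)) AS MaxLen_Col002,
        MAX(LEN(Col003)) AS MaxLen_Col003
FROM    dbo.CsvStaging;
```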
As for the wizard not telling you which column(s) is/are being truncated: which version of SQL Server are you using? A good ol' fashioned BULK INSERT on later versions will tell you.
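Something along these lines works on SQL Server 2017 or later (the file path, error file path, and table name are just placeholders). When a value is too wide for a staging column, the error message itself names the row and column, along the lines of "Bulk load data conversion error (truncation) for row 3, column 12 (SomeColumn)", so there's no more guessing:

```sql
-- Minimal BULK INSERT sketch; paths and names are placeholders for illustration.
-- FORMAT = 'CSV' requires SQL Server 2017 or later.
BULK INSERT dbo.CsvStaging
FROM 'C:\Import\SourceFile.csv'
WITH
(
    FORMAT          = 'CSV',
    FIRSTROW        = 2,                          -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    ERRORFILE       = 'C:\Import\SourceFile.err', -- rows that fail land here
    TABLOCK
);
```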