I think that the points that Phil and Eirikur have brought up are really good ones.
To amplify, yes, it is possible and really not so difficult to have something auto-magically read through, say, the first ten rows to find the "first longest row", which should be the column header row, interrogate that row for the header names and delimiters, and then have the code build a staging table with all text columns.
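To illustrate the point, here's a minimal sketch of that "first longest row" idea. The function names, the candidate delimiter list, and the VARCHAR(8000) default are my own assumptions for the example, not anyone's production code:

```python
def sniff_header(lines, max_rows=10, candidates=(",", "\t", "|", ";")):
    """Scan up to max_rows lines; for each candidate delimiter, find the
    'first longest row' (the one that splits into the most fields) and
    treat it as the column header row."""
    best = ("", [])
    for delim in candidates:
        for line in lines[:max_rows]:
            fields = line.rstrip("\r\n").split(delim)
            if len(fields) > len(best[1]):
                best = (delim, fields)  # first longest row wins ties
    return best  # (delimiter, header column names)

def build_staging_ddl(table_name, header):
    """Emit a CREATE TABLE for a staging table with all text columns."""
    cols = ",\n".join(f"    [{name.strip()}] VARCHAR(8000)" for name in header)
    return f"CREATE TABLE [{table_name}] (\n{cols}\n);"
```

Point it at the first ten lines of a file and it will hand back a delimiter, a header list, and an all-text staging table. Which is exactly where the real problem starts, as below.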
The problem is, what do you do with 50 tables that no one knows anything about? Even if you have code that correctly guesses datatypes, who knows what each column in each table actually means? It's like loading an XML file. Yes... relatively easy to split out the elements but what the hell do they mean and how should they be used?
There has to be a plan here. Someone built these files, so someone has to have a record layout for them in a spreadsheet somewhere. Someone has to know what these files contain and have a plan for what to do with the data. Use that layout to generate a handful of BCP format files automatically from the spreadsheet (or Word document, text file, or whatever), which is much quicker than manually mapping columns in SSIS.
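Generating those format files is trivial once you have the layout. A sketch of the idea, assuming a classic (non-XML) bcp format file, all-character fields, and a made-up column list; widths and names would really come from the record layout:

```python
def make_bcp_format(columns, delimiter=",", version="14.0"):
    """Generate a non-XML bcp format file for loading every field as
    SQLCHAR into a text-only staging table.

    Each column line is: host field order, host data type, prefix length,
    host data length, terminator, server column order, server column name,
    collation (left empty here).
    """
    lines = [version, str(len(columns))]
    for i, name in enumerate(columns, start=1):
        # The last field is terminated by the row terminator, not the delimiter.
        term = r"\r\n" if i == len(columns) else delimiter
        lines.append(f'{i} SQLCHAR 0 8000 "{term}" {i} {name} ""')
    return "\n".join(lines) + "\n"
```

Loop that over the 50 layouts in the spreadsheet and you have 50 format files in seconds, instead of 50 hand-built SSIS mappings.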
Importing 50 data files of unknown content into equally obscure tables just doesn't strike me as the right way to do any of this.
--Jeff Moden
Change is inevitable... Change for the better is not.