Something inside me really wanted to hate this technique, but I know it has its uses in a pure "staging" environment (i.e. one where you have already committed to do the T/L part of ETL in procedures between the stage and the ODS).
That said, I do want to offer some pointers on the .NET code itself, which I feel would make this a better teaching article or reusable code block (minimal sketches for several of these points follow the list):
1. Remove all MessageBox.Show calls and replace them with Dts.Log calls -- it might be obvious, but no one wants to press OK 30,000 times to load a file
2. In any case, favour StringBuilder over building up strings with message += "blah"
3. Don't populate an ArrayList with the lines; they have no need to stay in memory once they have been read and inserted. Instead, do your processing inside the Do While Not (sLine Is Nothing) loop and discard each row once it has been handled. You can use an isFirst Boolean or a lineCount Integer to handle your first-line semantics inside that loop -- this should allow for files far larger than 10,000 rows (see the streaming sketch after this list)
4. Consider System.IO.Path.Combine over (di.ToString & fi.ToString) when building paths programmatically
4a. For what it is worth, fi.FullName is the full path to the file (so the di.ToString & fi.ToString combo is not needed)
5. Consider System.IO.Path.GetFileNameWithoutExtension(fi.Name) over fi.ToString.Substring(0, fi.ToString.Length - 4) -- it also copes with extensions that are not three characters long
6. Wrap your StreamReader code in a Using/End Using block [i.e. Using objReader As StreamReader = New StreamReader(...)] -- if the ReadLine code throws an exception (e.g. if a line is longer than 2 billion characters), the End Using will automatically clean up
7. Consider batching your inserts, running 10-100 at a time. Build the insert statements up in a StringBuilder, separating with a semi-colon/newline combination [builder.Append(sql).AppendLine(";")] and run them when ((lineCount - 1) Mod batchSize = 0) -- the -1 accounts for the header line
7a. Consider offering the batchSize as a Dts.Variable
8. Properly escape table/column names, either with a RegEx replacement for any non-alphanumeric character, or by using a SqlClient.SqlCommandBuilder object, with QuotePrefix = "[", QuoteSuffix = "]", using the QuoteIdentifier method.
9. Properly escape CSV data values, with a SqlClient.SqlCommandBuilder object, with QuotePrefix = "'", QuoteSuffix = "'", using the QuoteIdentifier method.
9a. Alternatively, consider building your insert string as INSERT INTO table VALUES (@p1, @p2, @p3, ...); and using the Parameters collection on the SqlCommand object to populate the values
10. Consider NVARCHAR(MAX) instead of VARCHAR(20) as your default data type -- .NET strings are UTF-16/UCS-2 as standard, so it makes sense to store them as such, and the MAX length prevents data truncation
11. Catch specific exceptions if you are going to handle them -- e.g. IO.IOException is thrown by FileInfo.MoveTo if the destination file already exists
12a. Consider checking that the backup folder exists before attempting to move files into it [If Not Directory.Exists(Path.Combine(di.FullName, "Backup")) Then Directory.CreateDirectory(Path.Combine(di.FullName, "Backup"))]
12b. Consider deleting any existing backup file before attempting to move the source file, since MoveTo will throw if the destination exists [If File.Exists(Path.Combine(Path.Combine(di.FullName, "Backup"), fi.Name)) Then File.Delete(Path.Combine(Path.Combine(di.FullName, "Backup"), fi.Name))]
12c. Consider storing the backup folder location in a variable to cut down on the Path.Combine calls -- and allow the user to configure this location with a Dts.Variable
13. Consider adding an IDENTITY PK to your imported tables, to aid in row identification later in the T/L process, named either tableName_id or according to a Dts.Variable -- you could turn this feature on or off based on whether the variable is set (String.IsNullOrEmpty is your friend here).
14. Consider a recreateSchema Dts.Variable which triggers dropping and recreating the existing tables -- when it is off, further development of the staging database (e.g. indexing, partitioning) would be retained between data loads
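To make a few of these concrete, here are some minimal sketches against the article's code -- any name that is not in the article (variable names, helper routines) is a placeholder, not a claim about the original. First, re point 1, the Dts.Log swap (the Script Task exposes the three-argument signature; 0 and Nothing are fine for the data code and data bytes):

    ' Instead of MessageBox.Show(message):
    Dts.Log(message, 0, Nothing)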
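Re points 2, 3 and 6, a sketch of the streaming read, keeping the article's objReader/sLine names; ProcessHeader and ProcessRow are hypothetical stand-ins for the first-line and insert logic:

    Using objReader As New IO.StreamReader(fi.FullName)
        Dim lineCount As Integer = 0
        Dim sLine As String = objReader.ReadLine()
        Do While Not (sLine Is Nothing)
            lineCount += 1
            If lineCount = 1 Then
                ProcessHeader(sLine)  ' header row: derive table/column names
            Else
                ProcessRow(sLine)     ' data row: build and run the INSERT
            End If
            sLine = objReader.ReadLine()  ' the previous line can now be collected
        Loop
    End Using    ' disposes the reader even if ReadLine throws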
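Re points 4, 4a and 5, the path helpers side by side (tableName and backupFolder are illustrative names):

    Dim fullPath As String = fi.FullName   ' replaces di.ToString & fi.ToString
    Dim tableName As String = IO.Path.GetFileNameWithoutExtension(fi.Name)
    Dim backupFolder As String = IO.Path.Combine(di.FullName, "Backup")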
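Re points 7 and 7a, a batching sketch, assuming cmd is the article's open SqlCommand, sql is the per-row INSERT built inside the loop, and a batchSize variable has been added to the package:

    Dim batchSize As Integer = CInt(Dts.Variables("batchSize").Value)
    Dim batch As New Text.StringBuilder()

    ' inside the row loop:
    batch.Append(sql).AppendLine(";")
    If (lineCount - 1) Mod batchSize = 0 Then
        cmd.CommandText = batch.ToString()
        cmd.ExecuteNonQuery()
        batch.Length = 0   ' reset for the next batch
    End If

    ' after the loop, flush whatever is left:
    If batch.Length > 0 Then
        cmd.CommandText = batch.ToString()
        cmd.ExecuteNonQuery()
    End If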
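Re points 8 and 9, the same SqlCommandBuilder trick covers both identifiers and string literals (tableName and rawValue are illustrative):

    Dim quoter As New System.Data.SqlClient.SqlCommandBuilder()
    quoter.QuotePrefix = "["
    quoter.QuoteSuffix = "]"
    Dim safeTable As String = quoter.QuoteIdentifier(tableName)       ' e.g. [My Table]

    Dim literalQuoter As New System.Data.SqlClient.SqlCommandBuilder()
    literalQuoter.QuotePrefix = "'"
    literalQuoter.QuoteSuffix = "'"
    Dim safeValue As String = literalQuoter.QuoteIdentifier(rawValue) ' doubles embedded quotes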
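Re point 9a, a parameterised alternative for an illustrative three-column file (conn is the article's open connection, fields the split row):

    Dim cmd As New System.Data.SqlClient.SqlCommand( _
        "INSERT INTO " & safeTable & " VALUES (@p1, @p2, @p3)", conn)
    cmd.Parameters.AddWithValue("@p1", fields(0))
    cmd.Parameters.AddWithValue("@p2", fields(1))
    cmd.Parameters.AddWithValue("@p3", fields(2))
    cmd.ExecuteNonQuery()

Note this trades away the point-7 batching unless you number the parameters per batch.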
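Re points 11, 12a and 12b together, with the backup folder coming from an assumed backupFolder package variable (per 12c):

    Dim backupFolder As String = CStr(Dts.Variables("backupFolder").Value)
    If Not IO.Directory.Exists(backupFolder) Then IO.Directory.CreateDirectory(backupFolder)

    Dim backupPath As String = IO.Path.Combine(backupFolder, fi.Name)
    Try
        If IO.File.Exists(backupPath) Then IO.File.Delete(backupPath)
        fi.MoveTo(backupPath)
    Catch ex As IO.IOException   ' the specific exception MoveTo throws
        Dts.Log("Could not back up " & fi.Name & ": " & ex.Message, 0, Nothing)
    End Try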
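Re points 10 and 13, a sketch of the CREATE TABLE preamble, reusing the quoter from the identifier sketch above; idColumnName is an assumed package variable:

    Dim idColumn As String = CStr(Dts.Variables("idColumnName").Value)
    Dim createSql As New Text.StringBuilder()
    createSql.Append("CREATE TABLE ").Append(safeTable).Append(" (")
    If Not String.IsNullOrEmpty(idColumn) Then
        createSql.Append(quoter.QuoteIdentifier(idColumn)).Append(" INT IDENTITY(1,1) PRIMARY KEY, ")
    End If
    ' ...then one NVARCHAR(MAX) column per header field, and a closing ")"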
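And re point 14, the drop guarded by an assumed recreateSchema Boolean variable:

    If CBool(Dts.Variables("recreateSchema").Value) Then
        cmd.CommandText = "IF OBJECT_ID(N'" & safeTable & "') IS NOT NULL DROP TABLE " & safeTable
        cmd.ExecuteNonQuery()
    End If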