• usererror - Monday, February 5, 2018 1:53 PM

    Zidar - Monday, February 5, 2018 1:35 PM

    Maybe you should read something on relational database design, independent of SQL. If you need to constantly add columns to your tables, then your problem is of a different sort. An MS SQL database is not used for the same things as CSV or Excel. Whatever you are doing to CSV files or Excel files should not be mechanically reproduced in any database system. That is a bigger problem than a syntax error in some script.

    Zidar,
    At the moment I'm following what the previous "DBA" was doing.  Every month we have to assemble a report.  It is significantly easier to assemble the report using T-SQL queries.  I attempted to do it with Excel at one point and it was a fruitless effort; the sheets became too large.  The data in the database is not used for storage of any kind.  I'm just using SQL to manipulate the data, because as the previous person found, trying to do it with Excel was a waste of time.  All the data imported into SQL comes from CSVs or spreadsheets, though, which are quite large: 50,000 rows or more.  In a nutshell this is the process:

    0. Remove the prior month's tables.
    1. Download membership 'roster' CSVs from insurance companies (each with different column headers)
    2. Import the rosters into their own tables.
    3. Add a "Program" column to each table and fill that column with the program name for that particular roster.
    4. Filter the Membership Rosters to remove certain records.
    5. Join the Membership Rosters into one large Roster.
    6. Use unions and joins on certain data columns to create two new tables.
    7. Export the records from the two new tables created in Step 6 and send them to the parties requiring them.
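    Steps 3–5 above might look something like the following in T-SQL. This is only a minimal sketch: the table names (`RosterA`, `RosterB`, `CombinedRoster`), column names (`MemberID`, `Status`), and the filter condition are all assumptions for illustration, since the actual schemas come from each insurer's CSV.

    ```sql
    -- Step 3: add a Program column to each roster table and tag its rows
    -- (table and column names below are hypothetical)
    ALTER TABLE dbo.RosterA ADD Program varchar(50) NULL;
    UPDATE dbo.RosterA SET Program = 'Program A';

    ALTER TABLE dbo.RosterB ADD Program varchar(50) NULL;
    UPDATE dbo.RosterB SET Program = 'Program B';

    -- Step 4: filter out certain records (example condition only)
    DELETE FROM dbo.RosterA WHERE Status = 'Terminated';
    DELETE FROM dbo.RosterB WHERE Status = 'Terminated';

    -- Step 5: combine the filtered rosters into one large roster
    SELECT MemberID, Program
    INTO dbo.CombinedRoster
    FROM dbo.RosterA
    UNION ALL
    SELECT MemberID, Program
    FROM dbo.RosterB;
    ```

    Because the source files have different column headers, the per-roster `SELECT` lists would need to map each insurer's columns onto a common set before the `UNION ALL`.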

    I realize this is not glorious DBA (and I use the title DBA loosely) work, but it is far more efficient than Excel.

    Sounds like a job for an SSIS package, rather than a stored procedure.
    (Please note, I'm not an SSIS developer, but I've got developers who do similar workloads of importing data)

    As for step 0, why remove the prior tables?  I'm presuming that means you drop them?  Why not simply truncate the tables, which would keep the table structures (columns, etc.), reset any identity values, and save some log thrashing?  Or do the headers in the files you're importing change from month to month?
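    The contrast being suggested is roughly this (table name hypothetical; `DROP TABLE IF EXISTS` requires SQL Server 2016 or later, and `TRUNCATE` fails if the table is referenced by a foreign key):

    ```sql
    -- Option A: keep the table definition, remove all rows, reset identity
    TRUNCATE TABLE dbo.RosterA;

    -- Option B: the current drop-and-recreate pattern
    DROP TABLE IF EXISTS dbo.RosterA;
    -- ...followed by a CREATE TABLE / import each month
    ```

    Truncating only works if the incoming file layout is stable month to month; if the insurers change their headers, dropping and recreating may be unavoidable.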