String or binary data would be truncated

  • For those having an issue with this error, an easy workaround is to change the INSERT INTO/SELECT statement into a SELECT... INTO statement. That way the new table will contain the actual size of every column, which can be compared against the target table's schema column sizes. This workaround is just a temporary way of locating the error; after the culprits are found, the SELECT... INTO statement needs to be converted back into the INSERT INTO/SELECT statement. I hope it helps...
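
    One way to sketch the comparison described above (table names here are hypothetical): after the SELECT... INTO, query sys.columns to find any staged column whose width exceeds the target's declared width.

    -- After: SELECT ... INTO dbo.Staging_Diag FROM <source query>;
    -- dbo.Staging_Diag and dbo.TargetTable are hypothetical names.
    SELECT s.name       AS column_name,
           s.max_length AS staged_max_length,
           t.max_length AS target_max_length
    FROM sys.columns AS s
    JOIN sys.columns AS t
        ON t.name = s.name
       AND t.object_id = OBJECT_ID('dbo.TargetTable')
    WHERE s.object_id = OBJECT_ID('dbo.Staging_Diag')
      AND s.max_length > t.max_length;  -- wider than the target allows

    Note that max_length is in bytes (so nvarchar columns report double the character count) and (max) types report -1, which needs special handling.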

  • orosado - Thursday, June 28, 2018 10:04 AM

    For those having an issue with this error, an easy workaround is to change the INSERT INTO/SELECT statement into a SELECT... INTO statement. That way the new table will contain the actual size of every column, which can be compared against the target table's schema column sizes. This workaround is just a temporary way of locating the error; after the culprits are found, the SELECT... INTO statement needs to be converted back into the INSERT INTO/SELECT statement. I hope it helps...

    That works if you do an insert/select. If you're loading from SSIS or BULK INSERT, this error is maddening. You often need to create a table with varchar(max) columns and then start searching.

  • Steve Jones - SSC Editor - Thursday, June 28, 2018 9:50 AM

    Matthew Joughin - Thursday, June 28, 2018 4:53 AM

    I think a table or column option would be a great solution for this - then it's up to the developer to turn it on for the table or column if they want it to truncate varchars and nvarchars.

    It's not always a fundamental coding issue. This could be a data issue for a single data load.

    I do get that - but that's exactly it. For mass data imports, which I used to do for every client that went live on my old company's system, there was often data from the previous system that didn't quite fit - and it didn't really matter.
    Anyway - if Microsoft gave a better error message, as you're asking, that would be the better solution.

    If you don't have the time to do it right, when will you have the time to do it over?

  • For sure. This is a horrible error message.

  • If I had ever written code like this, I would hope I would have been fired.  Unfortunately, my employers pretty much tolerated this too.  If the code is smart enough to tell that two fields are incompatible, then tell me which fields.  That should be the easy part.  I have fought this issue while learning to create packages for my personal use in retirement.  I finally ended up manually writing code that added a single field at a time out of records with 20 or 30 fields.  There is no excuse for this.

    I fight the same lack of caring and the same excuses with another popular product I use, Quicken, which used to be owned by Intuit but is now owned by some accounting firm.  They want you to use their software, which I have for over 28 years, to record your financial data.  OK, so I have a 28-year history of investment transactions in their files.  And yes, you can even EXPORT the investment accounts.  And you can also IMPORT transactions from financial institutions. OH, BUT WAIT!

    You can export your data to several different formats, but you can't IMPORT that same data back into their own software.   Account types other than investments CAN be re-imported.   The excuse is that investment accounts 'don't have a transaction register'.  OK, so what is that thing you use for me to enter and display my data?

    In this situation, I and many others will not be spending any more money on their software or services.  So who wins?

    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )

  • Steve Jones - SSC Editor - Thursday, June 28, 2018 9:50 AM

    Matthew Joughin - Thursday, June 28, 2018 4:53 AM

    I think a table or column option would be a great solution for this - then it's up to the developer to turn it on for the table or column if they want it to truncate varchars and nvarchars.

    It's not always a fundamental coding issue. This could be a data issue for a single data load.

    OK, but even if it IS a data issue, the CODE should handle it gracefully.  THEN you move on to the data problem.

    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )

  • skeleton567 - Thursday, June 28, 2018 1:16 PM

    OK, but even if it IS a data issue, the CODE should handle it gracefully.  THEN you move on to the data problem.

    That, I agree with.

  • Correction to the article - "*some* developers dislike relational schemas"
     I would not want to use anything but a relational schema.

  • The Informix relational database engine has had this feature for years.  During a bulk load, you can tell it to ignore the error or place the offending record into a separate file.

    I am an "ex" Informix DBA.
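
    SQL Server's BULK INSERT has a somewhat similar option: ERRORFILE routes rows that fail to parse or convert into a separate file instead of aborting the whole load. A minimal sketch (file paths and the table name are hypothetical, and this may not surface every truncation case):

    BULK INSERT dbo.TargetTable
    FROM 'C:\loads\data.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        ERRORFILE       = 'C:\loads\data_rejects.txt',  -- offending rows land here
        MAXERRORS       = 100  -- keep loading past a handful of bad rows
    );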

  • It would be very helpful if the data truncation error message were more like the duplicate key message, with the actual column name and offending value included.

    If you anticipate occasional truncation errors on a specific table, perhaps a staging table for an ETL process, then one way to help mitigate this problem is to define the VARCHAR length greater than what is allowed, but then declare named check constraints to throw the exceptions.

    For example:

    CREATE TABLE MyTable
    ( col1 VARCHAR(30)
    CONSTRAINT CC_MyTable_col1
    CHECK (DATALENGTH(col1) <= 3) );

    INSERT INTO MyTable ( col1 ) VALUES ('aaaa');

    Msg 547, Level 16, State 0, Line 10
    The INSERT statement conflicted with the CHECK constraint "CC_MyTable_col1".
    The conflict occurred in database "master", table "dbo.MyTable", column 'col1'.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • And with some row or list of data values.

  • I agree the error provides no useful information for correcting the problem and it should be something that Microsoft can correct relatively easily.

    However, instead of letting the problem annoy us given that we know it exists, we should build our systems to allow for the problem.  In an ETL process, have a data validation step that catches the problems before a load.  In a transaction based application, validate the data before an insert is attempted.  Yes, this is extra work for us (that one can argue shouldn't be necessary) but it is proactive and it eliminates frustration that we are well aware may occur.

    This isn't about letting Microsoft off the hook, just minimizing our own annoyance levels.
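
    A minimal sketch of that validation step (the table, column, and 50-character width here are hypothetical): stage into oversized columns first, then flag any row that would truncate before inserting into the real target.

    -- Staging table declared wider than the target (e.g. VARCHAR(MAX))
    SELECT StagingID,
           CustomerName,
           LEN(CustomerName) AS actual_length
    FROM dbo.Staging_Customers
    WHERE LEN(CustomerName) > 50;  -- 50 = declared width in the target table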

  • mercurej - Monday, July 2, 2018 8:39 AM

    I agree the error provides no useful information for correcting the problem and it should be something that Microsoft can correct relatively easily.

    However, instead of letting the problem annoy us given that we know it exists, we should build our systems to allow for the problem.  In an ETL process, have a data validation step that catches the problems before a load.  In a transaction based application, validate the data before an insert is attempted.  Yes, this is extra work for us (that one can argue shouldn't be necessary) but it is proactive and it eliminates frustration that we are well aware may occur.

    This isn't about letting Microsoft off the hook, just minimizing our own annoyance levels.

    We can do these things, but we can't always catch where/when this error happens in a load. If we engineer things so that we always can, then we're going to have a very complex, double-work effort. How can you know that a name field you've spec'd at 50 characters has one row that's 51 characters somewhere in a flat file of 1,000,000 rows? Not unless you read and pre-process the file, or you load into a large staging table.

    This isn't a simple, easy-to-account-for issue. It's one that's a PIA when you run into it, despite your efforts.

  • Steve Jones - SSC Editor - Thursday, June 28, 2018 9:45 AM

    Japie Botma - Thursday, June 28, 2018 2:15 AM

    I am sorry Steve, but truncating data without the user knowing is an error.  It is frustrating that the source and target schema don't match, but it is still an error.  Either the source data is wrong (swapped columns by mistake, etc) or the target schema is wrong.  Either way: It should tell me that.  A simple substring in the source query can fix it, if you are willing to lose data.

    You've missed the entire point of the piece. This isn't about eliminating the error or silently truncating data. This is about ensuring the developer or DBA can find the source of the error easily.

    Totally agree with you, Steve.  Nowhere did you suggest it wasn't an error, and the link you provided is pretty good confirmation of that.
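
    For reference, the substring workaround quoted above could be sketched like this (names and the 50-character width are hypothetical); note that LEFT silently discards the overflow, so it should be a deliberate choice:

    INSERT INTO dbo.TargetTable (CustomerName)
    SELECT LEFT(src.CustomerName, 50)  -- explicitly truncate to the target width
    FROM dbo.SourceTable AS src;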

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
