mister.magoo (5/15/2014)
Try copying the first data row down so all the rows have exactly the same data - does it fail then also?
Or copy the contents into a brand new Excel file.
PS: you can ignore the warning about global shared memory
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
You could also try saving the contents of the Excel file to CSV and importing that.
If it still fails, you'll know that you have to focus your investigations on the target rather than the source.
Is the target a SQL Server db, by the way?
Phil Parkin (5/16/2014)
You could also try saving the contents of the Excel file to CSV and importing that. If it still fails, you'll know that you have to focus your investigations on the target rather than the source.
Is the target a SQL Server db, by the way?
Target is SQL Server 2008. The table has no constraints and no NOT NULL fields. All the fields are nvarchar(255), except the Description field, which is nvarchar(1000), plus two datetime fields. The longest Description value in the incoming file is 240 characters.
I did convert to a CSV. It gave me a variety of problems. It didn't like the "/" character in some field names, so I changed that. It also kept giving me a lot of truncation errors, even though the incoming data was not as long as the field, even taking into account the doubling for Unicode. I told it to ignore truncation errors and, sure enough, it truncated the Description field, trying to put 240 characters into an nvarchar(1000) field.
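For anyone chasing similar truncation warnings, it can help to measure the actual maximum field lengths in the CSV before blaming the target table. A minimal Python sketch (the file path and column names in the usage test are hypothetical, and this checks character counts, not the byte doubling nvarchar applies):

```python
import csv

def max_field_lengths(path, encoding="utf-8"):
    """Return the longest value (in characters) seen in each column of a CSV."""
    with open(path, newline="", encoding=encoding) as f:
        reader = csv.DictReader(f)
        longest = {name: 0 for name in reader.fieldnames}
        for row in reader:
            for name, value in row.items():
                if value is not None and len(value) > longest[name]:
                    longest[name] = len(value)
        return longest
```

Comparing that report against the table definition shows immediately whether the source data or the import metadata is responsible for a truncation error.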
I finally got it to import, but it pulled in 37,260 out of 37,262 records.
I also tried creating an SSIS package automatically, using the [Database]/Tasks/Import Data feature. It added a data conversion transformation to the data flow, converting every column that needed it to Unicode.
EDIT:::
I also just checked the lengths of the data that got imported for that Description field: 50 characters, even though the field is defined as nvarchar(1000).
EDIT2:::
Here is the table definition:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tbl_Archive_mHub_AVL](
[Cco P/N] [nvarchar](255) NULL,
[Description] [nvarchar](1000) NULL,
[Mfg Name] [nvarchar](255) NULL,
[Mfg P/N] [nvarchar](255) NULL,
[Qual Status] [nvarchar](255) NULL,
[Change Date] [datetime] NULL,
[UOM] [nvarchar](255) NULL,
[cmgr] [nvarchar](255) NULL,
[Load_Date] [datetime] NULL
) ON [PRIMARY]
OK, partial solution.
The Description field was truncating because the Excel source connection had the column defined as 50 characters. I guess it defaulted to that because I didn't set it.
I still don't know why it's losing two records out of 37,262.
Do you know by any chance which 2 records it is losing? At the beginning, at the end, somewhere random?
Koen Verbeeck (5/17/2014)
Do you know by any chance which 2 records it is losing? At the beginning, at the end, somewhere random?
Not yet. I have to get both the source and the result into a database where I can compare them (and know I have all the records), like Access. I can't just compare on one field in Excel, as it's a multi-column primary key.
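Since the comparison has to be on a multi-column key, one alternative to pulling everything into Access is to build composite-key tuples from each file and diff the two sets. A hedged Python sketch — the key columns below are an assumption (the thread never states which columns form the key), so substitute the real ones:

```python
import csv

# Assumed composite key; the actual key columns are not stated in the thread.
KEY_COLUMNS = ["Cco P/N", "Mfg Name", "Mfg P/N"]

def key_set(path, key_columns, encoding="utf-8"):
    """Read a CSV and return the set of composite-key tuples it contains."""
    with open(path, newline="", encoding=encoding) as f:
        reader = csv.DictReader(f)
        return {tuple(row[c] for c in key_columns) for row in reader}

def missing_rows(source_path, target_path, key_columns=KEY_COLUMNS):
    """Keys present in the source extract but absent from the target extract."""
    return key_set(source_path, key_columns) - key_set(target_path, key_columns)
```

Running this against an export of the source file and an export of the target table would identify exactly which two records went missing, and whether they sit at the start, the end, or somewhere random.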
Well, I never really solved this.
Or maybe I did. It was not importing all the CSV rows because double quotes within the Description field were throwing values into the wrong columns.
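That failure mode is easy to reproduce: anything that splits a row naively on commas misparses a field containing quotes and commas, while a proper CSV parser handles the RFC 4180 convention of quoting the field and doubling embedded quotes. A small illustration using Python's csv module (the sample row is made up, and this is not the SSIS flat-file parser itself):

```python
import csv
import io

# A row whose middle field contains both a comma and an embedded double
# quote. Per RFC 4180, the field is quoted and the inner quote is doubled.
raw = '123,"Bracket, 2"" steel",ACME\r\n'

# A naive comma split scatters the field across extra columns (4 pieces):
naive = raw.strip().split(",")

# A real CSV parser reassembles the 3 fields correctly:
parsed = next(csv.reader(io.StringIO(raw)))
```

If the export that produced the CSV did not quote and double embedded quotes this way, any downstream parser will shift values into the wrong columns, which matches the symptom here.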
I found an MS Access source for the same data, and that works fine.
Just for anyone else interested: with an ETL issue like this, you can add a Data Viewer to the data flow to see the rows being sent to the destination (for example). Right-click the green arrow leading into the destination and add it there. While the package runs, you can also copy the data out of the viewer and paste it into Excel for further inspection if you wish.