November 22, 2016 at 1:07 pm
When I use ETL or SELECT * INTO TableName to automatically create and populate a table, SQL Server 2012 defaults to using NVarchar for column types.
Is there any way to make SQL Server 2012 default to Varchar when creating columns?
Thanks
November 22, 2016 at 1:16 pm
I don't follow what you mean here or what the issue is. Can you be a bit more explicit and give an example?
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 22, 2016 at 3:49 pm
ETL is a process, not a tool. You'll need to let us know what tool you are using for your ETL process.
SELECT *
INTO TableName
FROM SourceTable
uses the same data types as the source table. If you want to change the data types, you'll need to do an explicit conversion.
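As a minimal sketch of that point (temp-table and column names here are illustrative):

```sql
-- Minimal sketch: SELECT ... INTO copies the source column types
-- verbatim. Table and column names are illustrative.
CREATE TABLE #Src (Name VARCHAR(50), Qty INT);

SELECT *
INTO #Dst
FROM #Src;

-- Name stays VARCHAR(50) and Qty stays INT in the new table.
EXEC tempdb..sp_help '#Dst';

DROP TABLE #Dst;
DROP TABLE #Src;
```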
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
November 23, 2016 at 2:26 am
I believe the user is talking about SSIS, as the default field type there for an alphanumeric field is DT_WSTR, which translates to nvarchar in SQL Server. If so, the answer is no; you have to override this manually. If you are importing from an xls(x) file, then you will need to convert those fields.
Is this what you mean? If not, you need to elaborate more.
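If the importer has already created the table with nvarchar columns, another option is to tighten the types afterwards. A sketch, assuming illustrative table and column names:

```sql
-- Sketch: narrow the auto-created nvarchar/float columns after the
-- import. dbo.ImportedData and its columns are illustrative names,
-- not from your database.
-- Note: ALTER COLUMN fails if existing data can't be converted.
ALTER TABLE dbo.ImportedData ALTER COLUMN CustomerName VARCHAR(255) NULL;
ALTER TABLE dbo.ImportedData ALTER COLUMN OrderDate DATE NULL;
ALTER TABLE dbo.ImportedData ALTER COLUMN OrderCount INT NULL;
```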
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
November 23, 2016 at 8:31 am
Thanks for your replies. Here is an example.
Pull data from Excel table into a table that SQL Server 2012 creates.
SELECT * INTO TableName
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 12.0 Xml;
Database=E:\Spreadsheets\NameOfExcelSpreadsheet.xlsx',
[Data$]);
The result is that SQL Server creates a table with nvarchar instead of varchar for string and even date values.
CREATE TABLE [dbo].[TableName](
[IntegerValue] [float] NULL,
[String] [nvarchar](255) NULL
) ON [PRIMARY]
This also happens when I use SSIS or IBM Data Manager to create a table. This causes problems, particularly when a date is interpreted as nvarchar. SQL Server 2008 did not have this annoying behavior. Is there a way to configure SQL Server 2012 to behave like 2008?
Thanks,
Allen
November 23, 2016 at 8:56 am
As I said above, alphanumeric fields will be treated as an NVARCHAR, not a VARCHAR (technically, as I said, DT_WSTR and DT_STR respectively). This is intentional behaviour, as it doesn't cause any data loss.
If you need the fields to be a different data type, you will need to CAST them prior to the insertion, or use a staging table.
For example:
--Option 1
--Convert and Insert in one go
SELECT CAST(IntegerValue AS INT) AS IntegerValue,
CAST(String AS VARCHAR(255)) AS String
INTO TableName
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 12.0 Xml;
Database=E:\Spreadsheets\NameOfExcelSpreadsheet.xlsx',
[Data$]);
--Option 2
--Staging table
SELECT *
INTO StagingTable
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 12.0 Xml;
Database=E:\Spreadsheets\NameOfExcelSpreadsheet.xlsx',
[Data$]);
SELECT CAST(IntegerValue AS INT) AS IntegerValue,
CAST(String AS VARCHAR(255)) AS String
INTO TableName
FROM StagingTable;
DROP TABLE StagingTable;
Edit: also note that numeric fields are treated as floats.
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
November 23, 2016 at 9:24 am
Let me be a bit more specific on the ETL case.
I use the IBM Data Manager ETL tool to create staging tables. Data Manager creates the new table automatically on the target database server, which is SQL Server 2012. I have no way to affect how Data Manager creates the table. I believe it infers the column types from the source table, and SQL Server then applies its default behavior.
Say the source table has date, varchar and integer types.
The target table that Data Manager creates automatically ends up with nvarchar and float types.
If I use SQL Server 2008, I would get date, varchar and integer types in this case of automatic table creation. This is what I want.
Thanks,
Allen
November 23, 2016 at 9:30 am
If you're using an ETL tool, then the ETL tool will be making all the decisions, not SQL Server. The problem isn't SQL Server, it's your ETL tool.
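One common workaround (sketched here with illustrative names, since I don't know your Data Manager setup) is to create the staging table yourself with the types you want before the ETL run, so the tool only inserts into it rather than auto-creating it:

```sql
-- Sketch: pre-create the target table with explicit types so the ETL
-- tool inserts into it instead of auto-creating nvarchar/float columns.
-- Table and column names are illustrative.
CREATE TABLE dbo.StagingTable (
    LoadDate     DATE         NULL,
    CustomerName VARCHAR(100) NULL,
    OrderCount   INT          NULL
);
```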
You said earlier that your SELECT * INTO [Table2] FROM [Table1] resulted in the data types being changed; however, a simple statement like this will NOT yield that result. Using an INTO statement will always use the exact same data type, precision and scale.
For example, take this example script:
USE DevTestDB
GO
IF EXISTS (SELECT [name] FROM sys.tables WHERE [name] = 'Sample1') BEGIN
DROP TABLE Sample1;
END
CREATE TABLE Sample1 (TextField1 VARCHAR(50),
TextField2 VARCHAR(50),
nTextField1 NVARCHAR(50),
nTextField2 NVARCHAR(50),
IntNumber INT,
DecNumber DECIMAL(5,2));
SELECT c.name as ColumnName, ct.name AS FieldType
FROM sys.tables t
JOIN sys.columns c ON t.object_id = c.object_id
JOIN sys.types ct ON c.system_type_id = ct.system_type_id
WHERE t.[name] = 'Sample1'
AND ct.name != 'sysname'
ORDER BY c.column_id ASC;
IF EXISTS (SELECT [name] FROM sys.tables WHERE [name] = 'Sample2') BEGIN
DROP TABLE Sample2;
END
SELECT TextField1,
CAST(TextField2 AS NVARCHAR(50)) AS TextField2,
nTextField1,
CAST(nTextField2 AS VARCHAR(50)) AS nTextField2,
IntNumber,
DecNumber
INTO Sample2
FROM Sample1;
SELECT c.name as ColumnName, ct.name AS FieldType
FROM sys.tables t
JOIN sys.columns c ON t.object_id = c.object_id
JOIN sys.types ct ON c.system_type_id = ct.system_type_id
WHERE t.[name] = 'Sample2'
AND ct.name != 'sysname'
ORDER BY c.column_id ASC;
DROP TABLE Sample2;
DROP TABLE Sample1;
If you run that, you'll notice that TextField1 is a varchar in both cases, and nTextField1 is an nvarchar. TextField2 and nTextField2 only change as they are cast in the INTO statement.
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk