SQL Server 2000 OPENQUERY insert takes too long
Posted Tuesday, January 27, 2009 6:31 AM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, November 13, 2012 4:27 AM
Points: 3, Visits: 44
To anyone who can help,

I am using SQL Server 2000 and have a linked server to Oracle.

I am inserting data from SQL Server 2000 into the Oracle database.

It is taking 26 minutes to process 2,210 records. At 85 records per minute, this seems a long time for a relatively small number of records.

Someone has mentioned that I may be inserting each record row by row instead of all 2,210 at once. Is this possible, and if so, how can I check?

Basically I want to speed this up (hopefully through configuration) and not have to tell the client that their network is rubbish.

Any suggestions or pushes in the right direction would be welcome.

Provider in use: Microsoft OLE DB Provider for Oracle. (I currently can't get the Oracle Provider for OLE DB to work; it returns error 7399. Should I get it working?)

The query is:

Insert Into OpenQuery (FinPRUAT, 'Select Field1 From Table1 Where 1 = 0')
Select Table1.Field1 From Table1
Inner Join Table2
On Table2.Field1 = Table1.Field1
Where Table2.Field2 = 10001228

Thanks
Post #644083
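[Editor's note] As a point of comparison for the query above: the same insert can also be written against the linked server's four-part name instead of OPENQUERY. This is a sketch only; "ORAUSER" is an assumed Oracle schema owner, and this form is not guaranteed to be faster, since both forms go through the same OLE DB provider.

```sql
-- Sketch: the same insert via the four-part linked-server name.
-- "ORAUSER" is a placeholder for the actual Oracle schema owner.
-- Measure both forms; neither is guaranteed to batch the rows.
INSERT INTO FinPRUAT..ORAUSER.TABLE1 (Field1)
SELECT Table1.Field1
FROM Table1
INNER JOIN Table2
    ON Table2.Field1 = Table1.Field1
WHERE Table2.Field2 = 10001228
```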
Posted Tuesday, January 27, 2009 7:26 AM
Forum Newbie


Hello,

Just to let people know: we have run a trace on the Oracle side, and it shows that the provider does in fact insert the records one at a time.

Is there a bulk-insert equivalent we could use with OPENQUERY? In other words, I would like to send all 2,000+ records in one go.

The problem is that the 'SQL*Net message from client' wait occurs after every insert.
Post #644130
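[Editor's note] One experiment worth trying against the per-insert waits described above (an assumption, not a confirmed fix): run the whole insert inside one explicit distributed transaction, so that Oracle commits once at the end rather than after each statement. This requires MS DTC to be running, and whether it reduces the round-trip waits depends on the provider, so measure before and after.

```sql
-- Sketch: one explicit distributed transaction around the whole insert.
-- Requires MS DTC; the per-row sends come from the provider, but a
-- single commit may still cut down on 'SQL*Net message' waits.
SET XACT_ABORT ON

BEGIN DISTRIBUTED TRANSACTION

INSERT INTO OPENQUERY(FinPRUAT, 'SELECT Field1 FROM Table1 WHERE 1 = 0')
SELECT Table1.Field1
FROM Table1
INNER JOIN Table2
    ON Table2.Field1 = Table1.Field1
WHERE Table2.Field2 = 10001228

COMMIT TRANSACTION
```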
Posted Monday, February 16, 2009 8:36 AM


SSC Veteran


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 7:07 AM
Points: 280, Visits: 356
Hi Stewart

I'm not really an expert, but I don't recall having to specify a WHERE clause in the OPENQUERY destination. Perhaps that is what is causing the row-level operations?

Another thing to check is whether it is a regular issue: does it happen at all times of the day? Check off-peak performance as well.


Max
Post #657796
Posted Monday, February 16, 2009 9:06 AM
Forum Newbie


You're exactly right: peak periods do relate to this. However, the biggest impact is the number of records. At peak time, 20 records will transfer relatively quickly compared to 2,000 records at the same time.

I still have no solution, and am thinking about transferring the data in one big field.
Post #657833
Posted Monday, February 16, 2009 9:53 AM


SSC Veteran


Can you let me know what the purpose of the WHERE clause is in the destination query?

A couple more suggestions:
1. In your Oracle ODBC DSN, disable LOBs if you're not using them.
2. In your Oracle ODBC DSN, increase the prefetch count (memory permitting).
3. Consider dumping the recordset to a file and then launching a batch job that imports the data with sqlldr.


Max
Post #657895
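[Editor's note] Suggestion 3 above can be sketched as follows. All paths, server names, and the Oracle login are placeholders, and xp_cmdshell must be available to the caller; field1.ctl is a SQL*Loader control file (LOAD DATA / INFILE 'C:\temp\field1.csv' / APPEND INTO TABLE Table1 / FIELDS TERMINATED BY ',' / (Field1)).

```sql
-- Sketch of suggestion 3: export with bcp, bulk load with SQL*Loader.
-- All paths, server names, and credentials below are placeholders.

-- 1. Export the rows from SQL Server to a flat file with bcp
--    (-c = character mode, -T = trusted connection):
EXEC master..xp_cmdshell
    'bcp "SELECT Table1.Field1 FROM MyDb.dbo.Table1 INNER JOIN MyDb.dbo.Table2 ON Table2.Field1 = Table1.Field1 WHERE Table2.Field2 = 10001228" queryout C:\temp\field1.csv -c -S MYSQLSERVER -T'

-- 2. Load the file into Oracle in a single pass with SQL*Loader:
EXEC master..xp_cmdshell
    'sqlldr userid=orauser/orapwd@FinPRUAT control=C:\temp\field1.ctl log=C:\temp\field1.log'
```

Unlike the OPENQUERY insert, sqlldr loads the whole file in one run, which avoids the per-row 'SQL*Net message from client' waits entirely.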