Connectivity to SQL Server via VPN terribly slow...

  • swjohnson

    Hall of Fame

    Points: 3254

    Hello all,

    Was wondering if anyone else is in the same boat. Our SQL Servers were centralized to our corporate hosting facility, so now we connect to them with Enterprise Manager and Query Analyzer over VPN.

    The problem isn't connectivity, as we can connect to them just fine. The speed is the issue. We have a very large pipeline between us and them (6 bundled T1's) that is pretty much dedicated to all this. Our upload of data to the server is reasonably responsive (noticeably slower than it was before, but workable), but downloading data is just terrible. I tried to export data (one SQL table with 30,000 records and 7 fields) to an Access DB and it took over an hour, but if I do it locally, it takes about 60 seconds.

    All servers and clients are running TCP/IP as the primary protocol, with Named Pipes as secondary on both. They are all set to use the same TCP/IP port as well.

    Any other thoughts on what I can delve into to solve this slow speed issue? 

    Thanks

    SJ

     

  • DCPeterson

    SSCoach

    Points: 15150

    You might get some improvement if you bump up your network packet size.  This will probably only net a pretty incremental gain on large data transfers, but it could also hurt smaller queries, so don't go overboard...
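    For example, something along these lines checks the server-wide default and requests a larger packet size for a single bulk transfer (the server, database, and table names here are just placeholders):

    ```bat
    REM Check the server-wide default packet size (SQL Server 2000-era osql;
    REM CORPSQL01 is a placeholder server name, -E uses a trusted connection).
    osql -S CORPSQL01 -E -Q "EXEC sp_configure 'network packet size (B)'"

    REM Request a larger packet size for one bulk transfer with bcp's -a
    REM switch (bytes per network packet, for this session only).
    bcp MyDB.dbo.MyTable out mytable.dat -c -S CORPSQL01 -T -a 8192
    ```

    Using -a on the bcp session avoids changing the server-wide setting, so smaller queries from other clients aren't affected.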

    Other than that, it sounds like you need to talk  to your network folks to see what's going on.

    /*****************

    If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek

    *****************/

  • David.Poole

    SSC Guru

    Points: 75395

    In EM, from the Tools > Options menu, switch off server state polling or set it to a much larger interval.

    My understanding of VPN software is very limited but I gather that data is encrypted and the encrypt/decrypt process can have a negative effect on performance.

  • Terry Murphy-161740

    Old Hand

    Points: 361

    I would say you have a network issue. I don't understand what you mean by downloading data, though - are you trying to bring data across the T1 lines locally? Don't forget that a T1 line gives you 1.5 Mbits/sec, so in your case with six lines you have 9 Mbits/sec, which is very slow compared to a 100 Mbits/sec LAN. Try connecting to the server via Terminal Services so you can work locally with and between the servers on the LAN.

  • swjohnson

    Hall of Fame

    Points: 3254

    Yes, by downloading data I mean exporting data to another source such as MS Access or MS Excel. Typically my programmers will create a DTS package or just use the Export Data wizard from EM, depending upon urgency and/or the number of times they will repeat this download.

    The data we are transferring is data that has been crunched for our statistics but we need to move it to MS Access for reporting purposes. For one of our survey projects we will often have over 1 million lines of data and this needs to be moved to Access for the reports to deliver to the client. (Yes, we are currently playing with Reporting Services).

    We have been working with our network team, but they don't have an answer either as to why we can transfer the same amount of data (1 million rows) from here to there in about 10 minutes, but going the other way, from there to here, takes over an hour. Both directions go over the same VPN and use the same firewalls.

    Connecting via Terminal Services or Remote Desktop works just fine for one or two people, because that is all the network folks will allow to connect at any given time. However, the data we are transferring still goes over the VPN between us (Lincoln, NE) and them (Wayne, PA).

    Thanks!

    SJ

  • David.Poole

    SSC Guru

    Points: 75395

    Download to Access

    OK now it becomes clear.  I have experienced what you are going through.

    Firstly, I had two Access databases, template and live. Template was an empty table structure with no indices on the tables, but with a set of queries and macros to generate the indices.

    The starting process was for a batch file to copy template over live, the reason being that Access databases grow and grow with use and there was no way to compact the database from within SQL.
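    The copy step itself is trivial - something like this in the batch file (the file names are just placeholders for the template/live pair):

    ```bat
    REM Overwrite the live database with the clean, empty template so the
    REM MDB never bloats from repeated transfers.
    copy /Y Template.mdb Live.mdb
    ```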

    Secondly Access doesn't mind multiple connections that read its database but it hates multiple connections that write.  This doesn't always manifest itself but believe me it is true.

    The standard SQL Server DTS wizard for downloading to an Access database will generate multiple connections for SQL Server and Access. Go through the DTS package and make sure there is one connection each.

    My experience was that run locally the original SQL to Access transfer would run like a rocket.  Run remotely sometimes it would work, sometimes it would run dog slow, other times it would simply freeze solid.

    Taking the steps that I have described here meant that, whilst not as fast as the default when everything was working fine, it was at least reliable.

    All the user had to do once the database was produced was run a single macro that ran all the queries that add indices to the Access tables.

     

  • Robert J Marmion

    SSC Enthusiast

    Points: 157

    My big question is: why are you downloading data to Access at all? Why not just connect the Access front end directly to the SQL data? It works very nicely, and can be done for Excel as well. This avoids the whole issue of building mega-line databases in Access, dealing with rebuilding indices, Jet, and duplicate databases (in Access and in Excel). If you need to pre-crunch the data, just do it on the servers in the server farm, store the crunched data in a separate SQL database there, and then connect it to the Access or Excel front end.

    -Rob Marmion

  • swjohnson

    Hall of Fame

    Points: 3254

    The data is precrunched on our SQL server as that is the best place to do it--much quicker on a quad processor!

    Locking and contention issues between Access and SQL are the biggest thing, and especially with the latency of the network involved we really don't want a connection or linked table between the two systems. We tried that, and we often caused deadlocks on SQL Server.

    The other major reason that we pull the data down to Access is to reduce network traffic. At any given time I have about 20 projects in reporting. Imagine a linked connection from 20 or more different Access databases (some projects have enough reports that we break them into groups and process them on separate machines) to SQL, and the amount of network traffic that would generate, with each one pulling down just the numbers it needed for each report. The traffic over the WAN or even the internal network is rather burdensome. Can't have that, as everyone else wants to be able to work too!

    Finally, we often prepare tens of thousands of reports and do not wish to be dependent upon the network if something should happen to it while we are processing the physical reports. If we download to an Access DB, it can process on the local machine, and if the network goes down or becomes slow, our reports keep processing and we are able to meet our quick turnaround deadlines.

    Since we can't get the connection over the VPN to work very well, we will probably create a DTS process that takes a template Access database stored locally, exports the crunched numbers to it, and then FTPs the database to our subnet so we can move it wherever we want.
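    A rough sketch of that process as a batch file, assuming a DTS package saved on the server and an FTP script file (the server name, package name, and file names are all hypothetical):

    ```bat
    REM Run the saved DTS package that fills the local template copy with
    REM the crunched numbers (/E = trusted connection, /N = package name).
    dtsrun /S CORPSQL01 /E /N ExportCrunchedToAccess

    REM Push the finished MDB to the other site; upload.txt holds the FTP
    REM commands (open, user, binary, put, bye).
    ftp -s:upload.txt
    ```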

    Thanks!

    SJ

  • Royce Blanks

    Old Hand

    Points: 366

    How about bringing the data down to your local machine using bcp, instead of trying to go directly into Access? After you have it local, then import it into Access. Going directly into Access, there may be other contention also slowing things down. How long does it take to copy a flat file with 30,000 rows in it to your local machine? It certainly shouldn't take an hour.
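    A basic bcp export would look something like this (the server, database, and table names are just examples):

    ```bat
    REM Dump the table to a comma-delimited, character-mode flat file over a
    REM trusted connection (-T); import the file into Access locally afterwards.
    bcp SurveyDB.dbo.Results out results.txt -c -t, -S CORPSQL01 -T
    ```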

  • swjohnson

    Hall of Fame

    Points: 3254

    Interesting ... I wonder if BCP is unicode compliant? One of the tables does have comments that can be in something other than English.

    SJ

  • David.Poole

    SSC Guru

    Points: 75395

    The idea of allowing Access to act as the front end to a SQL Server database gives me the horrors.

    Firstly, Access causes horrendous locking and contention issues.

    Secondly, Access is a very powerful tool. Personally, I am not happy having something this powerful where I can't see it. It would be like generically installing EM!

    On one of my installations I copy SQL data to Access and then push the Access database to a completely separate server.  I do this because at no time do I want people to have generic access to the SQL box.

  • iLoveSQL

    SSCrazy

    Points: 2888

    Yes, BCP is unicode compliant and works great, and I agree with David that having Access act as the front end is not such a good idea if you have an intensive DB environment.
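    For the non-English comments, bcp's -w switch is the one to use - something like this (the names are placeholders):

    ```bat
    REM -w exports in Unicode character format (UCS-2), so non-English text
    REM survives; -c would convert to the client's code page instead.
    bcp SurveyDB.dbo.Comments out comments.txt -w -S CORPSQL01 -T
    ```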

    If I were you, and you want a reliable solution, I might set up replication to a reporting server in your local environment and do the reporting from there. Replication through a VPN works really well.

    My second suggestion would be to set up a DTS package/job to export to Access on a local drive (or at least in the same facility) and copy the file nightly or something.

    Hope this helps.
