Linked servers over the internet

  • paul.knibbs (8/23/2010)


    Not to mention, of course, that this 5-20 MB per customer may well not be spread evenly over the whole day--if you get a whole bunch of them trying to upload at the same time, then you'll run into additional lag issues as all that data tries to flood down your pipe.

    Exactly the problem when you try to size things like this - it's usually not steady demand.

    You may also see where things are not linear - once you hit a tipping point things really slow down quickly.

    Although it's apparent from some of the responses that some have done quite a bit with this, you would be wise to bring someone in for a day or two to see what they say.

    You are well aware of the risks involved, both for you and your clients.

    Very interesting thread to read through.

    Greg E

  • Tom.Thomson (8/23/2010)


    Brandie Tarvin (8/23/2010)


    Add all the information Paul and Tom just gave you to the fact that you do not have a direct, dedicated link to your customers, and suddenly you have a system that will respond slowly at the best of times.

    ...

    I think doing the data volume calculations very carefully would be a good idea - here we are talking about 5 to 20 MB per day per client, but each interaction has a max of 20 kB, so that's 250 to 1000 uses of this new feature each day for each client, which looks a bit high.

    ...

    Not quite: my hard data is that for each order we process, we have 14.8 searches in our form.

    Now assume that this number doubles right off the bat with new clients, the fall rush, and just the fact that it's a cool new feature that current clients will use regardless of whether or not it shows their own inventory (which still uses our bandwidth).

    We're already talking about 100,000 searches daily, all clients combined, with the page tipping the scales at 200 kb for each download... it's already well over 2 GB per day, excluding the current traffic, which busts a T1 in peak hours.

    So assuming we have a real 10 Mbps connection, and maxing out the transfer at 67% of capacity, it means we need ±3 hours per day to transfer all that data, and the whole day lasts approximately 9 hours for 9% of our orders. So that seems acceptable for the first tryout of our new system. What do you guys think?
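
Back-of-the-envelope numbers like these are easy to mis-multiply, so here is a minimal Python sketch of the same arithmetic (an illustration, not from the original posts), using the figures quoted above as inputs. Note how much the answer depends on whether "200 kb" is read as kilobits or kilobytes.

```python
# Sanity-check the thread's bandwidth arithmetic. Inputs are the figures
# quoted above; both readings of "200 kb" are shown because the unit matters.

def transfer_hours(volume_bits: float, link_bps: float, utilization: float) -> float:
    """Hours needed to push volume_bits through a link run at a fraction of link_bps."""
    return volume_bits / (link_bps * utilization) / 3600.0

searches_per_day = 100_000
link_bps = 10_000_000   # a "real" 10 Mbps connection
utilization = 0.67      # maxing out the transfer at 67% of capacity

for label, page_bits in (("200 kilobits/page", 200_000),
                         ("200 kilobytes/page", 1_600_000)):
    daily_bits = searches_per_day * page_bits
    print(f"{label}: {daily_bits / 8e9:.1f} GB/day, "
          f"{transfer_hours(daily_bits, link_bps, utilization):.1f} h to transfer")
```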

  • At some point, you're going to have to test. So, as long as your clients are aware that you're testing the new system and not expecting things to run smoothly, you should definitely give it a go. But make sure to test a full load on the system, and make sure everyone knows this isn't your "We're online perm" solution, just to give yourself a little leeway in case things break.

    Or, as the mysterious "they" like to say: Plan for the worst, hope for the best.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin (8/23/2010)


    At some point, you're going to have to test. So, as long as your clients are aware that you're testing the new system and not expecting things to run smoothly, you should definitely give it a go. But make sure to test a full load on the system, and make sure everyone knows this isn't your "We're online perm" solution, just to give yourself a little leeway in case things break.

    Or, as the mysterious "they" like to say: Plan for the worst, hope for the best.

    Great advice, all of you.

    I'm out of questions for the time being. I should be meeting soon with the powers that be, and we'll discuss the specs and scope further. Maybe I'll come back for seconds.

    Thanks again.

  • Ninja's_RGR'us (8/23/2010)


    So assuming we have a real 10 Mbps connection, and maxing out the transfer at 67% of capacity, it means we need ±3 hours per day to transfer all that data, and the whole day lasts approximately 9 hours for 9% of our orders. So that seems acceptable for the first tryout of our new system. What do you guys think?

    If you can safely say that your peak quarter-hour incoming traffic stays at or below 6,667 kbps and you have 10,000 kbps of inbound capacity, you are reasonably safe from excessive delays caused by missing bandwidth on the connection between you and the internet.

    However, that's just the first hurdle: you now have to look at delays inside the internet. I've seen a 4-mile distance take more than 20 hops and produce a round-trip delay of 0.75 seconds, so strange things happen on the internet. You need to do as Brandie suggested and look at the delays to at least a representative sample of clients using tracert. If any of the results look like they will push you over your 1-second limit for some clients, you have to decide what to do for those particular clients.

    Tom
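
Tom's tracert suggestion is easy to script against a sample of clients. Below is a minimal Python sketch (the hostnames are hypothetical placeholders, not from the thread); it just shells out to the platform's trace tool and prints the hop-by-hop output so you can check it against the 1-second budget by hand.

```python
# Trace the route to a representative sample of client hosts, as suggested
# above. Hostnames are hypothetical placeholders; substitute real clients.
import platform
import subprocess

CLIENT_HOSTS = ["client1.example.com", "client2.example.com"]
TRACE_CMD = "tracert" if platform.system() == "Windows" else "traceroute"

for host in CLIENT_HOSTS:
    print(f"--- {host} ---")
    result = subprocess.run([TRACE_CMD, host], capture_output=True, text=True)
    print(result.stdout)   # inspect hop count and per-hop latency by hand
```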

  • Tom.Thomson (8/24/2010)


    Ninja's_RGR'us (8/23/2010)


    So assuming we have a real 10 Mbps connection, and maxing out the transfer at 67% of capacity, it means we need ±3 hours per day to transfer all that data, and the whole day lasts approximately 9 hours for 9% of our orders. So that seems acceptable for the first tryout of our new system. What do you guys think?

    If you can safely say that your peak quarter-hour incoming traffic stays at or below 6,667 kbps and you have 10,000 kbps of inbound capacity, you are reasonably safe from excessive delays caused by missing bandwidth on the connection between you and the internet.

    However, that's just the first hurdle: you now have to look at delays inside the internet. I've seen a 4-mile distance take more than 20 hops and produce a round-trip delay of 0.75 seconds, so strange things happen on the internet. You need to do as Brandie suggested and look at the delays to at least a representative sample of clients using tracert. If any of the results look like they will push you over your 1-second limit for some clients, you have to decide what to do for those particular clients.

    Duly noted.

    I just showed this thread to our network admin along with the 50,000-foot review of the project, and he looked like he saw a ghost... good times 😀

    I just showed this thread to our network admin along with the 50,000-foot review of the project, and he looked like he saw a ghost... good times 😀

    Probably thinks you'd hold him accountable should there be any performance issues. ;-)

    Next time wear a name tag (Casper) to prepare him.

    You probably have this covered, but you may want to explore some stress test tools ahead of time.

    From the sounds of it, you might be able to do this off hours, so you aren't too disruptive.

    Or adding a bit of load to the current peaks might be enlightening too.

    Greg E

  • Greg Edwards-268690 (8/24/2010)


    I just showed this thread to our network admin along with the 50,000-foot review of the project, and he looked like he saw a ghost... good times 😀

    Probably thinks you'd hold him accountable should there be any performance issues. ;-)

    Next time wear a name tag (Casper) to prepare him.

    You probably have this covered, but you may want to explore some stress test tools ahead of time.

    From the sounds of it, you might be able to do this off hours, so you aren't too disruptive.

    Or adding a bit of load to the current peaks might be enlightening too.

    Greg E

    I don't know of any tools to stress the network; can you recommend one?

    Or even ASP.NET applications, for that matter?

    I used something called Macro Recorder for some light testing a few years ago.

    The free version was enough for me - I didn't need a ton of connections.

    And the 30 or 40 users they had testing, with just a few transactions, quickly showed more work was needed.

    These would be a good starting point. And if someone has an MSDN tool around your place, they might already have something available.

    I don't get involved with a lot of testing, but one thing that really sticks with me is understanding the tipping point. Most of the things I've been involved with seem to work real well, but as you hit the wall, pay careful attention to whether it degrades gracefully, or everything just kind of stops.

    Greg E
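
For what it's worth, a purpose-built tool such as Apache JMeter (free) is the usual answer to the question above, and the MSDN subscription Greg alludes to may already include Visual Studio's load-testing tools. The core idea can also be sketched in a few lines of Python; the URL and counts below are placeholders, and something like this should only ever be pointed at a test page.

```python
# A crude concurrent HTTP load generator: N worker threads hammer one page
# and we report latency percentiles. A sketch of the idea only; real tools
# (e.g. Apache JMeter) add ramp-up, think time, and proper reporting.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://test.example.com/search?item=12345"   # hypothetical test endpoint
WORKERS = 40       # simulated concurrent users
REQUESTS = 400     # total requests to issue

def timed_fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    times = sorted(pool.map(timed_fetch, range(REQUESTS)))

print(f"median {times[len(times) // 2]:.3f}s, "
      f"95th pct {times[int(len(times) * 0.95)]:.3f}s, worst {times[-1]:.3f}s")
```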

  • Greg Edwards-268690 (8/24/2010)


    I used something called Macro Recorder for some light testing a few years ago.

    The free version was enough for me - I didn't need a ton of connections.

    And the 30 or 40 users they had testing, with just a few transactions, quickly showed more work was needed.

    These would be a good starting point. And if someone has an MSDN tool around your place, they might already have something available.

    I don't get involved with a lot of testing, but one thing that really sticks with me is understanding the tipping point. Most of the things I've been involved with seem to work real well, but as you hit the wall, pay careful attention to whether it degrades gracefully, or everything just kind of stops.

    Greg E

    Sounds interesting, care to share some stories?

    When we first got JDEdwards and thought we had all our mods in and were ready to go live, about 50 of us were in the same room doing transactions. Things seemed to be going pretty well, and then we had 2 people inquire on the same item in the Supply Demand screen.

    Everything locked up, and the AS400 pegged the CPU.

    Go Live got pushed out a bit. :w00t:

    Life also gets rather interesting running SQL and SSAS on the same box, especially when you give users a query tool. It is a rather interesting battle between SQL, SSAS, and the OS when memory comes under pressure. Some of the limits seem to be just 'suggestions', and things slow to a crawl when memory gets paged out. It runs real well right up until the memory gets flushed to disk. And then you spend months convincing someone that another 8 GB of RAM is a worthwhile investment.

    IIS - I highly recommend 64-bit over 32-bit on your server. On our 32-bit machine, once it gets to around 1.6 GB, it's about done. Usually around 1.7 GB, it seems the lights are on but nobody's home.

    Greg E

  • Greg Edwards-268690 (8/24/2010)


    I don't get involved with a lot of testing, but one thing that really sticks with me is understanding the tipping point. Most of the things I've been involved with seem to work real well, but as you hit the wall, pay careful attention to whether it degrades gracefully, or everything just kind of stops.

    Greg E

    Yes, and remember that the first tipping point you find and fix may have masked other tipping points. Just because you solve one tipping point doesn't mean you are now home free - you have to load test some more to see whether you hit another one (a minimal ramp-test sketch follows this post).

    Obvious tipping points you may hit include:

    server processing power - make sure you have enough to cope with peaks (and it's sometimes better to have extra cores than more powerful individual cores).

    server RAM - you can find that your system suddenly spends all its time thrashing the swap file.

    Internet link capacity - plenty of discussion earlier.

    Internet-internal tipping points - some routers between telcos (or between ISPs and telcos) get overloaded and start discarding frames (or IP packets), which pushes you (in this case, probably the client, not the central server) into lots of retransmissions, which increases the strain downstream (towards the client) of those routers, so some downstream routers start discarding frames, and so on - it won't stop you dead, but it can push your latency way up.

    Server discs (DB logging and/or DB data) - I guess you know all about that one.

    Tom
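
To make the ramp-test idea concrete, here is a variation of the earlier load-generator sketch that steps the simulated user count up and prints latency at each level; the knee where median latency jumps is your current tipping point. The URL is again a hypothetical placeholder, and as Tom says, fixing one bottleneck just means running the ramp again to find the next one.

```python
# Step the simulated user count up and watch where latency stops degrading
# gracefully. URL is a hypothetical test endpoint; never aim this at production.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://test.example.com/search?item=12345"

def timed_fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

for users in (5, 10, 20, 40, 80):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = sorted(pool.map(timed_fetch, range(users * 10)))
    print(f"{users:3d} users: median {times[len(times) // 2]:.2f}s, "
          f"worst {times[-1]:.2f}s")
```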
