ForceEncryption only for WAN

  • Hello Forum,

    This is what I understand:

    In SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols -> Properties

    I can check the ForceEncryption box, and then (I think) a user can only connect with SSL, no matter whether he sets his connection string with ssl=required or not.

    But is it possible to teach the server to apply this ForceEncryption setting only to connections that come from the Internet, while connections from the LAN can go without SSL?

    This is what I want to achieve:

    I have a running SQL Server. Users from the local domain (LAN users) should connect without SSL, but users connecting from the Internet should be forced to use SSL.

    My assumption is that connections without SSL are faster. Is this wrong?

    If not: Is it possible to force some users to use SSL while other users are allowed to connect without SSL?


    thanks

    - mh

  • I've queried Microsoft, but it looks like encryption is set at the protocol level, not per IP address. So while you might be able to force encryption for TCP/IP connections from anywhere, which likely covers your WAN, you would have to use a different protocol for other users if you don't want to force encryption.

    My view, however, is that if you have concerns and want to encrypt data, force it for everyone.

  • I agree with Steve's view on this one.  I'd also like to add that SSL encryption is VERY fast and it only encrypts data in motion.  Websites using HTTPS are using SSL encryption on the data, and I've never noticed a performance hit between HTTP and HTTPS.

    If you are concerned about data in motion, then you definitely should have SSL enabled.  This will not protect data at rest, but that is a different problem.
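
    If you want to see which connections are actually encrypted once the setting is on, one quick check (a minimal sketch; it requires VIEW SERVER STATE permission) is:

        -- Shows, per connection, whether the network transport is encrypted.
        -- encrypt_option is TRUE for SSL/TLS-encrypted connections, FALSE otherwise.
        SELECT session_id,
               client_net_address,
               protocol_type,
               encrypt_option
        FROM sys.dm_exec_connections;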

    Fun little anecdote about this:

    https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html

    In 2010, Google switched Gmail over to requiring HTTPS.  They used the exact same hardware they were using previously, and SSL/TLS accounted for less than 1% of their CPU load.  SSL is fast and not that heavy on resources.

    If you have a test SQL instance, test this out.  Turn the option on, run a complex query from your local machine with SSL, and time how long it takes to complete.  Run it 2 or 3 times (or more) until you have a good average baseline (and to account for pulling data from disk vs memory), then repeat with SSL disabled; a rough harness is sketched below.  You want that average baseline because if the query runs in 30 seconds on the first run, 15 on the second, and 20 on the third, those differences are too large to compare against.  You want a few consistent runs at the same performance, plus or minus a second or so (or at least that is what I aim for... it may not be possible depending on your system).  Then ask: do you get a noticeable increase in performance by having SSL disabled, or does your query appear to run in about the same timeframe?
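
    (A sketch of such a harness; note that GO <count> is an SSMS/sqlcmd batch-separator feature, not T-SQL itself, and the query here is just a stand-in for your own:)

        -- Print CPU and elapsed time for each run in the Messages output.
        SET STATISTICS TIME ON;
        GO
        SELECT COUNT(*)               -- stand-in for your complex query
        FROM sys.objects AS o
        CROSS JOIN sys.objects AS o2;
        GO 5                          -- repeat the batch 5 times for a baseline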

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • @steve-2 Jones: Thanks for the valuable info that encryption is set by protocol in SQL Server.

    @Mr. Brian Gale: of course that is a legitimate recommendation, to test with encryption enabled and disabled. My problem is that I don't have a big database with complex queries on my test instance, and I don't know how to export our production DB and transfer it to my test instance, because I have no access to the filesystem on the production system.

    It's the first time that I have touched SQL Server; in the past I worked with different RDBMSs.  On nearly all of those systems there were tools to simply dump the raw data and import that dump into another DB...

    But maybe I can find some complex example databases somewhere on the web - do you have suggestions?

  • Do you have access to the folder where the backups are written on the live system?  If so, restore the live database onto your test box and run some tests that way.

    If you don't have access to where the backups are stored, are you able to run a full backup (I recommend copy only) on the system and dump it to a location you do have access to?
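
    (Something along these lines; a sketch where the database name, paths, and logical file names are placeholders - RESTORE FILELISTONLY against the .bak will tell you the real logical names:)

        -- Copy-only full backup: does not break the backup chain on the live system.
        BACKUP DATABASE MyProdDb
        TO DISK = N'\\SomeShare\Backups\MyProdDb_copyonly.bak'
        WITH COPY_ONLY, COMPRESSION, STATS = 10;

        -- On the test box, restore it, relocating the files.
        RESTORE DATABASE MyProdDb
        FROM DISK = N'C:\Backups\MyProdDb_copyonly.bak'
        WITH MOVE 'MyProdDb'     TO N'C:\Data\MyProdDb.mdf',   -- logical names are placeholders
             MOVE 'MyProdDb_log' TO N'C:\Data\MyProdDb_log.ldf',
             STATS = 10;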

    If you can't do either of those, then things may get more interesting, but hey - you are off the hook for disaster recovery!

    Without access to the backups, you could look at something like SSIS to move the data across, but it is going to be on a table-by-table basis.  I would strongly encourage you to get a backup of the live system restored so you can do real-world testing.

    Alternately, you could grab a publicly available database backup like AdventureWorks or any of these:

    https://github.com/microsoft/sql-server-samples/tree/master/samples/databases

    and then you have a database to work with.  Next, look for some data that you can JOIN and do some calculations on (SUM, COUNT, AVG, etc.), and then sort the data on a non-indexed column.  The reason I recommend all of that is that it makes SQL do some work, so the query will take a little bit of time to complete.  Depending on your server and what you joined, you may still get really good performance, but even if performance is really good, that isn't a big deal.  Turn STATISTICS TIME on and get some baseline runs, then turn on SSL and repeat; something like the sketch below.  I expect the timing is going to be NEARLY the same.  Your connection time is where things might be a tad slower, but even then I doubt that anybody would even notice the difference.
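
    (For example, a sketch assuming the AdventureWorks sample database; the table and column names come from that sample:)

        USE AdventureWorks2019;   -- adjust to whichever AdventureWorks version you restored
        GO
        SET STATISTICS TIME ON;
        GO
        -- Join, aggregate, and sort on an aggregate (non-indexed) expression
        -- to make SQL do some real work.
        SELECT soh.CustomerID,
               COUNT(*)           AS OrderLines,
               SUM(sod.LineTotal) AS TotalSpent,
               AVG(sod.UnitPrice) AS AvgUnitPrice
        FROM Sales.SalesOrderHeader AS soh
        JOIN Sales.SalesOrderDetail AS sod
            ON sod.SalesOrderID = soh.SalesOrderID
        GROUP BY soh.CustomerID
        ORDER BY SUM(sod.LineTotal) DESC;
        GO 3   -- repeat a few times for a baseline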

    If you can get your real database, you have the added benefit of seeing the performance difference between your LIVE system and your test system too.  If it runs in 2 seconds on LIVE but 4 seconds on test, and 4.01 seconds on test with SSL, LIVE will run in roughly 2.005 seconds with SSL.  I say roughly because it is rare for things to be that linear with SQL and computers.  It could be that your test run with SSL is faster than without SSL due to other processes using less CPU/RAM at that moment.  It could be that test with SSL runs slower because the disk presented to it by the SAN is a spinning disk that is shared with some other system.  That is why in my original post I recommended running the query multiple times to get a good baseline.

    Even the first run (and, depending on your configuration, the second run) will be slower than future runs due to SQL needing to build a plan.  Plus, if the table hasn't been queried before, the pages may be on disk instead of in memory, which can change performance.  So I recommend a MINIMUM of 3 runs (1 to get the plan into memory, 2 in case you have "optimize for ad hoc workloads" enabled, and 3 to get the result once things are properly cached), but more is still good.

    When I do testing like that, I run the query until I get fairly consistent results.  If it runs in 2753 ms the first time, 2626 ms the second, 2686 ms the third, 2698 ms the fourth, and 2778 ms the fifth (these are nearly actual values that I pulled while re-running a query on my system; I added 2000 to each of them), I can see that it ranges from roughly 2.62 to 2.75 seconds, with an outlier of 2778 ms on the 5th run.  Since the majority of them are around 2.7, I'd say 2.7 seconds is my run time for a query like that one.

    Now if I check the CPU time, that value in my case is higher, which MAY seem strange.  In my case the ACTUAL line that STATISTICS TIME brought out was:

     SQL Server Execution Times:
    CPU time = 9579 ms, elapsed time = 725 ms.

    So you may be thinking "hold up... CPU time was nearly 10 seconds, but the elapsed time was under 1 second.  How does that make sense?" - and that is due to parallelism.  This query went parallel and broke itself up across all of my cores.  So hypothetically, if it took 10 seconds of CPU time on 10 cores, that is 1 second per core.  In my case, I have 16 cores, so it is taking roughly 600 ms per core.  If I remove the parallelism (see the hint below), the whole query completes in 5 seconds, and my CPU time and elapsed time are nearly the same, with elapsed being slightly higher than CPU (which is expected).
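
    (Removing the parallelism is just a query hint; a sketch, with a stand-in query:)

        -- Force a serial plan so CPU time and elapsed time line up.
        SELECT COUNT(*)               -- stand-in for your own query
        FROM sys.objects AS o
        CROSS JOIN sys.objects AS o2
        OPTION (MAXDOP 1);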

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Hi Brian,

    Many thanks for your detailed response.

    Your post was very instructive to me.

    Thanks for pointing me to the examples on GitHub. I was able to set up Northwind on my test system and create a small SP which loops through all the tables and all the columns in them and sorts on every single column (this way I want to catch both indexed and non-indexed columns) - roughly like the sketch below.
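
    (A hypothetical sketch of such a loop, with column types that are not allowed in ORDER BY skipped:)

        -- Run "SELECT * ... ORDER BY <column>" for every sortable column.
        DECLARE @schema nvarchar(258), @table nvarchar(258), @column nvarchar(258),
                @sql    nvarchar(max);

        DECLARE col_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT QUOTENAME(TABLE_SCHEMA), QUOTENAME(TABLE_NAME), QUOTENAME(COLUMN_NAME)
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE DATA_TYPE NOT IN ('xml', 'text', 'ntext', 'image', 'geography', 'geometry');

        OPEN col_cursor;
        FETCH NEXT FROM col_cursor INTO @schema, @table, @column;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'SELECT * FROM ' + @schema + N'.' + @table
                     + N' ORDER BY ' + @column + N';';
            EXEC sys.sp_executesql @sql;
            FETCH NEXT FROM col_cursor INTO @schema, @table, @column;
        END;
        CLOSE col_cursor;
        DEALLOCATE col_cursor;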

    Running this SP I can see that connections with SSL are a little bit faster than non-SSL, but I begin to wonder whether this isn't micromanagement and a little useless, because the SQL Server faces the Internet directly, so it should force encryption - and that's it.

    What I find much more interesting is that connections with sqlcmd are much faster than when I connect with SSMS.

    Anyway, I think I can tune more efficiently by looking at hardware (e.g. at the moment I have spinning disks), or maybe splitting the database over different physical drives, or moving the tempdb files - see the sketch below (I have to read up on these things first, because I never touched MSSQL before...).
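
    (For reference, relocating tempdb looks roughly like this; a sketch with a placeholder drive/path - the logical names tempdev/templog are the installation defaults, and the move only takes effect after the SQL Server service restarts:)

        -- Point the tempdb files at a new location; applies on the next restart.
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');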

    On the topic of our production database:

    The database is hosted and managed by a hosting provider, and they surely do regular backups. But it's impossible for me to simply ask them to send me the last backup file for testing, because there is a third party, a consulting company, involved in between.

    My idea was to host the production database in our LAN, mainly to get better network latency for our users. But before I open that conflict with the consultants/hoster, I want to run some tests and become familiar with MSSQL.

    After that I can call them and say: "Please send me the backup file, because I want to restore the DB in the LAN".

    I tried many times, with different settings, to create a script of the database and schema in SSMS, but it was never possible to restore from these scripts 🙁

    But now I will have a look at SSIS, as you suggested - thank you.

  • Glad I could help.  I am actually a bit surprised that SSL connections would be faster than non-SSL; I am not sure why that would be, as SSL adds a TINY bit of overhead.  It is a very small bit of overhead, so I didn't think you would really notice it either way.

    As for SQLCMD being faster than SSMS, that is likely just due to the overhead that SSMS adds.  SSMS, being a GUI application, needs to parse and render the data before it presents it (or sometimes while it presents it).  SQLCMD has no fancy GUI, so it can just dump the results to the standard output device.  SSMS can be a bit faster if you change the results to Text instead of the default results to Grid.  The downside to changing this is that results to Grid is easier to read, copy, export, and generally work with (my opinion).

    On the topic of moving your database to on-premises (i.e. your LAN), that will add a lot more overhead for you and your team than having it cloud hosted.  It does come with benefits though.  Right now, it sounds like if you need to do a restore (disaster recovery, for example), you are relying on the host to have good, valid backups.  You have no way to ensure that the backups are valid; you are just hoping that, for the backups they are making, they are also doing test restores.  That is something you will have to do yourself if you move it on-premises.  After all, a backup is only "known valid" AFTER you do a restore - though a quick sanity check like the one below is still worth running.
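
    (A sketch with a placeholder path; this confirms the backup file is complete and readable, but it is not a substitute for an actual test restore:)

        -- Verify the backup is readable; validates page checksums if the
        -- backup was taken WITH CHECKSUM.
        RESTORE VERIFYONLY
        FROM DISK = N'C:\Backups\MyProdDb_copyonly.bak'
        WITH CHECKSUM;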

    The other thing about on-premises vs. cloud is that you get the fun of correcting disk space issues and hardware-level stuff that causes performance problems, such as spinning disk vs. SSD, and of making sure that the databases are configured correctly for your environment (tempdb on its own physical disk, data and log files on different physical disks, backups on yet another physical disk).  Moving from cloud to on-premises involves a lot of work and analysis to determine if it is the correct approach.  Before you go about moving anything from the cloud to on-premises, I would look at the reasoning behind it.  If it is due to the cloud vendor being poor (poor support, poor performance, etc.), it may be worthwhile to investigate other cloud vendors.  If it is due to cost, you may or may not save any money by having things on-premises, and it may even end up costing more.

    If it is entirely for network latency, you should notice a difference pretty quickly, BUT I would also want to set up a test to prove that it will give a big enough performance boost.  Step 1 for that (for me) would be getting a copy of the production data onto the test system so I can do some timing analysis between having it cloud hosted vs on-premises.  The reason I say to do that first is that if the performance problems are NOT due to the network (in most cases you are only pulling or pushing a few KB of data at a time) but are due to complex queries, you may spend a lot of time getting this migrated only to find the problem still exists.

    Now with SSIS, it is good at pulling the DATA across, but it won't help as much with the objects.  So, in general, you won't get tables, stored procedures, views, etc. created by using SSIS.  I would script those out with SSMS to create the objects, then use SSIS to move the data.  Depending on how large the data is, I would try to find a downtime window to do this data copy.  You don't want your SSIS package to lock some critical table, or prod performance to tank, while you are doing the copy.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Oh - haha... a typo.

    Of course I meant that NON-SSL connections are a little bit faster than SSL connections.
