SQLServerCentral is supported by Red Gate Software Ltd.

K. Brian Kelley - Databases, Infrastructure, and Security

IT Security, MySQL, Perl, SQL Server, and Windows technologies.

Whitepaper on Malware to Attack Databases

Cesar Cerrudo of Argeniss Information Security has put out a new whitepaper (.pdf format), Data0: Next generation malware for stealing databases, describing how malware could be crafted to steal information out of databases. For the most part it stays at a high level, though Cesar does give a few example queries (for SQL Server), the appropriate API calls to perform certain operations, and so on, which delve a bit more into the technical side; even these are fairly straightforward. To demonstrate what he talks about in the whitepaper, he built a simple proof of concept (PoC), and based on what's in the whitepaper (and what is generally accepted as possible), nothing in it is outlandish or hard to do. For those worried about that PoC being out in the wild: Cesar states in the whitepaper that he's not going to release it for public consumption because he knows it would be used for evil.

Which brings us to how the malware attacks. The typical anatomy of an attack looks something like this:
  • Discovery
  • Exploitation
  • Escalate Privileges (if necessary)
  • Cover Tracks
Since we're dealing with malware, the attack methods are well known. Keeping malware out of the corporate environment is difficult, especially because most techniques for detecting it, such as antivirus, are signature based. When users run as local administrators, all it takes is one person clicking a link in an email that sends them to a website exploiting a vulnerability in Internet Explorer, Firefox, Microsoft Office, or the like, and the malware gets downloaded and installed. If the malware is new, there isn't a signature for it, so it will likely pass through the scans.

But what about the web site and the web filtering software used by the organization? If the site hasn't been categorized yet, it depends on how the web filtering software is configured to handle such sites (if such an option exists). Some web filtering products have heuristic engines which try to analyze the content to determine whether or not it's objectionable. Some engines scan words; others can also look at images. The engine generates a score, and depending on that score the page does or does not get displayed. (I'm greatly simplifying the process, but you get the idea.) So if you're building a page to host said malware, you ensure it says all the right things to look legitimate for business. In fact, it may very well be a copy of another business's page, because the only thing you're interested in is deploying the malware. Even if a site has been categorized, that's no guarantee: pages at well-known organizations, such as educational facilities and even Yahoo! in recent days, have been compromised to serve malware, and defenders are playing a catch-up game until the individual page is categorized. In other words, getting the malware deployed typically isn't the problem.
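To make the scoring idea above concrete, here is a toy sketch of a keyword-weighting heuristic of the kind such engines might use. The word list, weights, and threshold are all invented for the example; real products use far more sophisticated analysis, which is exactly why a page copied from a legitimate business sails through.

```python
# Toy keyword-weighting heuristic, loosely modeled on the content-scoring
# approach described above. Words, weights, and the threshold are invented.
SUSPICIOUS_WORDS = {"crack": 5, "keygen": 5, "warez": 4, "exploit": 3}

def score_page(text: str) -> int:
    """Sum the weights of suspicious words found in the page text."""
    return sum(SUSPICIOUS_WORDS.get(w, 0) for w in text.lower().split())

def should_block(text: str, threshold: int = 5) -> bool:
    """Block the page when its score meets or exceeds the threshold."""
    return score_page(text) >= threshold

# A page copied from a legitimate business site scores zero and passes.
print(should_block("quarterly sales report for regional partners"))  # False
print(should_block("download keygen and crack here"))                # True
```

The attacker's countermeasure is equally simple: fill the page with low-scoring business language, which is why heuristics alone can't be relied on.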

Therefore, Cesar concentrates on the malware itself. It follows this pattern:
  • Discover
  • Attack
  • Transmit the Data Back
  • Cover Its Tracks (if necessary)
Discovery is where the malware locates database sources. The two most obvious, and most stealthy, methods are to check the ODBC DSNs on the local system and to inspect existing processes for outbound connections to well-known ports (such as tcp/1433 for SQL Server). If necessary, the malware could get substantially noisier by doing a network scan (again, for well-known ports) or outright sniffing the network (though switched networks make this extremely problematic, and trying to overcome that will be VERY noisy).
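The DSN side of discovery is worth seeing in miniature, because the data sources configured on a workstation name exactly the servers the malware wants. On Windows the system DSNs live in the registry (under HKLM\SOFTWARE\ODBC\ODBC.INI); the sketch below parses the equivalent odbc.ini file format instead so it stays self-contained, and the DSN names and servers are made up.

```python
import configparser

# Sample odbc.ini-style content; the DSNs and server names are invented.
SAMPLE_ODBC_INI = """
[HRDatabase]
Driver = SQL Server
Server = hr-sql01
Database = HR

[SalesDW]
Driver = SQL Server
Server = dw-sql02.corp.example
Database = SalesDW
"""

def discover_servers(ini_text: str) -> list[str]:
    """Return the server referenced by each configured DSN."""
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    return [cp[dsn]["Server"] for dsn in cp.sections() if "Server" in cp[dsn]]

print(discover_servers(SAMPLE_ODBC_INI))  # ['hr-sql01', 'dw-sql02.corp.example']
```

No scanning, no network traffic: the workstation's own configuration hands over a target list, which is why this route is so stealthy.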

Once the targets are identified, the next step is to attack the servers. Connections that use Windows authentication (as connections to SQL Server often do) are trivial to take advantage of; otherwise, the malware might have to resort to brute force. Brute force, in and of itself, can be noisy (depending on whether or not you are auditing failed login attempts). Once it gets in, it can check replication settings, linked servers, etc., to locate further targets, which feeds back into the discovery process. It will also need to scan for interesting information, which usually means looking at metadata for table and column names. Once something of interest is found, it's all about extracting the data.
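The metadata scan is the step most worth illustrating. Against SQL Server this would be a query over INFORMATION_SCHEMA.COLUMNS (or the system catalog views); the sketch below uses sqlite3 purely as a self-contained stand-in, and the table names, column names, and keyword list are invented.

```python
import sqlite3

# Stand-in schema; against SQL Server the same idea is a query over
# INFORMATION_SCHEMA.COLUMNS. Tables and keywords here are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, full_name TEXT, ssn TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, sku TEXT, quantity INTEGER)")

INTERESTING = ("ssn", "password", "card", "salary")

def find_interesting_columns(conn) -> list[tuple[str, str]]:
    """Return (table, column) pairs whose column name suggests sensitive data."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        for row in conn.execute(f"PRAGMA table_info({table})"):
            column = row[1]
            if any(key in column.lower() for key in INTERESTING):
                hits.append((table, column))
    return hits

print(find_interesting_columns(conn))  # [('employees', 'ssn')]
```

A handful of catalog queries like this is cheap, quiet, and narrows a whole database down to the few columns worth extracting.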

After it has some data, it needs to get it off-site. Again, if you can get a site up where the malware can be grabbed, getting the data back out isn't that difficult either. Even if an organization is doing egress filtering, it almost certainly still allows HTTP and HTTPS out. As long as the traffic passes the filters, all is well, and the data ends up in the hands of a malicious individual or organization.
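To see why egress filtering alone doesn't stop this, here is a deliberately inert sketch that only builds a request body and sends nothing: the stolen rows are encoded to look like a routine form submission. The field names and sample data are made up.

```python
import base64
import urllib.parse

# Invented sample of extracted rows; nothing here is transmitted anywhere.
stolen_rows = "jdoe,123-45-6789\nasmith,987-65-4321"

def build_exfil_body(data: str) -> str:
    """Encode the data so the POST body resembles an ordinary form submission."""
    payload = base64.b64encode(data.encode()).decode()
    return urllib.parse.urlencode({"session": payload, "action": "sync"})

body = build_exfil_body(stolen_rows)
print(body.startswith("session="))  # True
```

Riding inside an HTTPS POST to a legitimate-looking site, a body like this is indistinguishable from normal web traffic to most filters.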

Afterwards, if necessary, the malware can cover its tracks by removing itself. This may be a good idea to make getting samples of the malware more difficult, thereby impeding a security company's ability to generate signatures on said malware.

If it is really this easy, how do you prevent this from happening? Several things make the malware's job more difficult. I've talked above about how some of them can be gotten around, but they should still be in place.

Network Layer:
  • Up-to-date web filtering software
  • Firewalls with egress filtering on the perimeter
  • Firewalls in front of the database servers controlling access to them
  • Network switches (although it is nearly impossible to find an actual hub nowadays, this still needs to be looked at, especially in smaller organizations with old equipment)
  • Network configuration on firewalls and switches to block udp/1434 (SQL Server Listener Service)
  • Use of network-based Intrusion Detection/Prevention System (NIDS/NIPS, or just IDS/IPS)
Client Workstation Layer:
  • Personal firewalls on systems
  • Up-to-date anti-malware software
  • Up-to-date on system and application patches
  • User running with less than administrator privileges
  • Use of Host-based Intrusion Prevention system (HIPS)
Server Layer:
  • Use IPSEC Policies (Windows) to restrict IP addresses which can connect to the database system
  • Use IPSEC Policies to block the SQL Server Listener Service (udp/1434)
  • Use IPSEC Policies to encrypt the traffic and to require authentication to make the connection to the database system
  • Up-to-date on system patches
Database System Layer:
  • Up-to-date on database system patches
  • Use non-standard ports (stay away from tcp/1433 for SQL Server and tcp/3306 for MySQL) - Hampers or prevents discovery
  • Users running with minimal permissions - restricts access to data
  • Data encryption (SQL Server 2005) on those interesting columns - simply querying the tables won't get sensitive data
  • Audit failed login attempts (SQL Server) - "Noise" that may allow for detection of a brute force attempt
  • Enforce Password Policies (SQL Server 2005) - Reduces likelihood of success of a brute force attack
  • Locking down users by IP, where possible (MySQL) - If the end user doesn't need to connect from a given address, connections from it are refused
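The auditing bullet above deserves a quick illustration of why that "noise" is valuable: even a naive threshold over the failed-login audit trail surfaces a brute-force attempt. The event format and threshold below are invented for the sketch.

```python
from collections import Counter

# Invented failed-login audit events as (source_ip, login) pairs.
failed_logins = [
    ("10.0.0.5", "sa"), ("10.0.0.5", "sa"), ("10.0.0.5", "sa"),
    ("10.0.0.5", "sa"), ("10.0.0.9", "jdoe"),
]

def flag_brute_force(events, threshold: int = 3) -> list[tuple[str, str]]:
    """Return (source_ip, login) pairs with at least `threshold` failures."""
    counts = Counter(events)
    return [pair for pair, n in counts.items() if n >= threshold]

print(flag_brute_force(failed_logins))  # [('10.0.0.5', 'sa')]
```

The point isn't the code, it's that none of this detection is possible if failed login attempts aren't being audited in the first place.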
Notice I said more difficult, not impossible. A knowledgeable attacker, with a real desire to break into a system, will find a way to do so. The goal is to make it as difficult as possible while still being reasonable in budget and in functionality for the organization. An attacker who isn't specifically going after a certain company (such as what happened to Valve for Half Life 2) will likely move on to a much easier target.

Technorati Tags: SQL Server Security


Posted by Dmitri Mikhailov on 23 November 2007
To laugh or cry: a virus looking for sensitive information by running search-by-pattern queries [column_name like "%string%"] on all character-type columns of each table in a (multi-terabyte) database? Please, just do not go there.

The rest of the whitepaper is just boring.
Posted by K. Brian Kelley on 23 November 2007
There is a lot of data in databases which are in the megabyte or lower gigabyte size range. Not everything has to be a terabyte to be useful.

For instance, consider the HR database for most small to medium enterprises. That's certainly going to be within size ranges for this type of search and you're going to get identity theft type of material as a result.
Posted by Lars Johansson on 25 November 2007
This confirms what I always said: 'creative' database design is the best defense against prying eyes. Do not use standards or referential constraints. I'm not alone; a huge ERP system I try to understand uses these principles, and stealing data from their database is impossible. You need to understand the business and the ERP system, then practice for months before you can extract anything of value. Some call this DB design pattern a mess, I call it 'secured black hole' design, very safe!