Which brings us to how the malware attacks. The typical anatomy of an attack looks something like this:
- Gain Access to the System
- Escalate Privileges (if necessary)
- Carry Out the Payload
- Cover Tracks
But what about the web site and the web filtering software used by the organization? If the site hasn't been categorized yet, it really depends on how the web filtering software is configured to handle uncategorized sites (if such an option exists). Some web filtering products have heuristic engines which try to analyze the content to determine whether or not it's objectionable. Some engines can only scan words; others also have the capability to look at images. The engine in question generates a score, and depending on the score, the page does or does not get displayed. (I'm greatly simplifying the process, but you get the idea.) So if you're building a page that hosts said malware, you ensure it says all the right things to look legitimate for business. In fact, it may very well be a copy of another business page, because the only thing you're interested in is deploying the malware.

If the site has been categorized, there have still been known exploits of well-known organizations, such as educational facilities and even Yahoo! in recent days. That means the filtering vendors are playing a catch-up game before the individual page is categorized. So in other words, getting the malware deployed typically isn't the problem.
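As a toy illustration of the kind of heuristic scoring described above, here's a minimal sketch. The keyword lists, weights, and threshold are all invented for the example; real filtering engines are far more sophisticated.

```python
# Toy content-scoring heuristic in the spirit of a web filter's engine.
# The scoring terms, weights, and threshold are made up for illustration.
OBJECTIONABLE = {"warez": 5, "crack": 4, "keygen": 4}
BUSINESS = {"invoice": -2, "quarterly": -2, "partnership": -1}

def score_page(text: str) -> int:
    """Sum the weights of every scoring term found in the page text."""
    words = text.lower().split()
    return sum(weight
               for term, weight in {**OBJECTIONABLE, **BUSINESS}.items()
               for w in words if w == term)

def is_blocked(text: str, threshold: int = 5) -> bool:
    """Block the page once its score reaches the threshold."""
    return score_page(text) >= threshold

# A page stuffed with legitimate-sounding business copy scores low,
# which is exactly why an attacker copies a real business page.
print(is_blocked("quarterly invoice for our partnership"))  # False
print(is_blocked("download warez crack keygen here"))       # True
```

This is also why the copied-business-page trick works: the score is driven entirely by the visible content, not by what the page links to or serves up.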
Therefore, Cesar concentrates on the malware itself. The pattern it follows looks like this:
- Discover the Database Servers
- Attack and Gain Access
- Scan for Interesting Data
- Extract the Data
- Transmit the Data Back
- Cover Its Tracks (if necessary)
Once the targets are identified, the next step is to attack the servers. Connections which use Windows authentication, like trusted connections to SQL Server, are trivial to piggyback on. Otherwise, the malware might have to resort to brute force. Brute force, in and of itself, can be noisy (depending on whether or not you are auditing failed login attempts). Once it gets in, it can check replication settings, linked servers, etc., to locate further targets, which adds to the discovery process. It will also need to scan for interesting information, and this usually means looking at metadata for table and column names. Once something of interest is found, it's all about extracting the data.
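To make the metadata-scanning step concrete, here's a minimal sketch using Python's built-in sqlite3 module. The keyword list is invented for the example; against SQL Server the equivalent would be a query over the system catalog views, but the idea is the same.

```python
import sqlite3

# Column-name keywords the scan hunts for; this list is made up for the example.
INTERESTING = ("ssn", "password", "card", "salary", "account")

def find_interesting_columns(conn: sqlite3.Connection):
    """Walk the schema metadata and flag columns with suggestive names."""
    hits = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in tables:
        # PRAGMA table_info returns one row per column; index 1 is the name.
        for row in conn.execute(f"PRAGMA table_info({table})"):
            column = row[1]
            if any(key in column.lower() for key in INTERESTING):
                hits.append((table, column))
    return hits

# Demo schema: one sensitive column among the noise.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, ssn TEXT)")
conn.execute("CREATE TABLE widgets (id INTEGER, color TEXT)")
print(find_interesting_columns(conn))  # [('employees', 'ssn')]
```

Note that nothing here queries the data itself; the table and column names alone tell the malware where to aim its extraction queries.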
After it has some data, it needs to get it off-site. Again, if you can get a site up where malware can be grabbed, getting back out isn't that difficult, either. Even if an organization is doing egress filtering, they still allow out HTTP and HTTPS. As long as the web site passes the filters, all is well. And the data is in the hands of a malicious individual or organization.
Afterwards, if necessary, the malware can cover its tracks by removing itself. This may be a good idea to make getting samples of the malware more difficult, thereby impeding a security company's ability to generate signatures on said malware.
If it is really this easy, how do you prevent this from happening? Several things make the malware's job more difficult. I've talked about how to get around some of them, but they should still be in place.
- Up-to-date web filtering software
- Firewalls with egress filtering on the perimeter
- Firewalls in front of the database servers controlling access to them
- Network switches (although it is nearly impossible to find an actual hub nowadays, this still needs to be looked at, especially in smaller organizations with old equipment)
- Network configuration on firewalls and switches to block udp/1434 (SQL Server Listener Service)
- Use of network-based Intrusion Detection/Prevention System (NIDS/NIPS, or just IDS/IPS)
- Personal firewalls on systems
- Up-to-date anti-malware software
- Up-to-date on system and application patches
- User running with less than administrator privileges
- Use of Host-based Intrusion Prevention system (HIPS)
- Use IPSEC Policies (Windows) to restrict IP addresses which can connect to the database system
- Use IPSEC Policies to block the SQL Server Listener Service (udp/1434)
- Use IPSEC Policies to encrypt the traffic and to require authentication to make the connection to the database system
- Up-to-date on system patches
- Up-to-date on database system patches
- Use non-standard ports (stay away from tcp/1433 for SQL Server and tcp/3306 for MySQL) - Hampers or prevents discovery
- Users running with minimal permissions - restricts access to data
- Data encryption (SQL Server 2005) on those interesting columns - simply querying the tables won't get sensitive data
- Audit failed login attempts (SQL Server) - "Noise" that may allow for detection of a brute force attempt
- Enforce Password Policies (SQL Server 2005) - Reduces likelihood of success of a brute force attack
- Locking down users by IP, where possible (MySQL) - If the end user doesn't need to access the server from a given host, don't allow connections from it
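Tying a couple of these items together: the "noise" from auditing failed login attempts only helps if something is watching for it. Here's a minimal sketch of that watching; the audit record format and the threshold are invented for the example.

```python
from collections import Counter

# Hypothetical audit records: (source_ip, login, succeeded) tuples standing in
# for entries pulled from the server's failed-login audit log.
AUDIT = [
    ("10.0.0.5", "sa", False), ("10.0.0.5", "sa", False),
    ("10.0.0.5", "sa", False), ("10.0.0.5", "sa", False),
    ("10.0.0.9", "appuser", True),
]

def brute_force_suspects(audit, threshold=3):
    """Flag sources whose failed-login count reaches the (made-up) threshold."""
    failures = Counter(src for src, _, ok in audit if not ok)
    return [src for src, n in failures.items() if n >= threshold]

print(brute_force_suspects(AUDIT))  # ['10.0.0.5']
```

Combined with enforced password policies (which slow the guessing down) this turns the brute-force step from quiet background activity into an alert.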
Technorati Tags: DATABASE | SQL | T-SQL | SQL Server | Microsoft SQL Server | SQL Server 2000 | SQL Server 2005 | MySQL | Security | Database Security | SQL Server Security