Detection and Tripwires
Written by Michael Shinn
Monday, 09 May 2011 12:59
Recently we had a customer ask a great question: can the WAF be configured to inspect requests for attacks only if the requested file exists? In other words, to only look at an action if the URL is valid. The WAF can be configured to do this, and this article explains how. But before you do it, I'd like to take a moment to discuss why I recommend against this approach.
First, let me be clear: it's your system, so if this approach (only looking for attacks in specific cases) is acceptable to you, please do it. It's your risk, and far be it from me to tell anyone not to do something with their property. You can do this now, and it's pretty simple really: just define a set of rules for the filenames that exist on your system, and if a file doesn't exist, don't do anything.
As a security practitioner I don't recommend you do this. From a security perspective, you lose something, and I think that something can be pretty significant.
The biggest problem we have in cyber security is that we only know what we know. It turns out we know a lot, but we can't know everything. The bad guys are smart, they work just as hard as the good guys to figure out new and exciting ways to break into systems, and because of that you can't always know if you're going to stop every possible attack. What's more important to remember is that they have access to the same software we all do: the same security products, the same web applications and so on. So they know as much as we do. Because of that, it's a good idea to try to detect as many attacks as possible, even ones you aren't vulnerable to. Now you may ask, why should I detect something if I know I'm not vulnerable to it?
Great question! Which brings us to the title of this article: Detection and Tripwires. If you know someone is attacking you, then you know something about them: they are malicious, and the next thing they do is probably going to be malicious as well. We can do something with that information. We can record the source (to maybe block it in the future), we can analyze what they did to improve our security (this method may not work on this application, but it might work on that one), and most importantly, we can prejudge any future actions they take, assume those will be malicious too, and block them now and possibly in the future. Blocking people in the future is why we put people in jail and prison. We may want to rehabilitate them, but we also assume they will commit the same or more crimes in the future, and we want to prevent them from doing that.
This is a tripwire: something to detect the bad guys before they can do really bad things, so we know they are coming and can stop them before they do any real damage. We use this same method in physical security, personnel security (this person has a history of drug abuse, maybe they shouldn't work in a pharmacy), and in spam protection (this IP address is a known source of spam). It works reasonably well, and it stops the things we don't know about yet.
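To make the tripwire idea concrete, here is a minimal sketch of the pattern in stock ModSecurity. This is illustrative only, not part of the ASL rule set: the directory traversal check and the one-hour window are arbitrary examples I made up, and it assumes SecDataDir is configured so the IP collection persists between requests.

# Load (or create) a collection keyed by the client IP address
SecAction "phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"
# Example detection rule: flag the source for an hour when an attack is seen
SecRule ARGS "@contains ../" "phase:2,log,deny,setvar:ip.malicious=1,expirevar:ip.malicious=3600"
# Tripwire: block all further requests from a flagged source
SecRule IP:malicious "@eq 1" "phase:1,log,deny,status:403"

The first rule loads a per-client collection, the second flags the source when it detects an attack, and the third blocks everything from a flagged source for the next hour, whether or not those later requests match any signature.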
If you stop the bad guys far enough out, before they can get to the good stuff, then you can protect the good stuff better. For example, let's say you own a store and you have a lock on your door. Along comes a burglar; they try to get through your lock but they fail (it's a good lock, and maybe they're a bad lock picker). Should you let this person into your store if they knock nicely? Of course not. You know this person intends to steal from you, so you don't let them in; you report them to the police, and hopefully they spend some time locked up so they can't come back with better lock picking tools to try again tomorrow.
If you ignore attacks simply because you aren't vulnerable to them, then you are also ignoring the fact that the source is malicious. It's basically pretending that they aren't malicious, and in reality it's worse than that: now you don't know that they are malicious, because you are blind to their malicious behavior. Which means the attacker has a higher probability of succeeding against your system should you be vulnerable to something so new that nothing detects the attack yet.
With all that said, here is how to do it. For example, if you have a PHP file called "index.php", add that filename to protect.txt. Do not add the full path.
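For illustration, protect.txt might look like this (these entries are hypothetical; list the script names that actually exist on your system, one basename per line):

index.php
login.php
contact.php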
Then add this rule to 00_asl_a_nodetect.conf. It allows any request whose script basename is not listed in protect.txt to pass through with the rule engine switched off, so only the files you list are inspected:
SecRule SCRIPT_BASENAME "!@pmFromFile protect.txt" "phase:2,nolog,noauditlog,allow,ctl:ruleEngine=Off"
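Note that @pmFromFile resolves a relative path like protect.txt against the directory of the configuration file containing the rule, so keep protect.txt next to 00_asl_a_nodetect.conf (or use an absolute path).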
Or, if you want to use full paths (these must be relative to the website, not to the file system), use this rule instead:
SecRule REQUEST_URI "!@pmFromFile protect.txt" "phase:2,nolog,noauditlog,allow,ctl:ruleEngine=Off"
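With this variant, protect.txt would contain website-relative paths instead of bare filenames, for example (again, hypothetical entries):

/index.php
/blog/index.php
/shop/cart.php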
Again, I do not recommend you do this. Your system will be exposed to a lot more attacks, you will not detect them, and consequently you are a lot less likely to stop an attacker who is trying to find a hole in your system. There is no security advantage to inspecting only requests for applications you have installed on your system, compared with detecting an attacker before they cause harm. It could be argued that the system would have less work to do and would therefore gain some performance. That may be true, but I suspect the performance improvement would be minuscule in most cases, and even if it were significant, ModSecurity is extremely fast and lightweight, and hardware is cheap. Recovering from an attack can be expensive, time consuming and emotionally draining. I don't wish the latter on anyone.
My advice: detect the attacks, all of them, even if you aren't running that software and even if you aren't vulnerable to the attack. But if this approach brings you joy, then don't let stubborn old me stand in your way. It's your system, and I have no monopoly on the truth. There certainly could be cases where this is just the ticket.
My two cents: detect the attacks, all of them, and block 'em too. :-)