HD Moore will be releasing one browser vulnerability a day for the month of July based on his research.
It is interesting to see how horribly buggy browsers can be. I am most interested in seeing which browser gets broken the most and if we can take any defense in depth lessons from this.
Hopefully the vendors got all the details and will release one patch for all of the bugs affecting their browser, because nobody has the resources to run about patching users' browsers several times a month.
You buggin what? You buggin who? You buggin me!
And don't you see it ain't cool.
Here's the abstract:
This document aims to provide a complete discussion on vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement their own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime and generating metrics to measure progress. Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to most effectively and timeously mitigate vulnerabilities. The final chapters discuss the technical aspect of automating parts of the policies described. In particular, the concept of 'defense in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document then goes on to conclude that, in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis.
Here is a brief chapter breakdown:
- Introduction
- Vulnerability and Patch Management - an analysis of vulnerability, malware and threat trends followed up by an analysis of problems with patches.
- Policy Solutions - an in-depth patch management framework for creating an organisational patch management policy.
- Vendor Patch Release Policy - an analysis of how vendors can best manage the risks associated with releasing patches.
- Practical Solutions - an analysis of where technology is needed in patch management and what is currently available.
- Conclusion
The thesis is still being examined, after which I will submit the final version with corrections. This means that if you have any corrections, please send them to me.
The thesis was examined and passed. Irritatingly, one examiner strongly recommended a distinction while the other strongly argued against one (on the basis that the thesis was not scientific enough, but highly practical). At least there was no ambivalence.
UPDATE: added compressed versions.
UPDATE II: added new version with over 32K worth of corrections.
UPDATE III: Updated examination status to passed.
- The new potential threats we hype/demonstrate as security researchers almost never seem to end up being used 'in the wild'.
- Given that Sensepost demonstrated this in 2002, gave code to virii researchers and released a defence PoC (Casper), surely heuristics should pick this up?
What we can really learn from this is that the wrongs of the world are Sensepost's fault and this problem can be resolved with litigation and references to something patriotic. Jokes aside, I also wouldn't mind knowing why Symantec was distributing these signatures on the 14th of June, but we, the public, only got to hear about it on the 16th? I got to hear about it at 8am when cron fired off my rss2sms script, which I thought was seriously cool.
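For the curious, the rss2sms idea is nothing fancy. Here is a minimal sketch of how such a cron-driven script could work, assuming an email-to-SMS gateway; every URL, address and path below is a made-up placeholder, and this is not the script I actually run.

```python
# Hypothetical rss2sms sketch: poll a feed, SMS any new item titles via an
# email-to-SMS gateway. All names below are placeholders for illustration.
import smtplib
from email.mime.text import MIMEText

import feedparser  # third-party: pip install feedparser

FEED_URL = "http://example.com/security-advisories/rss"  # placeholder feed
SMS_GATEWAY = "0821234567@sms.example-gateway.net"        # placeholder email-to-SMS address
SEEN_FILE = "/var/tmp/rss2sms.seen"                       # remembers already-sent items


def load_seen():
    try:
        with open(SEEN_FILE) as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()


def main():
    seen = load_seen()
    feed = feedparser.parse(FEED_URL)
    new = [e for e in feed.entries if e.link not in seen]
    if not new:
        return
    body = "\n".join(e.title for e in new)[:160]  # keep it to roughly one SMS
    msg = MIMEText(body)
    msg["Subject"] = "New advisories"
    msg["From"] = "rss2sms@localhost"
    msg["To"] = SMS_GATEWAY
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
    with open(SEEN_FILE, "a") as f:
        for e in new:
            f.write(e.link + "\n")


if __name__ == "__main__":
    main()
```

Hooked into cron with something like `0 * * * * /usr/bin/python3 /usr/local/bin/rss2sms.py`, it pings your phone within the hour of a new item appearing.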
In my soon-to-be-published paper, I make the point that it is a good idea for vendors to make friends with security researchers in an effort to encourage delayed disclosure (some people call it 'responsible' disclosure).
It is interesting then to see that Microsoft will be throwing a party for security researchers at BlackHat. This, along with their BlueHat efforts, is a very good idea. I look forward to seeing if it pays off given the past (and somewhat current) negative opinion of some security practitioners towards Microsoft. Or, more simply, will it have a material effect on the number of Microsoft 0days?
"Any number that can be created by fetishistically multiplying 2s by each other, and subtracting the occasional 1, will be instantly recognizable to a hacker." - Neal Stephenson, Snow Crash
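For the non-initiated, a throwaway sketch of the kind of number Stephenson means (this is just my illustration of the quote):

```python
# Toy illustration of the Snow Crash quote: numbers built by multiplying 2s
# together (powers of two) and occasionally subtracting 1.
def looks_hackerish(n: int) -> bool:
    """True if n is 2**k or 2**k - 1 for some k >= 1."""
    return n > 0 and ((n & (n - 1)) == 0 or (n & (n + 1)) == 0)

print([n for n in (255, 256, 512, 1000, 1023, 1024, 32767, 65536) if looks_hackerish(n)])
# -> [255, 256, 512, 1023, 1024, 32767, 65536]
```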
My ISSA paper was just accepted as a full research paper. The comments were pretty good too (of course, I am only quoting the good bits):
Reviewer One: "Excellent insight shown, well researched, very relevant topic."
Reviewer Two: "The paper presents an interesting and well-written discussion, which is extensively supported by references to existing literature."
Reviewer one had mostly grammatical corrections, but reviewer two built some strong counter-arguments to some of my points, which is always a good sign of a thoughtful reviewer and of meaty content. I think I can rebut them pretty easily and will add the rebuttals to the paper.
Rhodes is sending down a well-sized phalanx of presenters, and I will be proudly representing my company. I can't wait. I just hope those lazy Sensepost bums contribute something this year, instead of recruiting ;)
If you follow these kinds of things, you will realise that AV technology is getting worse, not better, at protecting you from malware. Case in point: yesterday I downloaded a file from a rather dubious site. I checked it out with ClamAV and McAfee, and both gave it a clean bill of health. On running it and monitoring its progress with Process Explorer, I realised pretty quickly that it was malicious. I submitted it to VirusTotal and it was flagged as malicious by only three scanners (all of them free; none of the 'big vendors' detected it), and at least two looked to be using heuristics to make that call.
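For what it's worth, the first-pass check is trivial to script. A minimal sketch using the clamscan command line is below; the sample path is a placeholder, and this obviously isn't a substitute for the manual Process Explorer work.

```python
# Minimal first-pass check: run a downloaded file past clamscan and flag it for
# manual analysis if the scanner comes back clean. Assumes ClamAV is installed;
# the sample path is a placeholder.
import subprocess
import sys

SAMPLE = "/tmp/suspicious-download.exe"  # placeholder path

result = subprocess.run(
    ["clamscan", "--no-summary", SAMPLE],
    capture_output=True, text=True,
)
# clamscan exit codes: 0 = nothing found, 1 = signature matched, 2 = error
if result.returncode == 1:
    print("ClamAV flagged it:", result.stdout.strip())
elif result.returncode == 0:
    print("Clean bill of health from ClamAV - time for Process Explorer and VirusTotal.")
else:
    print("Scan error:", result.stderr.strip(), file=sys.stderr)
```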
Now it seems the automated classification debate is heating up with Halvar's recent work, and now Microsoft's Anti-Malware team has weighed in (and Halvar has responded).
Noam Eppel of Vivica Information Security Inc. believes the infosec community has failed. Understandably, I take issue with this. Thomas Ptacek has a nice reply which highlights the mistake Noam has made: you can't look at every problem and claim that security has failed if there is an effective defence. You can claim there are a lot of people being dumb and not using the defence, but we knew that already, hence our continued employment.
To provide an analogy, if I said that physical lock security has failed due to the number of thefts reported each year, I would be fiddling the stats. A meaningful stat would be to look at the number of thefts that occurred in situations where the goods had been properly secured.
This is not to say every security problem has been solved, but rather that a claim of "Complete, Unquestionable, and Total" failure is overblown to say the least.
I think there is an interesting correlation between the number of adjectives used in a story title and the quality of an article :)
This whole creationism vs evolution thing is silly. It is mostly a false argument meant to get people all riled up instead of focussing on God. I take the Catholic stance on the matter, advocated by Pope Pius XII in his 1950 encyclical Humani Generis, with a few points:
- Scripture does not contradict evolution. Read.
- Scripture is very nuanced and should not all be interpreted literally. Read.
- Evolution is a very broad list of topics, be specific. Read.
- Evolutionary theory has the scientific upper hand. Always check the claims of creationists at Talk origins.
Many anti-virus vendors take a blacklist approach: a huge list of 'naughty' files is drawn up, and those naughty files are prevented from running on your machine.
The problem with this is that, to build the list, the naughty files first need to be found and analysed; then the list needs to be updated and shipped all over the world. So the easy attack is to make the files difficult to find and analyse. For example, the Hacker Defender project used to sell customised versions of their trojan. That way it would be difficult to find (if used in a targeted attack), and any analysis would only affect that version, keeping the other versions undetected. Another example is the thousands of variants some malware has, e.g. NetSky, MyDoom and Mytob.
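To make the brittleness concrete, here is a toy sketch (nothing like how a real AV engine is actually built) where the 'signature' is just a file hash:

```python
# Toy hash-based blacklist: flipping a single byte in a customised build of the
# same trojan produces a hash the list has never seen. Real engines use far
# richer signatures, but the find-analyse-distribute race is the same.
import hashlib


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


original = b"malicious payload"      # stand-in for the sample the vendor analysed
customised = b"malicious payloae"    # one byte changed for a targeted attack

# The vendor finds and analyses the original, then ships its hash as a "signature".
blacklist = {sha256(original)}

print(sha256(original) in blacklist)    # True  - the known sample is blocked
print(sha256(customised) in blacklist)  # False - the customised build sails past
```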
A much better way of doing this would be to use a whitelist. Making a whitelist of allowed programs requires too much customisation, so a whitelist of behaviours is used instead. Many AV products are pretty bad at this. To test your product's functionality, some ISC handlers have created Spycar, which provides binaries that perform benign versions of naughty behaviours to see if your AV can pick them up. The name is a reference to the EICAR anti-virus test file.
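To give a feel for what a benign 'naughty behaviour' looks like, here is a small Windows-only sketch in the same spirit as Spycar (my own illustration, not Spycar's code): it writes a harmless autostart value to the Run key, which a behaviour-based product should catch, then removes it.

```python
# A benign autostart-registry write of the kind a behaviour-monitoring AV is
# supposed to intercept (the general idea behind Spycar, not its actual code).
# Windows-only; the value is written and then immediately removed.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
VALUE_NAME = "BehaviourTest"  # harmless placeholder entry

key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0, winreg.KEY_SET_VALUE)
try:
    # A behaviour-based product should prompt or block right here.
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, r"C:\Windows\notepad.exe")
    print("Run-key write was allowed - behaviour monitoring did not intervene.")
    winreg.DeleteValue(key, VALUE_NAME)  # clean up after ourselves
finally:
    winreg.CloseKey(key)
```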
While you're at it, check out the free Windows behaviour-based protection tool, WinPooch.
Today we (Deloitte) hosted the ISGA (formerly WhiteHat) meeting, and I was in charge from Deloitte's side. Apart from a few hiccups related to broken amps and late coffee, it went off fairly smoothly. I got the chance to present; my presentation was entitled "Threat Monitoring: Reading Risk the Wrong Way".
It basically provides a justification as to why threat monitoring is important, and how people ignore the "threat" component of the risk equation. A component of threat monitoring is having decent threat models. For this I discussed how security researchers have gotten the vulnerability life cycle wrong, and provided a corrected model based on combining the conclusions of several researchers.
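For the record, the point about the threat component can be made with the usual back-of-the-envelope risk formula (risk = threat x vulnerability x impact); the numbers below are invented for illustration and are not from the talk.

```python
# Back-of-the-envelope illustration of why dropping the threat term distorts
# risk rankings. Scores are invented: 0-1 likelihoods for threat and
# vulnerability, impact in arbitrary units.
issues = {
    # name: (threat, vulnerability, impact)
    "internet-facing IE 0day":      (0.9, 0.8, 7),
    "internal-only legacy service": (0.1, 0.9, 9),
}

for name, (threat, vuln, impact) in issues.items():
    with_threat = threat * vuln * impact
    without_threat = vuln * impact  # what you get if you ignore threat entirely
    print(f"{name:30s} with threat: {with_threat:5.2f}  without: {without_threat:5.2f}")

# Ignoring threat, the legacy service (8.10) outranks the IE 0day (5.60);
# with the threat term in the picture, the ordering flips (5.04 vs 0.81).
```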
The slides can be found here.
I am really enjoying watching how these turn out. The recent 0days have seen unofficial patches, Microsoft entering the threat monitoring game, a significant amount of community effort and all sorts of discussions.
1...2...3...go
A great entry over at the OSVDB blog has an unintentionally good description of the problem with provenance.
http://osvdb.org/blog/?p=116
Wow, it seems Microsoft managed to get their MS06-015 cumulative IE patch rolled out with only a few compatibility problems with older HP, NVIDIA, Siebel and Kerio Firewall products. Pretty good given the non-security ActiveX change they bundled in there.
Oh, they also fixed that security vulnerability that has been actively exploited in the wild since March 23rd. Now, given the lag time in patch deployment (current research suggests 19 days for internal machines), it will be just over a month that attackers have been able to wade through the average Windows box.
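The back-of-the-envelope version, taking 11 April 2006 (that month's Patch Tuesday) as the MS06-015 release date:

```python
# Rough arithmetic behind the "just over a month" claim. The exploitation start
# date comes from the post; 11 April 2006 is taken to be the MS06-015 release
# date (that month's Patch Tuesday), and 19 days is the cited deployment lag.
from datetime import date, timedelta

exploited_since = date(2006, 3, 23)
patch_released = date(2006, 4, 11)
deployment_lag = timedelta(days=19)

protected_from = patch_released + deployment_lag
exposure = protected_from - exploited_since
print(f"Average internal machine exposed for {exposure.days} days")  # 38 days
```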
Can someone tell me why Microsoft decided that the best way to get a patch out as quickly as possible was to bundle a huge, non-security modification into it?