I have been writing my thesis and am trying to come up with some a priori reasons as to why different vendor patch-release strategies will have different effects.
The bit of research I have just cooked up seems to indicate that, for software with a large community of users likely to get involved in testing patches, it makes more sense to release a detailed advisory and patch as soon as possible, rather than keeping it quiet and releasing a patch only when it is ready. This is still a very early version and is changing rapidly; please treat it as such.
I don't want to flood things with large images, so click on the graphs for a larger version.
[Graph: patch quality over time]
This first graph depicts the difference between the two methods of releasing a patch. The "normal" method of releasing patches for responsibly disclosed vulnerabilities is in purple. The quality of the patch increases at a linear rate while it is developed internally; once it is released to the community, the quality increases exponentially as more people get involved in testing and improving it.
The quality of a patch released publicly as soon as possible after the vulnerability announcement increases at the same exponential rate (one of the assumptions), but without the initial linear phase. Thus, the patch quality should improve sooner.
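These two quality curves can be sketched as follows; the release day, rates, and the 0-to-1 quality scale are all made-up illustrative values, not measured data:

```python
import math

RELEASE = 5.0    # day the delayed ("normal") patch ships
LIN_RATE = 0.05  # internal, linear quality improvement per day
GROWTH = 0.4     # community-driven exponential improvement rate

def quality_delayed(t):
    # Linear internal improvement until release, then exponential
    # community-driven improvement from the quality reached so far.
    if t < RELEASE:
        return LIN_RATE * t
    base = LIN_RATE * RELEASE
    return min(1.0, base * math.exp(GROWTH * (t - RELEASE)))

def quality_immediate(t):
    # Released at once: the community's exponential improvement starts
    # from day zero, at the same rate (one of the assumptions above).
    return min(1.0, LIN_RATE * math.exp(GROWTH * t))
```

With these constants the immediately released patch is ahead at every point after day zero, which is the claim the graph makes.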
[Graph: risk from deploying the patch over time]
This graph is just the inverse of the one above. It depicts the risk an entity would face if they were to deploy the patch, and is needed to calculate the overall risk later.
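Assuming quality is normalised to a 0-to-1 scale, the inversion is just the complement:

```python
def deployment_risk(quality):
    # The risk of deploying a patch of a given quality, taken here
    # as the simple complement of a quality value in [0, 1].
    return 1.0 - quality
```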
[Graph: number of vulnerable machines over time]
This is an idealised depiction of the number of vulnerable machines on the internet. With "normal" disclosure, in purple, there is an initial plateau while the patch hasn't been released and thus no one can deploy it. After release, the number of vulnerable machines decreases exponentially as more people find out about the patch or complete their testing. This decrease is usually far slower than the rate of patch quality increase, so this is not the same curve as the threat from a faulty patch.
The orange line represents a patch released immediately; in this case patching proceeds at the same rate (another of the assumptions). While both lines tend towards zero, in the real world they would probably taper off as the die-hard non-patchers (usually ignorant users) make their presence felt. However, this shouldn't change the final findings.
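A minimal sketch of the two remediation curves, with a hypothetical population and a decay rate chosen to be slower than the quality-growth rate above:

```python
import math

N0 = 100_000   # hypothetical initial vulnerable population
RELEASE = 5.0  # day the "normal" (delayed) patch ships
DECAY = 0.2    # remediation rate, slower than patch-quality growth

def vulnerable_delayed(t):
    # Plateau while no patch exists, then exponential decrease as
    # administrators learn of the patch or finish testing it.
    if t < RELEASE:
        return N0
    return N0 * math.exp(-DECAY * (t - RELEASE))

def vulnerable_immediate(t):
    # Patch available from day zero, so remediation starts at once.
    return N0 * math.exp(-DECAY * t)
```

The orange (immediate) curve is simply the purple one shifted left by the release delay, which is what "patching would continue at the same rate" amounts to.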
[Graph: threat growth over time]
This graph depicts the growth of potential malicious threats. With normal patch release, the advisory and patch are usually released at the same time, and even if they aren't, a patch can be reverse engineered quite easily. Thus, threats start increasing exponentially from that point as more ha><0rs find out about the vulnerability and better exploits and attack tools are made and distributed. The threat is not zero before this, though, as skilled attackers could already know of the vulnerability.
In the case of non-responsibly disclosed vulnerabilities, or when the vendor releases the details straight away, threats can start preparing immediately.
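As a sketch, with a made-up baseline for the skilled attackers who already know of the flaw:

```python
import math

BASELINE = 1.0  # skilled attackers who already know of the flaw
GROWTH = 0.5    # threat growth rate once details are public

def threat(t, details_public_at):
    # Constant low-level threat until the details become public (via
    # the advisory, or by reverse engineering the patch), then
    # exponential growth as exploits and attack tools spread.
    if t < details_public_at:
        return BASELINE
    return BASELINE * math.exp(GROWTH * (t - details_public_at))
```

With responsible disclosure `details_public_at` is the patch-release date; with immediate disclosure (or irresponsible disclosure) it is zero.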
[Graph: risk over time]
Risk is defined as the likelihood of a threat exploiting a vulnerability, i.e. a threat being realised against a vulnerability. The actual consequences or impact of exploitation are only relevant when looking at a single organisation, and can be ignored here. Thus, risk = threat x vulnerability. So, if we add the two threats together and multiply by the vulnerability, we can see that the immediate patch release (orange) initially carries a higher risk, but that risk is reduced sooner. Delayed patch release (purple) carries less risk initially, but prolongs it. In addition, the total risk of immediate patch release is less than that of delayed patch release: although the end of the purple curve is not shown, it will decrease at the same rate the orange curve did, leaving less area under the orange curve than under the purple.
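Combining the curves above gives a sketch of this calculation. All constants are illustrative, and the threat is capped at an arbitrary ceiling so that the risk curve eventually falls (an assumption; an unbounded exponential threat would make the total risk diverge):

```python
import math

BASELINE = 1.0  # pre-disclosure threat from skilled attackers
GROWTH = 0.6    # threat growth rate once details are public
CAP = 20.0      # threat ceiling: the attacker population is finite
DECAY = 0.3     # remediation rate, slower than threat growth

def risk(t, release):
    # risk = threat x vulnerability. Before the patch/advisory ships,
    # only the baseline threat applies and everyone is vulnerable;
    # afterwards the threat grows (up to the cap) while the vulnerable
    # population decays.
    if t < release:
        return BASELINE
    dt = t - release
    threat = min(CAP, BASELINE * math.exp(GROWTH * dt))
    vulnerability = math.exp(-DECAY * dt)
    return threat * vulnerability

def total_risk(release, horizon=40.0, step=0.01):
    # Area under the risk curve, via a simple Riemann sum.
    return sum(risk(i * step, release) * step
               for i in range(int(horizon / step)))
```

Under these assumptions, `risk(t, release=0.0)` exceeds `risk(t, release=5.0)` early on, while `total_risk(0.0)` comes out smaller than `total_risk(5.0)`, matching the surface-area argument.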
In conclusion, this would seem to indicate that releasing a patch as soon as possible after a vulnerability is announced is the best approach for a vendor, unless they can improve the quality of a patch internally better than the community could. The exponential patch-quality increase assumes that as more people get involved, more configurations and problems are detected. This is not guaranteed, however: new testers could be running very similar configurations, providing little new information compared to what a planned testing schedule could. Even so, if a planned schedule can initially increase the quality of a patch at a linear rate faster than the community's, it doesn't alter the final risk graph substantially, as the rate of change for the community that has been working on the patch longer will be higher and will soon overtake any linear quality improvements. A significant internal increase in patch quality could lower the purple risk curve enough to give near-equal risk, but it is unlikely that this would be possible.
UPDATE: Ben Nagy of eEye has mailed me some useful stuff, pointing out that my curves are mostly far too general and in some places just wrong:
1. Your decays don't look exponential? If you've just inverted the graph I don't know if that's what you want? My models for Machines Fixed / Time and Interest In Vuln(x) / Time use N(t)=N(0)*exp(-[decayconstant]*t), which is the classical half-life model. You might note that Eschelbeck (the Qualys guy) has verified that real-life remediation in companies follows this pattern in his "Laws of Vulnerabilities".
2. Your threat growth looks like simple exponential growth, which assumes n(h4><0rZ) is infinite. You could also look at sigmoidal functions; there are a wide range to choose from, and they're quite common in epidemiology and population growth. A sigmoid looks like exponential growth at first, but it slows down as the carrying capacity of the population is approached (e.g. with worms, when almost all the machines are infected).
3. I also plan to multiply my graphs for threat and vulnerability to derive risk, and my intuitive vision of the risk / time graph is broadly similar, so it's useful to see someone else with the same ideas. :)
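Ben's two suggested models can be sketched as follows; the constants are illustrative only:

```python
import math

def half_life_decay(n0, decay_constant, t):
    # Classical half-life model: N(t) = N(0) * exp(-k * t).
    return n0 * math.exp(-decay_constant * t)

def logistic_threat(t, capacity=10_000.0, n0=10.0, growth=1.0):
    # Sigmoid (logistic) growth: near-exponential at first, then
    # levelling off as the carrying capacity is approached.
    return capacity / (1.0 + ((capacity - n0) / n0) * math.exp(-growth * t))
```

Swapping the logistic curve in for the exponential threat above would bound the risk curves without changing the relative ordering of the two release strategies.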