I have been perseverating about biometrics of late. My biggest problem is one everyone seems to know about but vendors are reluctant to address: key revocation. You can replace a token or a password, but you can't (practically) replace your iris or the like. This creates, at least as I perceive it, a dual problem:
- Spoofing attacks are getting more sophisticated. Will we end up in a patch-loop where entities implementing biometrics will need to regularly 'patch' (firmware & hardware replacement) their biometric readers?
- Are biometrics only a short-term authenticator until they can be effectively copied and arbitrary copies made?
Some of these questions were partially answered by Bori Toth, a biometrics specialist at Deloitte UK, who described how key revocation could occur with an iris:
There are ways of simulating password aging and revocation for biometrics as well. This is fairly straightforward with iris recognition as IrisCodes do not have a spatial metric and as such the 256 data bytes can be scrambled into any order [256!=10^507 different permutations just for 1 eye (!)] - this means you can have an IrisCode for your driver's license, one for your taxes and one at the local gym etc, using the same eye yet all codes would be different, changeable, revocable and incompatible (unless of course you retrieve the permutation key). As fingerprint templates do have a spatial metric (i.e. they are a bit like a map), things are not that "simple" but I heard of approaches to achieve similar goals for fingerprinting as well.
Publications by her and her team can be found here. The paper entitled "Biometric liveness detection" is particularly interesting.
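The scrambling idea is simple enough to sketch. Below is a minimal Python illustration, assuming a per-application key drives the byte permutation; the key derivation and permutation mechanics here are my own illustrative choices, not a description of any deployed scheme. Matching still works because a probe code is scrambled with the same key before comparison, and Hamming distance is unchanged when both codes are permuted the same way.

```python
import hashlib
import random

TEMPLATE_LEN = 256  # an IrisCode carries 256 data bytes

def permutation_for(application_key: bytes) -> list[int]:
    """Deterministically pick one of the 256! byte orderings from a key."""
    seed = int.from_bytes(hashlib.sha256(application_key).digest(), "big")
    return random.Random(seed).sample(range(TEMPLATE_LEN), TEMPLATE_LEN)

def scramble(iriscode: bytes, application_key: bytes) -> bytes:
    """Derive a per-application template: same eye, different stored code."""
    assert len(iriscode) == TEMPLATE_LEN
    return bytes(iriscode[i] for i in permutation_for(application_key))

# The gym and the tax office enrol under different keys, so their templates
# are incompatible; revoking a template is just re-enrolment under a new key.
dummy_iriscode = bytes(range(256))
gym = scramble(dummy_iriscode, b"gym-key-v1")
tax = scramble(dummy_iriscode, b"tax-key-v1")
assert gym != tax
```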
Based on the GPL Cube of Potential Doom (or the Spinning Cube of Potential Doom), JP has expanded and built on the idea to produce InetVis. It has some pretty impressive features and may provide some interesting insights into traffic analysis. The basic idea is to use the human brain's ability to detect patterns in visual stimuli to detect patterns in network traffic. Anyone want to build a SGUIL module for it?
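For those who haven't seen the cube idea in action, here is a toy sketch (emphatically not InetVis itself): each packet becomes a point at (source address, destination address, destination port), and the eye does the anomaly detection. The traffic below is synthetic and the matplotlib rendering is just for illustration.

```python
import random
import matplotlib.pyplot as plt

# Background "normal" traffic: random hosts talking to common service ports.
packets = [(random.getrandbits(32), random.getrandbits(32),
            random.choice([25, 53, 80, 110, 443]))
           for _ in range(500)]

# One host sweeping every port on one target: shows up as a tell-tale
# vertical line in the cube that a human spots instantly.
packets += [(0x0A000001, 0xC0A80001, port) for port in range(0, 65536, 512)]

xs, ys, zs = zip(*packets)
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(xs, ys, zs, s=2)
ax.set_xlabel("source IP")
ax.set_ylabel("destination IP")
ax.set_zlabel("destination port")
plt.show()
```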
HD Moore has a detailed analysis of a site exploiting browsers. It is a good read with detail we don't often see, including a breakdown of the kind of revenue these sites pull.
Most interestingly, it provides a nice quotation on why "safe browsing habits" are a load of phooey, something that has always bugged me about Microsoft's oft-cited remediation:
A common myth is that the only way to get hit by a browser exploit is to visit "bad" web sites. The reality is that it only takes a single line of Javascript code to send your browser off to the darker areas of the Internet. Many respectable web sites are inadvertently allowing attackers to target their users. Just last week, the media covered an example where a MySpace banner ad resulted in over one million malware infections. This banner ad redirected the user to a malicious WMF file, that when opened, installed an adware application. A similar attack occurred on the LiveJournal network just a couple months earlier. In both cases, the web site operators were not aware that the attacks were occurring until someone complained.
At the Information Security South Africa conference 2006 I published a paper arguing that our current understanding of the risks associated with monthly patch release cycles is pretty poor. This discussion is pretty important given that entities such as Gartner reckon monthly release cycles will be the new industry standard.
I basically argue that patch schedules work well in the case of delayed (responsible) disclosure, but that in the case of instantaneous (0day) disclosure none of the purported benefits, namely better-quality patches and better deployment scheduling, are accrued. I then move on to some solutions.
I think this is a really important paper and a really important discussion. Of course, I am the author, so I would think that. The paper is available at:
Websense made some interesting discoveries about Google indexing binary files. Building on their initial ideas, HD Moore has put together a malware search engine for finding live malware samples. The initial signature database was provided by Offensive Computing. This is some pretty cool stuff. Here's a snip:
Summary
- Obtain live malware samples through Google searches.
Technical
- Fingerprint an executable, then search for the fingerprint.
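To make the fingerprinting step concrete, here is a rough Python sketch that builds a searchable token from a few PE header fields. The field choice and token format are assumptions for illustration; the actual signatures used by the engine and by Offensive Computing's database are not reproduced here.

```python
import struct

def pe_fingerprint(path: str) -> str:
    """Build a crude search token from a Windows executable's PE headers."""
    with open(path, "rb") as f:
        data = f.read(4096)  # headers live at the start of the file
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    # e_lfanew (offset 0x3C of the DOS header) points at the PE signature.
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # COFF header follows: Machine, NumberOfSections, TimeDateStamp.
    machine, nsections, timestamp = struct.unpack_from("<HHI", data, e_lfanew + 4)
    # Combine a few stable header fields into one searchable string.
    return f"{machine:04x}-{nsections:04x}-{timestamp:08x}"

# print(pe_fingerprint("sample.exe"))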
HD Moore will be releasing one browser vulnerability a day for the month of July, based on his research.
It is interesting to see how horribly buggy browsers can be. I am most interested in seeing which browser gets broken the most and whether we can take any defense in depth lessons from this.
Hopefully the vendors got all the details and will release one patch for all of the bugs affecting their browser, because nobody has the resources to run about patching users' browsers several times a month.
You buggin what? You buggin who? You buggin me!
And don't you see it ain't cool.
Here's the abstract:
This document aims to provide a complete discussion on vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement their own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime and generating metrics to measure progress. Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to most effectively and timeously mitigate vulnerabilities. The final chapters discuss the technical aspects of automating parts of the policies described. In particular, the concept of 'defense in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document then concludes that in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis.
Here is a brief chapter breakdown:
- Introduction
- Vulnerability and Patch Management - an analysis of vulnerability, malware and threat trends followed up by an analysis of problems with patches.
- Policy Solutions - an in-depth patch management framework for creating an organisational patch management policy.
- Vendor Patch Release Policy - an analysis of how vendors can best manage the risks associated with releasing patches.
- Practical Solutions - an analysis of where technology is needed in patch management and what is currently available.
- Conclusion
The thesis is still being examined, after which I will submit the final version with corrections. What this means is that if you have any corrections, please send them to me.
The thesis was examined and passed. Irritatingly, one examiner strongly recommended a distinction while the other strongly argued against one (on the basis that the thesis was not scientific enough, but highly practical). At least there was no ambivalence.
UPDATE: added compressed versions.
UPDATE II: added new version with over 32K worth of corrections.
UPDATE III: Updated examination status to passed.