I learned something about DNS this weekend. For some reason I was labouring under the impression that your machine, not the DNS server, did the recursive lookup for you. What I mean by this is that I thought a DNS lookup went something like this:
- Request for lookup of 'domain.com'.
- Request sent to configured DNS server (non-authoritative for 'domain.com').
- Response received including details of nameserver (NS) authoritative for 'domain.com'.
- Request sent to 'domain.com' NS for lookup of 'domain.com'.
- Authoritative response received.
That is sort of how it works, I just had the agent initiating the subsequent requests mixed up. This belief was fine back in the day when most DNS servers were running as open, caching, recursive DNS servers. Nowadays that is considered bad practice, so just sticking my nearest DNS server into /etc/resolv.conf meant it would only resolve addresses it was authoritative for.
After struggling for a few hours to solve this, I phoned Russell, because he knows this stuff. He pointed out that it is the DNS servers that do the recursion, not your machine. So, you have to point your machine at a recursive DNS server that will talk to your IP. Being a smart guy, he happened to know one I could use, off by heart. FREAK!
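The stub-vs-recursive distinction comes down to two header bits: RD (recursion desired) in your query, and RA (recursion available) in the server's response. Below is a minimal sketch, assuming nothing beyond the Python standard library and the RFC 1035 message layout; the function names are my own:

```python
import struct

# Build a minimal DNS query for an A record. The RD (recursion desired)
# bit asks the server to do the recursion for us; an authoritative-only
# server will ignore it and answer only for zones it owns.
def build_query(name, txn_id=0x1234, recursion_desired=True):
    flags = 0x0100 if recursion_desired else 0x0000  # RD is bit 8
    header = struct.pack(">HHHHHH", txn_id, flags, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# Check the RA (recursion available) bit in a response header: if it is
# clear, this server won't recurse for you, and it only belongs in
# /etc/resolv.conf if it is authoritative for everything you ask it.
def recursion_available(response):
    flags = struct.unpack(">H", response[2:4])[0]
    return bool(flags & 0x0080)  # RA is bit 7
```

Sending `build_query("domain.com")` over UDP port 53 and checking `recursion_available` on the reply would have diagnosed my problem in seconds.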
I put this here in the hope that the next geek who googles for it won't waste as much time as I did. I thought that as a moderately capable geek I would just know stuff like this; it's always interesting to see where the holes in your knowledge are.
Courtesy: Security Curve.

I find it hard to swallow that a vendor like Aladdin can write a filesystem driver that filters USB requests to encrypt data on the fly using documented interfaces, that a vendor like CA can write a driver that filters all incoming TCP connections using documented interfaces, and that a vendor like PointSec can write a driver to intercept filesystem calls using documented interfaces; but somehow McAfee can't get it together to grep the filesystem for malware without "going commando" all over Windows Vista in a way that requires them to rewrite the kernel. WTF?!
Nice idea Haroon, nice work Tim (Thor).
Ha! The use of ' or 1=1-- in my title messes with my HTML comments.

Mark had his head screwed on right in the first place when he called the debate a "red herring", and his response shows it. Ironically, it seems it was an attempt to counter FUD from agent-only distributors. It's just a pity common sense has marketing departments to contend with.
After work such as Do Enterprise Management Systems Dream Of Electric Sheep?, and the 'everything-as-an-agent' syndrome security products seem to be going through, I think there is stronger ground to advocate against an agent-only based solution, coupled with the obvious need to be able to push patches to machines which haven't gone through a gating process.
Thanks for the response Mark. CEO blogging++
Below is a copy of an e-mail I sent to the ISC in response to a claim that orphaned open source projects display the dark side of open source:
These dangers are not specific to open source. For example, a copy-paste from the article with the words 'open source' removed still rings true:
"This is the downside of software. What happens to it when the original maintainers tire of it, move on to other things, get hit by the proverbial bus,...?"
You need to evaluate the project team around a software product, whether it is open source or proprietary.
If anything, this is a positive of open source: the original project team went dead, but you still have security patches regularly being released, something unlikely to happen if a proprietary product's company went under.
UPDATE: added 'Below is' to first sentence.
Why go to all the technical trouble of stripping DRM controls from songs, when all you need do is:
- Play the DRM controlled song on a device supporting the DRM.
- Record the analog sound using high-quality audio cables to a non-DRM controlled format (like OGG or MP3).
- Audacity can help with such tasks.
- Distribute (for legal purposes only ;) ).
But wait, they are angry because someone wrote malware, that's all. It seems AV people take blacklisting to heart. What Consumer Reports should have done is written a new packer for existing malware, then we could have bypassed that rule :)
Malware is to an AV product as a test case is to a program. Could their rule not to create malware be why they are so bad at detecting new malware, or could it be because it is a bloody hard problem and they don't want to lose market share by admitting it?
Right, sceptic out. I'm off to write my own self-modifying OS in C.
Today a plane flying from London to somewhere in America was diverted to somewhere else in America by fighter jets. Why? Because of an 'agitated' woman carrying a screwdriver and matches.
Does this seem a bit disproportionate to anyone? I can see the threat-o-meter logic now:

Screwdriver + matches + agitation = Scramble the Jets!

Not to mention the fact that with all the heightened security hoo-haa, she still managed to get a screwdriver onto the plane; it isn't exactly made of plastic, if you get my drift.
So you finally got local administrator access on your Windows machine. The problem is that you're part of a domain with restrictive group policies and you really want to change your wallpaper, but remain part of the domain.
After some rooting around in the Windows registry I found out where the nasty group policy hides. Just nuke everything in these keys, or if you have other intentions, directly modify them here:
HKEY_CURRENT_USER\Software\Policies
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies
HKEY_LOCAL_MACHINE\SOFTWARE\Policies
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies
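If you just want the nuke-everything option, a .reg file can do it in one double-click. This is a sketch: the leading minus before a key name deletes the whole key, so export a backup of each key first if you might want the policies back:

```
Windows Registry Editor Version 5.00

[-HKEY_CURRENT_USER\Software\Policies]
[-HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Policies]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies]
```

Bear in mind the domain will push the policies right back on the next group policy refresh, so expect to repeat this.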
I have been perseverating about biometrics of late. My biggest problem is one everyone seems to know about but vendors are reluctant to address: key revocation. You can replace a token and a password, but you can't (practically) replace your iris or the like. This creates what I perceive as a dual problem:
- Spoofing attacks are getting more sophisticated. Will we end up in a patch-loop where entities implementing biometrics will need to regularly 'patch' (firmware & hardware replacement) their biometric readers?
- Are biometrics only a short-term authenticator until they can be effectively copied and arbitrary copies made?
Some of these questions were partially answered by Bori Toth, a biometrics specialist at Deloitte UK, who described how key revocation could work with an iris:
There are ways of simulating password aging and revocation for biometrics as well. This is fairly straightforward with iris recognition as IrisCodes do not have a spatial metric and as such the 256 data bytes can be scrambled into any order [256!=10^507 different permutations just for 1 eye (!)] - this means you can have an IrisCode for your driver's license, one for your taxes and one at the local gym etc, using the same eye yet all codes would be different, changeable, revocable and incompatible (unless of course you retrieve the permutation key). As fingerprint templates do have a spatial metric (i.e. they are a bit like a map), things are not that "simple" but I heard of approaches to achieve similar goals for fingerprinting as well.
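The scrambling idea in the quote can be sketched in a few lines of Python. This is a toy illustration, not Toth's actual scheme: the function names are made up, and deriving the permutation from an application-specific key via a seeded shuffle is my own assumption:

```python
import hashlib
import random

# Cancellable-biometrics sketch: scramble the 256 template bytes with a
# per-application permutation derived from a secret key. Revoking a
# compromised code just means issuing a new key; the underlying iris
# template never changes. (Per the quote, IrisCodes have no spatial
# metric, so any byte order is still a valid template.)
def derive_code(template, app_key):
    assert len(template) == 256
    seed = hashlib.sha256(app_key.encode()).digest()
    order = list(range(256))
    random.Random(seed).shuffle(order)  # key-determined permutation
    return bytes(template[i] for i in order)

# With the key you can recover the original template; without it the
# permuted codes are mutually incompatible, as the quote describes.
def invert(code, app_key):
    seed = hashlib.sha256(app_key.encode()).digest()
    order = list(range(256))
    random.Random(seed).shuffle(order)
    original = bytearray(256)
    for pos, i in enumerate(order):
        original[i] = code[pos]
    return bytes(original)
```

So the same eye yields one code for your driver's licence and a different one for the gym, and leaking one reveals nothing usable at the other, unless the permutation key leaks too.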
Publications by her and her team can be found here. The paper entitled "Biometric liveness detection" is particularly interesting.
Based on the GPL Cube of Potential Doom (or the Spinning Cube of Potential Doom), JP has expanded and built on the idea to produce InetVis. It has some pretty impressive features and may provide some interesting insights into traffic analysis. The basic idea is to use the human brain's ability to detect patterns in visual stimuli to detect patterns in network traffic. Anyone want to build a SGUIL module for it?
HD Moore has a detailed analysis of a site exploiting browsers. It is a good read, with detail we don't often see, including a breakdown of the kind of revenue these sites pull.
Most interestingly, it provides a nice quotation as to why "safe browsing habits" are a load of phooey, something that has always bugged me about Microsoft's oft-cited remediation.
A common myth is that the only way to get hit by a browser exploit is to visit "bad" web sites. The reality is that it only takes a single line of Javascript code to send your browser off to the darker areas of the Internet. Many respectable web sites are inadvertently allowing attackers to target their users. Just last week, the media covered an example where a MySpace banner ad resulted in over one million malware infections. This banner ad redirected the user to a malicious WMF file, that when opened, installed an adware application. A similar attack occurred on the LiveJournal network just a couple months earlier. In both cases, the web site operators were not aware that the attacks were occurring until someone complained.
At the Information Security South Africa conference 2006 I published a paper arguing that our current understanding of the risks associated with monthly patch release cycles is pretty poor. This discussion is pretty important given that entities such as Gartner reckon monthly releases will be the new industry standard.
I basically argue that patch schedules work well in the case of delayed (responsible) disclosure, but that in the case of instantaneous (0day) disclosure none of the purported benefits, namely better-quality patches and better deployment scheduling, are accrued. I then move on to some solutions.
I think this is a really important paper and a really important discussion. Of course, I am the author so I would think that. The paper is available at:
Websense made some interesting discoveries about Google indexing binary files. From their initial ideas, HD Moore has put a malware search engine together for finding live malware samples. The initial signature database was provided by Offensive Computing. This is some pretty cool stuff. Here's a snip:
Summary
Technical
- Obtain live malware samples through Google searches.
- Fingerprint an executable, then search for the fingerprint.
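As a toy illustration of the fingerprint-then-search step, here is a sketch using a plain MD5 hash. The actual signature scheme and query syntax used by the search engine aren't described in the snip, so the `md5:` query format below is purely an assumption:

```python
import hashlib

# Fingerprint an executable's bytes with MD5, the lowest-common-
# denominator malware identifier (a real signature database would use
# something richer, e.g. structural features of the PE file).
def fingerprint(data):
    return hashlib.md5(data).hexdigest()

# Turn a fingerprint into a search query string. The "md5:" prefix is a
# hypothetical syntax for illustration only.
def search_query(data):
    return "md5:" + fingerprint(data)
```

The same two steps also run in reverse: given a suspicious hash from an incident, search for it to find a live sample to analyse.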