In this eight-page paper with precisely five references (one of which, while using the word 'agent', is clearly referring to autonomous semantic-web agents and not client/server architecture agents) she manages to make a plethora of unsubstantiated claims plainly meant to sell Shavlik's agentless technology. Her main ploy is to make installing agents on every machine sound like hard work, so that having to do it during every patch emergency would be a disaster. Of course you only need to install them once, but let's not confuse 'facts' with the truth. She doesn't seem to realise that if her Shavlik software can deploy executable patch content, it could probably deploy agents too, *sssh* don't tell.
This isn't a bash at Shavlik software or an endorsement of agent-based solutions. She even quotes her CEO as saying he thinks the whole debate is a "red herring"; I prefer the term "a load of crap". If you are logging into a box remotely with administrator rights, you aren't doing anything much different from an agent; the code just happens to be transmitted every time instead of stored locally.
I wonder how many morons they fool with this faux-academic ninja-marketing technique?
Lunda points out that some 1337 scr1p7 k1DD13 named eno7 is defacing a ton of Windows 2003 web servers in South Africa. Zone-H is taking forever to respond to requests, but a quick whois on the domains displayed on the first page shows that most of the sites are in the 70.87.127.255 netblock, which is registered to ThePlanet.com Internet Services, Inc.
The little twerp has a Make Poverty History routine and, on the same page, says hi to Team Evil. After displaying an ad from Red Nose Day 2005 it redirects to EkinCaglar.com, whose owner has a message at the top of the site stating he has nothing to do with the attacks.
I contacted the ISC to see if they have any ideas.
I am now prepared to roll back my claim that it is snake oil, but it still looks like a stupid idea. The basic idea is that the 'virtual patch' will modify network traffic in the same way that the vendor's patch would. Thus if the vulnerability involves sending over-large UDP packets, the inline patch will truncate them; if it involves SQL injection, the inline patch will strip the offending SQL.
This brings me to several points, which I will summarise:
- Why is stripping a malicious request better than blocking it?
- How is this different from an IPS?
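To make the stripping/blocking distinction concrete, here is a toy sketch. It has nothing to do with Blue Lane's actual engine; the request string and the sed pattern are entirely made up. 'Stripping' rewrites the malicious fragment and lets the rest of the request through, where 'blocking' would simply drop the whole thing:

```shell
# a request carrying a classic SQL injection attempt (made-up example)
req="GET /item.asp?id=1'+OR+'1'='1 HTTP/1.0"

# "stripping": remove only the injected SQL, leaving an otherwise valid request
stripped=$(printf '%s' "$req" | sed "s/'+OR+'1'='1//")
echo "$stripped"

# "blocking" would instead be: match the same pattern, drop the connection
```

Which rather begs the first question above: once you can match the attack well enough to strip it, you could just as easily have blocked it.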
Continue reading "Blue Lane Technology's Patch Point"
Also in the last three years, Microsoft has:
- Released 99 critical patches
- Taken an average of 120 days to release a patch
- Taken an average of 62 days to release patches for full disclosure vulnerabilities
The original spreadsheet is available in:
I changed the day calculations so that they will work in Excel; however, Excel is unable to display the graphs correctly and just shows two sets of bars instead of bars and a trend line, so I recommend either the OpenOffice version or the HTML.
As an aside, what are the correct terms for the two types of disclosure? 'Responsible disclosure' is a rather morally laden term, and calling the alternative irresponsible or non-responsible seems silly. I am using 'full disclosure' in this entry, but it seems wrong.
While going over the research on Microsoft's time to patch produced by Brian Krebs at SecurityFix, I noticed a few things which didn't add up. His calculations for the number of days from internal or full disclosure until patch release appeared wrong, and on double-checking it seems they were. The calculations for 2005 were particularly bad, with a total of 118 days going missing or being added. There are many off-by-one errors, and in one case the disclosure date was listed after the patch release date; once the year was changed from 2003 to 2002 it made sense. For both 2003 and 2004 the number of patches was counted incorrectly! Given that the information was vetted by Stephen Toulouse of Microsoft, it is strange that they both missed this. The other possibility is that I have missed something; anyone care to double-check my calculations? Brian has since seen this post and linked to it.
A spreadsheet is available with my calculations next to Krebs's. In my corrected days column I have italicised and centred the days where my results and his disagree. In the OpenOffice version I used the DAYS() function; for Excel I just do a normal subtraction to calculate the difference in days.
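If you want to spot-check any single row without a spreadsheet at all, GNU date makes the arithmetic trivial. The dates below are illustrative only, not taken from the spreadsheet:

```shell
# days between two dates using GNU date's epoch-seconds output
# (dates are illustrative, not a real disclosure/patch pair)
disclosed="2003-07-16"
patched="2003-09-10"
days=$(( ( $(date -d "$patched" +%s) - $(date -d "$disclosed" +%s) ) / 86400 ))
echo "$days days from disclosure to patch"
```

The same subtraction is what the Excel version of the spreadsheet does on date cells.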
While the errors were sometimes quite large, the average calculations are not badly affected, as the days were sometimes higher and other times lower than they should be. The dates are still hugely useful, and all sorts of interesting information can be derived from them (eg1, eg2); it would be nice to have the same info for other vendors. Thus, the new summary is:
                                   | 2003 | 2004 | 2005
Number of Critical Patches         |   34 |   28 |   37
Ave. Days from Report to Patch     | 90.7 |  136 |  134
Ave. Days from Disclosure to Patch | 73.6 |   55 |   46
UPDATE: added link to SecurityFix's follow-up post and Dan Geer's work
He contacted the original researchers and Microsoft to verify the dates and times, and Stephen Toulouse of the MSRC vetted his results. The summary is:
                                   |  2003 |  2004 |  2005
Number of Critical Patches         |    33 |    29 |    37
Ave. Days from Report to Patch     |  90.7 | 134.5 | 133.5
Ave. Days from Disclosure to Patch |  71.1 |    55 |    46
This shows that Microsoft has been taking longer to fix 'responsibly' disclosed vulnerabilities, most likely due to their increased testing regime, while fixing publicly disclosed vulnerabilities, which they were not previously notified of, faster. The increase is understandable, and the marginal increase in risk is justified if the risk from faulty patches is greatly decreased. The decrease is a good sign, but 46 days is still way too long; a skilled attacker doesn't need underground sploits if they have that long.
Continue reading "Vendor Patch Speed"
While preparing my tin-foil body suit, I found this little gem, a randomisable on-screen keyboard, at this site.
It comes as two files. The first, keyboard.py, displays a keyboard in your terminal and outputs to a fifo. You can click on the keys with your mouse, or navigate to the correct key with your cursor keys and press space. Pressing Esc switches input from the on-screen keyboard to your normal keyboard, i.e. the on-screen keyboard will grab normal keystrokes and send them to the fifo. You can also press r to randomise the keyboard.
The second file, keywrap.py, sets up a PTY and maps STDIN/OUT/ERR to a process you tell it to execute. This is how you map input from the fifo to, say, ssh.
This is perfect for the secure browser discussed in the last entry. Now keyloggers will have to deal with far too many variables, and unless it is a very targeted attack, you should be safe. I wrote a little script, sssh.sh, for "super secure shell". It sets up the fifo for you, opens the keyboard in a compact new window, and initiates the ssh session. Here it is:
#!/bin/sh
terminal="gnome-terminal --geometry 45x7 -e"

if [ -z "$1" ]; then
    echo "usage: $0 host" >&2
    exit 1
fi

# private fifo carrying the on-screen keyboard's output
fifo=`mktemp -u ~/"$1".XXXXXX`
mkfifo "$fifo"
chmod 600 "$fifo"

# open the keyboard in its own small window
export DISPLAY=:0
$terminal "keyboard.py -cCq $fifo" &
pid=$!

# feed keystrokes from the fifo into the ssh session
cat "$fifo" | keywrap.py "ssh $1"

# clean up the fifo and the keyboard window
rm "$fifo"
kill $pid
With the plethora of nastiness the average user has installed on their machine, I sat down a while ago to work out a method of providing a 'secure' browsing session for your average home user, something point and click. I went through a variety of ideas, from separate browser sessions to unprivileged user accounts to the VMware Browser Appliance. None of them were great.
Then, with all the WMF nonsense, people were suggesting using the Browser Appliance, so I thought I would post my solution, which is an overboard, tin-foil-hat kind of approach: I have a read-only chroot, with just enough libraries for elinks to do SSL, which can be accessed via ssh.
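If you want to build something similar, the jail can be populated with the help of ldd, which lists the shared libraries a binary needs. The sketch below assumes a glibc-based Linux box; it uses /bin/sh as a stand-in for elinks and a temporary directory instead of a real jail path, so substitute your own paths:

```shell
# stand-in binary; on your system this would be /usr/bin/elinks
prog=/bin/sh

# throwaway jail directory for demonstration purposes
jail=$(mktemp -d)
mkdir -p "$jail/bin"
cp "$prog" "$jail/bin/"

# ldd prints the shared libraries the binary is linked against;
# copy each one into the same path inside the jail
for lib in $(ldd "$prog" | grep -o '/[^ ]*'); do
    mkdir -p "$jail$(dirname "$lib")"
    cp "$lib" "$jail$(dirname "$lib")"
done

echo "$jail"
```

For elinks with SSL you would also need the OpenSSL libraries that ldd reports, plus whatever config and CA certificate files it opens at runtime. Once populated, remount or flag the jail read-only.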
Continue reading "Secure browsing with chroot'ed elinks"