by Amir Khashayar Mohammadi

Is “sandboxing” the local browser really the cure-all for inherent browser vulnerabilities that the developers of supposedly “secure” browsers make it out to be?

Or is it just one more attempt to put lipstick on an aging pig with progressing health problems?

Much like with security patches and browser updates, the answer is not that simple. Putting a fix in can open the door to new and different exploits that allow attackers to pwn the local machine.

Which methods have been applied so far to break local browser and app sandboxes?

Let’s take a closer look. You will be surprised.

Breakouts from the beginning

Local browser sandboxing was first introduced by Google for the Chrome browser, as a layer of isolation designed to keep third-party processes confined to the browser and prevent them from harming the local machine’s environment.

The problem with this form of isolation is that it is far from perfect.

The smallest hole in a sandbox can invite dangerous exploits. All it takes to start the process is visiting a website that harbors a sandbox-targeting exploit kit.

This method of exploitation is stealthy by nature. There will be no download initiation alert and no warning signs. The payload will execute without users realizing it:

Code example: Chrome sandbox escape exploit

This code example is taken from a Chrome sandbox escape exploit that was previously disclosed and has since been patched (Source). Since the browser’s introduction in October 2008, more than 40 sandbox-related security vulnerabilities have been documented for Chrome (1,523 total security vulnerabilities, full list here).

When a Windows-based machine accesses a site running this exploit, the local computer will execute the Windows Calculator applet. A harmless outcome in the demo, but one that illustrates the potential risk.

In real-world incidents, instead of calc.exe, it could be your machine’s command line interface that is invoked - or essentially anything. In our example, it appears that the script makes Chrome access the location of the machine’s OAuth (Open Authorization) token information, which it then uploads to a remote server (highlighted in yellow).

Once the tokens are uploaded, extension installations begin using the machine’s own OAuth tokens. Then, a vulnerability within the extension is used to break the sandbox. The directory that is accessed afterward, “/AppData/Roaming/Microsoft/Protect”, contains the user’s master key.

This could potentially allow further escalation of privileges, which can then be used to execute other binaries that require it. As you can see, the exploit then calls a binary by its full path (highlighted in red): “c:\\windows\\system32\\calc.exe”.

Again, this script was developed for testing purposes. Malicious versions may include more sinister binaries or perhaps the location of more complex payloads.
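To make the data-theft step concrete, here is a minimal Python sketch (function names are my own, and it performs no actual exploit) of the two locations such a payload touches: the DPAPI master key store under the user’s AppData folder, and the benign binary the demo executes.

```python
import ntpath  # Windows-style path handling, works on any OS

def dpapi_protect_dir(appdata: str) -> str:
    """Build the path to the user's DPAPI master key store from
    the value of the APPDATA environment variable."""
    return ntpath.join(appdata, "Microsoft", "Protect")

def demo_payload_targets(appdata: str) -> dict:
    """Return the two locations the PoC touches: the master key
    store it would read, and the harmless binary it executes."""
    return {
        "master_keys": dpapi_protect_dir(appdata),
        "binary": r"c:\windows\system32\calc.exe",
    }
```

A real payload would of course read and exfiltrate the key material rather than merely naming the paths; the point is how little filesystem knowledge the attacker needs once the sandbox is gone.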

Security benefit only as strong as the last patch or update

In the example above, where’s the threat if this particular vulnerability has already been patched?

Glad you asked.

Let’s remind ourselves that a patch being made available doesn’t necessarily mean users will actually apply it promptly (if ever).

Not everyone updates. In fact, many people would rather turn off update notifications than go with the program and update their software as suggested. On the enterprise level, skipping essential patches is tantamount to inviting disaster.

Local Browser Sandbox Escapes - Illustration for Authentic8 blog post

While Google’s Chrome browser has had 40+ sandbox-related vulnerabilities over the years, the good news about Chrome, compared to many other browsers, is the type of support and attention it receives, in part thanks to Google’s bug bounty program. A functioning sandbox escape exploit with a high-quality proof-of-concept report can fetch $15,000.

One effect of this program is that it creates competition for those who sell these exploits on the black market. Why sell an exploit there, and risk drawing the ire of the bad guys if it gets patched before they can cash in on it, when you can get paid a handsome amount by the browser developers themselves?

Not to mention that bug disclosure is a public service and the right thing to do.

So Chrome should be commended for its approach to addressing such threats. But what about other common browsers like Microsoft’s Internet Explorer, Firefox or the Tor Browser (based on Firefox)?

By now, they all feature their own sandboxing functionalities, which are supposed to make them more secure. How does that promise hold up?

That’s the question I’ll be trying to answer more in-depth next.

How to fiddle with the PIDL to break the MSIE sandbox

Internet Explorer should be completely phased out in corporate America by now. Yet too many companies, organizations and government entities still hold on to software that is way past its expiration date, such as Windows 2000, Windows XP and Internet Explorer (Source).

From a security perspective, such legacy IT solutions are ticking time bombs. If you think they’re not a big deal, consider this report on hacking the British Royal Navy’s Trident Submarine Command System (SMCS).

What does this have to do with sandboxing the browser? The following screenshot illustrates a sandbox escape exploit for Internet Explorer 11. It, too, has since been patched (at least on some computers, I’m sure).

Screenshot: illustrates a sandbox escape exploit for Internet Explorer 11

This example shows how low the threshold can be for escaping a mainstream browser sandbox. This particular exploit, it should be noted, used to impact fully patched Windows 7 and 8.1 platforms.

It takes advantage of how Internet Explorer used to handle the PIDL. A PIDL (pointer to an item identifier list) is a Windows Shell structure that identifies an object in the shell namespace; Internet Explorer uses it, among other things, to manage the paths behind URL shortcuts.

In Internet Explorer, whenever a shortcut is created, the “ieframe!CShdocvwBroker__CreateShortcut” function is invoked. Shortcuts are created with the extension .url in the Favorites folder or within a subdirectory in the same general location.

The problem here is that the code handling the corresponding PIDL does not validate whether or not it is dealing with a valid URL. That means that an attacker can essentially supply a file location instead.

Within the Favorites folder, this failure to validate results in access to any directory or file on the user’s disk and, in consequence, breaks IE 11’s sandbox.

This script does exactly what the Chrome exploit did: execute pre-existing binaries. Credit to Ashutosh Mehra, who received a $3,000 bounty payout for disclosing this vulnerability (CVE-2015-1688).
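To illustrate the missing check, here is a hedged Python sketch (the function names are mine, not from the exploit) of what an Internet Shortcut body looks like and the validation step the vulnerable code path skipped: nothing stops the shortcut target from being a local file path instead of a web URL.

```python
from urllib.parse import urlparse

def make_url_shortcut(target: str) -> str:
    """Produce the INI-style body of a .url Internet Shortcut file.
    Note that nothing here prevents `target` from being a local
    file path - exactly the gap the exploit relied on."""
    return f"[InternetShortcut]\nURL={target}\n"

def is_safe_shortcut_target(target: str) -> bool:
    """The validation the vulnerable code path lacked: only accept
    http(s) URLs with a host, never local file paths."""
    parsed = urlparse(target)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

With a check like this in place, a shortcut pointing at “c:\windows\system32\calc.exe” would simply be rejected before any PIDL is built.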

Firefox: Flying off the GMP handle

Security researcher “Nils” claims to be the first person to devise a sandbox escape for Firefox. Nils took a unique approach: he exploited an integrated plugin (one that ships with the browser) to affect the environment. His target: the Gecko Media Plugin (GMP).

The GMP sandbox was developed to support viewing video content in Firefox. According to Mozilla, it “is currently only used to host h.264 video playback using the OpenH264 plugin but is being developed to host other media plugins.”

This bug seems to affect only Windows, due to the way handles are duplicated into the sandboxed process “plugin-container.exe”. A compromised sandbox process can obtain a handle to its parent by making a call to the “DuplicateHandle” function.

Screenshot: Firefox sandbox breakout

This screenshot of a script by Nils (written in C++) shows the proof of concept behind generating new handles to the parent process.

The handle in question has full access to the parent process higher up in the hierarchy, which would allow the execution of arbitrary code.
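A rough way to picture the problem, sketched in Python using the standard Windows process access-right constants: if the access mask granted by a duplicated handle includes anything beyond a minimal query right, the sandboxed child effectively owns its parent. The ALLOWED_MASK policy below is illustrative, not Mozilla’s actual logic.

```python
# Well-known Windows process access rights (values from WinNT.h)
PROCESS_CREATE_THREAD     = 0x0002
PROCESS_VM_READ           = 0x0010
PROCESS_VM_WRITE          = 0x0020
PROCESS_DUP_HANDLE        = 0x0040
PROCESS_QUERY_INFORMATION = 0x0400

# Rights a tightly sandboxed child should, at most, hold on its parent
# (illustrative policy, not Mozilla's real one)
ALLOWED_MASK = PROCESS_QUERY_INFORMATION

def handle_breaks_sandbox(granted_mask: int) -> bool:
    """True if a duplicated handle grants rights beyond the allowed
    minimum - e.g. enough to write memory or spawn threads in the
    parent, which is what made the GMP escape possible."""
    return bool(granted_mask & ~ALLOWED_MASK)
```

With rights such as PROCESS_VM_WRITE or PROCESS_CREATE_THREAD on the parent, injecting and running arbitrary code is routine; a query-only handle is far less useful to an attacker.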

Non-admin setting raises bar for browser sandbox escapes

Some of these sandbox escapes can be avoided by running each browser in a non-admin setting.

If the browser itself or other running processes don’t have administrator rights, many of the actions resulting from an exploited vulnerability will be denied as unauthorized. Sooner or later, the execution of such a “multi-stage attack” will be blocked locally.

Among major browsers, the Tor Browser specifically warns users via popup against running it while logged in as root on Linux. The following example underlines why it is crucial to evaluate the privileges assigned to each program.
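The check itself is trivial. A minimal Python sketch of a launcher-side root refusal, loosely modeled on what the Tor Browser’s launcher does (the function name is mine), might look like this:

```python
import os

def safe_to_run(euid: int) -> bool:
    """The check a privilege-aware launcher effectively performs:
    refuse to start when running as root (effective UID 0)."""
    return euid != 0

if __name__ == "__main__":
    # os.geteuid() is available on Unix-like systems only
    if not safe_to_run(os.geteuid()):
        raise SystemExit("Do not run this browser as root.")
```

The payoff is exactly the containment described above: an exploit that fires inside a non-root browser inherits that user’s limited rights, not the whole machine.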

Because of its strong reputation as a browser for the security and privacy-minded, Tor has been a favorite target of hackers out to prove their mettle, especially since it has become more mainstream.

Screenshot: code to break Tor’s own sandbox

This screenshot contains the entire code (written in C++) to break Tor’s own sandbox.

The script is short but quite complex. It utilizes Linux X11 connections to break out of the sandbox and potentially intercept HIDs, human interface devices. Essentially, the X11 network protocol (through the X server) on Linux manages input devices such as the keyboard and mouse, including their layouts.

If the above code is compiled and executed properly, an attacker can abuse X11’s keyboard handling to inject keystrokes into a terminal and run arbitrary commands. That’s all it takes to fully take over a Linux machine running the Tor Browser.
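The keystroke-injection idea can be sketched as a simple translation step: map each character of a shell command to the X11 keysym name an injector (for instance, one using the XTEST extension) would replay to “type” the command. The table below is a hypothetical subset for illustration, not code from the actual exploit.

```python
# Minimal character -> X11 keysym-name table (hypothetical subset;
# a real injector would cover the full keyboard layout)
KEYSYMS = {
    " ": "space",
    "\n": "Return",
    "-": "minus",
    "/": "slash",
    ".": "period",
}

def command_to_keysyms(cmd: str) -> list:
    """Translate a shell command into the sequence of X11 keysym
    names an attacker would replay to 'type' it into a terminal.
    Plain letters and digits are their own keysym names."""
    return [KEYSYMS.get(ch, ch) for ch in cmd]
```

Once an exploit holds an X11 connection, replaying such a sequence is indistinguishable, to the terminal, from the user typing it.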

This by no means proves that the Tor network is insecure. It just means the browser requires constant patching (Source), and patching won’t fix all problems; there will always be vulnerabilities present.

For a vulnerable user, running the Tor Browser in a non-root setting prevents certain scripts from being executed. Taking the Tor Browser hostage to issue specific commands will not work, since the browser lacks the authorization to do so in the first place.

Tor shouldn’t be the only browser to warn against this:

Secure Browsing Illustration: Taking the Tor Browser hostage? Not happening.

In my next post in this mini-series, I will examine how attackers manage to turn the sandbox defenses against the defenders.


Amir Khashayar Mohammadi is a Computer Science and Engineering major who focuses on malware analysis, cryptanalysis, web exploitation, and other cyber attack vectors.