By Zohar Shachar

Author spoofing in Google Colaboratory

Recently, Google made public their new ‘Abuse Research Grant Program’ - an awesome way to motivate researchers to delve into an angle of software security that rarely gets the attention it deserves.

I was very honored to be mentioned as a contributor to this effort, and I thought this was a good opportunity to write about one of the first abuse-related bugs I ever reported to Google - an identity-spoofing issue in Google Colaboratory that lets you bypass security warnings and trick victims into running your malicious code in their own environment.

Let’s dive right in.


What is Colaboratory anyway?

Google Colaboratory is a powerful tool for running Python Jupyter notebooks. Essentially, you can easily write Python code and execute it in a ‘serverless’ environment (a new container is spawned for every new session). If you want to persist data, you can integrate your code with other Google products (such as Drive, Docs or BigQuery), and you can also share code with your peers (just as you would share any other Google document).
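For readers who haven’t used it: a Colab cell is just Python. As a minimal sketch (runnable only inside a Colab runtime), here is a trivial cell plus the standard Drive integration used to persist data:

```
# A typical Colab cell: plain Python, executed in the session's fresh container.
import sys
print(f"Running Python {sys.version.split()[0]} in a throwaway container")

# Persisting data via the built-in Drive integration (the google.colab
# module is only importable inside a Colab runtime):
from google.colab import drive
drive.mount('/content/drive')  # prompts for authorization, then exposes Drive as a filesystem
```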


At this point, you’re probably thinking “well I’d be careful before running code written by someone else in my own environment”, and Google thought so too. And so, when you try to execute a notebook owned by someone else, you face a very clear warning:

[Screenshot: the warning Colab displays before running a notebook, naming its author]
If you were a malicious entity trying to get some victim to execute your code, such a warning would be a serious blocker, as even the most naive of users might think twice when facing it.


Can we bypass it?

If you pause to think about this alert message, a question may pop into your mind (or at least it popped into mine). As mentioned before, Google Colaboratory notebooks are shared just like any other document in Google Drive, and can have several contributors. In fact, for the victim to even access this code and try to execute it, we first had to share it with them, making them a contributor of sorts. So, if several users can write code to the notebook, who is the ‘Author’ mentioned in the alert message?


Once the question was formulated in my mind, the answer was also clear - the ‘author’ must be the original user who created the document, or in Google Drive’s lingo - the document owner.


But there’s the rub - a document owner in Google Drive is not constant, and can be changed - an owner can simply appoint some other contributor as the new owner.

(This makes sense - imagine you work with a colleague on some document authored by them, and then they leave the company and close their account. It wouldn’t make sense for the shared doc to just disappear, would it? Someone else has to become its new owner. And since that can’t be determined ‘automatically’, it only makes sense for the current owner to choose the future one.)

And indeed, quick testing showed that Colaboratory notebooks are managed just like any other Google doc: using Google Drive’s settings, you can change a notebook’s owner (i.e. the ‘author’) to any of the file’s contributors.
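For the curious, the same ownership transfer the Drive UI performs can also be scripted with the Drive v3 API. A minimal sketch, assuming ‘creds’ is an already-authorized credential with Drive scope, and with the file id and email address as placeholders:

```
from googleapiclient.discovery import build

drive = build('drive', 'v3', credentials=creds)

# Granting the 'owner' role with transferOwnership=True is the API-side
# equivalent of the "Make owner" option in Drive's sharing settings.
drive.permissions().create(
    fileId='NOTEBOOK_FILE_ID',     # placeholder: the notebook's Drive file id
    transferOwnership=True,
    body={'type': 'user',
          'role': 'owner',
          'emailAddress': 'victim@example.com'},  # placeholder address
).execute()
```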


Now that we have all the pieces of the puzzle, the attack becomes clear:

  • Create a new notebook in Colaboratory, and write your malicious code in it (a harmless stand-in is sketched right after this list).

  • Share your notebook with your victim.

  • Using Google Drive, locate your notebook and set your victim as the notebook owner.

  • Send the link to the victim and you’re good to go!
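
For a PoC, the ‘malicious code’ in the first step doesn’t need to do any actual harm; a benign cell like the following (a sketch, not what my original PoC used) is enough to demonstrate that your code runs inside the victim’s own session:

```
# Harmless stand-in for a malicious payload: it merely proves execution
# inside whatever environment the victim's session runs in.
import getpass
import socket

print(f"Hello {getpass.getuser()}, this code just ran on {socket.gethostname()}")
```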

I reported the bug to the Google abuse team, who granted a cool $500 bounty.


Here’s a PoC video of the attack, showing first the ‘normal’ behavior and then the ‘spoofed’ one (and many thanks to Moti Harmats for adding the oh-so-magnificent soundtrack):



Final thoughts

I really like this bug because of its simplicity. It’s not a technical bypass, it’s not a code error, it didn’t result in the biggest bounty ever, and it doesn’t even require a computer to discover - but that’s exactly why I like it. It’s one of those things that suddenly comes to mind when you’re out for a walk after a day of playing with a system, and once you think of it you know it will work - you don’t even need to test it.

I reported it around a year and a half ago, and have wanted to write about it ever since. It was even meant to be the first writeup on this blog, but somehow other things got in the way.

Thanks again to Google’s team for this cool new grant program, which motivated me to finally give this bug the post it deserves :)
