Meddling with Shipyard


TL;DR
  1. Deploy Shipyard for Docker.
  2. Get owned (due to complacency, not specifically Docker).
  3. Deploy again, properly.
Introduction

So recently, I've been playing a lot with Docker, both at work and at play. Up until a few months ago I was a complete Docker noob. Actually, I definitely still am (as we will see shortly), but I did pick up some interesting and cool things that I thought I'd share.

Most people are aware of Docker and what it is, but may not necessarily know how to make use of it. So for this post, I'm not going to go through what Docker is, but rather how to make use of it via Shipyard/Portainer.

In the beginning, I was largely playing around with Docker locally: creating containers, adding them to virtual networks, playing with Docker Compose, figuring out how to upgrade containers and persist data, etc.

After a while, I figured I'd learned enough, and enjoyed Docker enough, to consider migrating some stuff over, to enable further learning and experimentation.

Shipyard

My next port of call was to find and utilize a management platform for Docker. I wanted to deploy containers across many nodes, and manage them in a centralized manner. Docker Cloud is one such platform which allows for this, but there are fees once you add more than one node.

I actually have a dedicated server in the cloud, and also a micro-server at home running VMware ESXi. I wanted to find a solution I could host myself, rather than pay Docker for nodes that are already mine.

The first product I found and tried was Shipyard. It was incredibly easy to set up, and within minutes I had it behind a reverse proxy for HTTPS, using Let's Encrypt for certificates. I was deploying containers, bringing them up and down, etc. It was great.
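For context on just how easy: if memory serves, Shipyard's documented quick-start at the time was a single pipe-to-bash one-liner along these lines (treat the exact URL as approximate, this is from memory):

$ curl -sSL https://shipyard-project.com/deploy | bash -s

One command and everything was up, including, as it turns out, a listener I hadn't thought hard enough about.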

That is, until some strange activity started to creep up on me. I was showing a colleague the Shipyard UI and some container management examples when I noticed there were 3 new Ubuntu containers. I wouldn't have noticed them, except for the fact that the image was simply ubuntu. When I use images, I always pin to a tag, even if it's just ubuntu:latest. For this reason, the 3 containers immediately caught my eye (pro tip: if you're gonna pwn me, pin to a tag).

I navigated into one of the containers, opened up the console, and took a look at the bash history, which I found had been wiped. Getting a little concerned, I next looked at the container logs, which had not been wiped. Great. What I observed was several attempts at inserting Python-based reverse shells into the crontab. Yikes.

I was immediately very concerned. How could this have happened? My credentials for the Shipyard admin console are very strong, and this was only a day or so after I deployed Shipyard; they couldn't have been brute forced that quickly.

I started to Google for known vulnerabilities in Shipyard and any exploits which might be around; I didn't find anything of concern. Next, I looked at the Shipyard issues on GitHub, where I found several reports of similar activity. After some reading and analysis of the documentation, I determined the cause: my Docker daemon, typically only exposed locally, was being exposed via shipyard-proxy, making it available to the entire internet. Awesome. I started to ask myself how I'd managed to miss this. When I read the documentation and came across the TLS-related notes, I immediately assumed they were about HTTPS for the web UI and dismissed them, knowing I'd set up my own reverse proxy with Let's Encrypt. Little did I realize that the TLS setup actually ensures that access to the Docker daemon requires mutual authentication via client x509 certificates.

Essentially, without setting Shipyard up with TLS, anyone can interact with your nodes: deploy/modify/destroy containers, etc. Even as a security person, I completely missed this and got sucker punched. You might assume this was the usual script kiddie, using scanners/Shodan to find Shipyard instances and then exploiting them. In this case, though, whoever did this knew to look for, and how to abuse, remotely exposed Docker daemons.

Examples of attacks

Essentially, when you deploy Shipyard, it proxies traffic from all interfaces on TCP port 2375 to the Docker socket. Below is a screenshot of the Shipyard containers after a default deployment:
Shipyard containers

What this means is that any docker command you would run locally, you can also run against remote Docker instances. For example, instead of docker ps, you can do:
docker --host=tcp://192.168.0.41:2375 ps
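Any other docker subcommand works the same way. For instance, info will happily fingerprint the remote daemon (same lab VM as in the examples below):

$ docker --host=tcp://192.168.0.41:2375 info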

In the first example below, you can see that running docker ps locally fails, since there is no Docker daemon running on my host. However, I can run docker ps against the VM I set up for the examples:
Running docker ps against a remote host

Great, we can see a list of all the containers running on that host. Let's now interact with one of them and drop into a root shell:
Root shell inside one of the deployed containers
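For anyone wanting to reproduce this, the shell is just a remote exec; <container> is any name or ID from the ps output above:

$ docker --host=tcp://192.168.0.41:2375 exec -it <container> /bin/bash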

In my case, no one messed with the containers I had already deployed, although of course that cannot be guaranteed, hence why I torched them. What they did was set up a few new Ubuntu containers and backdoor them. I believe the next step was to enroll them in a botnet. They likely left my containers intact so as not to raise alarms, and created new containers because they wanted the compute resources rather than any data within my other containers. Here's an example of running a new Ubuntu container and showing it exited afterwards:
Running a new Ubuntu container and dropping into it

I did this twice, so you can see two containers listed with ps -a. The -a is required since the containers are not running:
Ubuntu containers
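For anyone following along, the commands behind these screenshots are simply (exit the shell between the two):

$ docker --host=tcp://192.168.0.41:2375 run -it ubuntu /bin/bash
$ docker --host=tcp://192.168.0.41:2375 ps -a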

My setup (Now)

Naturally, I burned everything, even the containers which looked clean/untouched. I destroyed the OS too, just in case, and re-deployed all nodes. I then started my effort to re-deploy Shipyard with TLS enabled, and struggled to get it up and running. Eventually I had it running, but the web UI also required mutual auth, which I didn't want. I played around with the deployment script for a few minutes, trying to see if I could omit that particular part. In the end, I scrapped Shipyard and deployed Portainer, a similar solution which offers the same functionality, and then some.

I set up the Docker daemon to require mutual authentication, deployed my client certs to my nodes, and set them up as endpoints.
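For reference, the daemon side boils down to Docker's standard TLS options. A minimal sketch, assuming you've already generated the CA, server, and client certs per the Docker docs (paths are placeholders):

$ dockerd --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server-cert.pem \
    --tlskey=/etc/docker/server-key.pem \
    -H tcp://0.0.0.0:2376 \
    -H unix:///var/run/docker.sock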

Endpoint addition page

Next, I verified that the example attacks shown above were no longer possible, and they weren't. With renewed confidence, I started to re-deploy my containers and get my environments back up and running.

You can see below that attempts to run docker commands failed until I provided the right TLS certs:
Can't connect without valid TLS certs

And we can see via systemctl that the TLS connection failed due to a missing cert:
Connection failure
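For completeness, the client-side invocation that does work looks like this, mirroring the daemon flags above:

$ docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    --host=tcp://192.168.0.41:2376 ps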

Portainer

So, now I'm set up. I have Portainer installed, I can't be backdoored (at least not in the same trivial manner), and it's time to start building my playground out again. The following sections detail some of the features that come with Portainer, how I'm using them, etc.

Portainer features

Group management across nodes

Typically, in a multi-node environment, to check the status of your Docker containers you would have to connect to each individual host and check using docker ps:
Console Docker status
Additionally, as you scale out your applications, it gets easy to lose track of which container is where. Solutions like Portainer make this particular part of container management easy, providing instant visibility into all of your deployed containers, across all nodes:
My Portainer Containers

Additionally, using the checkboxes on the left, you can bulk-manage containers: destroying, stopping, restarting, starting, etc.

Container monitoring

A high-level overview is awesome on its own, but we actually get a lot more than that. If we navigate into a container, we can access even more monitoring utilities to check how our deployments are doing.

Logs

Firstly, we can access the stdout and stderr logs from the container, which are streamed to the webpage in real time.
Stdout logs via Portainer
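This is essentially a live view of what you'd get on the CLI with:

$ docker logs --follow <container>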

Activity/Resources

If we want, we can even view resource activity to see, for example, how much CPU/memory our container is using:
Resource Monitoring for a container
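The CLI analogue here would be:

$ docker stats <container>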

Console access

Next up, for any container, we can easily drop into an interactive shell. Of course, this isn't necessarily desirable for all users; ideally, there would be a server-side configuration option to disable it.
Console access to container
Surprisingly, the console even offers tab completion, which is really nice.

App Templates

As well as configuring and deploying your own services, you can also pick from a template repository, for ease of deployment:
Portainer App Templates

And that's not all: via Portainer you can easily configure volumes to allow for data persistence inside containers, set up networks and add containers to them to give them interconnectivity, and more.
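Under the hood these map onto the usual CLI primitives; a rough sketch, with made-up names:

$ docker volume create blog-data
$ docker network create my-apps
$ docker network connect my-apps blog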

All things considered though, Portainer lacks a few necessities that I need. I'm considering submitting a PR on GitHub if I can find some free time. I'd really like MFA to be in place, so that I can more thoroughly protect access to the web UI. Then again, I'm likely going to move the web UI to my internal network and VPN in. We'll see how things go, since nothing major is at risk for me currently.

In the final section of this post, I'll show off a bit of what I've had fun playing around with so far.

What I'm using it for

I run several web applications from within containers. I use Apache to reverse proxy to each application, and Let's Encrypt to provide TLS for each.
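For the curious, the proxy layer is plain Apache. A minimal sketch, assuming a Debian-style layout, with the hostname, cert paths, and backend port as placeholders:

$ sudo a2enmod ssl proxy proxy_http
$ sudo tee /etc/apache2/sites-available/blog.conf <<'EOF'
<VirtualHost *:443>
    ServerName blog.example.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/blog.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/blog.example.com/privkey.pem
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:2368/
    ProxyPassReverse / http://127.0.0.1:2368/
</VirtualHost>
EOF
$ sudo a2ensite blog && sudo systemctl reload apache2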

My Portainer Containers

The applications I have running are:

  1. Firstly, Portainer itself runs inside one of the containers.
  2. Next, this blog (powered by Ghost) runs inside one of the containers. I expose the Ghost port (2368) from the container to localhost only, and utilize a Docker volume to ensure the data persists when upgrading to the latest Ghost image (a sketch of the run command follows this list).
  3. I self-host a copy of GitLab Community Edition, where I store and work on several personal code projects, some with friends & colleagues. Again, I utilize Docker volumes to persist data across container upgrades.
  4. One such project I've been working on is a Swift library for the Riot Games API. In the documentation, Riot require developers to utilize a web proxy between their applications and the Riot Games API endpoints. Riot do not allow developers to hardcode/store the API keys within client applications/libraries. For this reason, I created a super simple project called Proxy Singed. The proxy basically takes in requests from my library and wraps them with the API key, sending them on to Riot and relaying the responses back.
  5. A second, production copy of Singed runs in another container. Via GitLab, I make use of CI and CD, automatically deploying the staging and production versions of Singed to their respective containers. I.e. when merging from develop->staging, the code is automatically deployed if the builds pass without failure. Likewise, merging from staging->master results in an automatic deployment to production.
  6. I even have a health API which returns a JSON blob describing the current deployment. This way, I can quickly see which specific snapshot of the source is in use and who issued the deploy:
$ curl https://singed.riotkit.xyz/health
{
  "build_ref": "639623293b76610837164af46c60ac51f5686add",
  "codebase": "master",
  "gitlab_userid": "1",
  "timestamp": "Sun May  7 14:21:53 UTC 2017"
}
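And, as mentioned in item 2, the blog container itself is run along these lines; the volume path is an assumption that depends on the Ghost image version, so treat it as a placeholder:

$ docker run -d --name blog \
    -p 127.0.0.1:2368:2368 \
    -v ghost-data:/var/lib/ghost \
    ghost:latest
# bind to localhost only; Apache terminates TLS in front
# the named volume keeps content across image upgrades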