
At VDA Labs we are constantly improving our tradecraft. One of the ways we do this is by leveraging, testing, and building tools. When there are existing toolsets that make our jobs easier, we explore them and share.

Why is Docker cool?

When we use a tool, especially for the first time, we have a common set of questions:

  • Where is it coming from?
  • Does it run on my platform?
  • What dependencies might I need? Did the installer bring them? Are there transitive dependencies?
  • What is configuration like? Does it start on boot? Is it a service?
  • Does it conflict with other things?

The questions are related to three main components of software:

Distribution, Installation, and Operation.

For a hacker, being an expert in the nuances of all the different software we use is its own part-time job.
Docker makes it easy.

Images help us with Distribution and Installation.
Containers help us with Operation (they are isolated processes).

Docker isolates processes using kernel namespaces and control groups rather than full virtual machines. Containers share the host’s kernel, but each one gets its own filesystem (provided by the image), its own hostname, its own virtual network interfaces, and so on. From the inside, a process thinks it’s running on an entirely different computer.
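For a quick illustration of that isolation, here’s a minimal sketch using the small public alpine image (once Docker is installed):

docker run --rm alpine hostname               # the container reports its own hostname, not the host’s
docker run --rm alpine cat /etc/os-release    # its filesystem comes from the image, not from the host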

What are the basics?

Images live in registries

  1. Dockerfiles are text files with instructions on how to set up the image
  2. Images are loosely coupled layers configured by a manifest file

Containers are running images

  1. Each container should provide a single service
  2. Containers are thin writeable layers on top of images
  3. The relationship can be one-to-many: many containers can run from a single image (see the example below)
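To make that one-to-many relationship concrete, here’s a minimal sketch using the public nginx image as an example:

docker pull nginx                   # one image
docker run -d --name web1 nginx     # first container backed by that image
docker run -d --name web2 nginx     # second container backed by the same image
docker ps                           # two containers, one shared image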

Installation on MacOS:

brew install --cask docker # installs Docker Desktop; plain 'brew install docker' only gives you the CLI client

Installation on Kali:

curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

echo 'deb [arch=amd64] https://download.docker.com/linux/debian buster stable' > /etc/apt/sources.list.d/docker.list

apt update

apt install docker-ce
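Once those finish, a quick sanity check confirms the daemon is running and can pull images:

systemctl enable --now docker    # start the daemon now and on every boot
docker run --rm hello-world      # pulls a tiny test image and runs it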

That’s nice, but what does it do for me?

Fair enough, that’s what you came here for. Simplicity allows us to get more done. So let’s talk tools.

Say you want to do recon: use EyeWitness to screenshot hosts!

git clone https://github.com/FortyNorthSecurity/EyeWitness

While we won’t go into extreme detail, the flags in this first run are explained below the command so you can see what each one is doing. Start by building the image from within your cloned directory.

docker build -t eyewitness . 

Then run the image.

docker run \
  --rm \
  -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /tmp/eyewitness:/tmp/EyeWitness \
  eyewitness \
  [options]

# --rm                                 removes the container's filesystem on exit
# -it                                  keeps STDIN open (-i) and allocates a pseudo-TTY (-t)
# -e DISPLAY=$DISPLAY                  passes your DISPLAY variable through so GUI tools can reach your X server
# -v /tmp/.X11-unix:/tmp/.X11-unix     mounts the host's X11 socket into the container
# -v /tmp/eyewitness:/tmp/EyeWitness   mounts a shared directory for EyeWitness output
# eyewitness                           the image to run
# [options]                            the usual EyeWitness flags

But wait, where are we getting our EyeWitness inputs? Nmap is our friend. Let’s skip GitHub and go straight to Docker Hub for this one. Just substitute your usual nmap command with:

docker run --rm -it instrumentisto/nmap
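For example, here’s a sketch of a scan whose results land on the host via a bind mount (the target range, ports, and output path are placeholders), ready to hand off to EyeWitness:

docker run --rm -v /tmp/scans:/tmp/scans instrumentisto/nmap -p 80,443 -oX /tmp/scans/web.xml 10.0.0.0/24
# the XML written to /tmp/scans on the host can then be fed to EyeWitness as input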
A quick note about security: You are downloading and running things from the internet. As always, use good judgment.

Need to use another Nmap utility like nping? Just specify it as the container entrypoint.

docker run --rm -it --entrypoint nping instrumentisto/nmap

Say you want to get some usernames: use theHarvester!

git clone https://github.com/laramies/theHarvester

docker build -t harvester .
docker run -ti --rm harvester
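A typical invocation looks something like this, assuming the image’s entrypoint is theHarvester itself (the domain and data source are placeholders):

docker run -ti --rm harvester -d example.com -b bing -l 500
# -d  target domain
# -b  data source to query
# -l  limit on the number of results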

Say you want to phish: use EvilGinx2 to steal credentials and cookies!

git clone https://github.com/kgretzky/evilginx2

docker build . -t evilginx2

# Then you can run the container:

docker run -it -p 53:53/udp -p 80:80 -p 443:443 evilginx2
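If you want phishlets and session data to survive container restarts, you can bind-mount evilginx2’s config directory (assuming it runs as root in the container, so the config lives in /root/.evilginx):

docker run -it -p 53:53/udp -p 80:80 -p 443:443 -v $HOME/.evilginx:/root/.evilginx evilginx2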

Bring on those sweet, sweet tokens.

Say you want to scan web hosts: hunt for common vulns with Nikto or wpscan!

git clone https://github.com/sullo/nikto

docker build . -t nikto 

docker run --rm -v $(pwd):/tmp nikto -h http://www.example.com -o /tmp/out.json # run and save your report in the mounted directory

If you don’t have an image locally, Docker will automatically pull it from Docker Hub.

docker run -it wpscanteam/wpscan # automatically pulls the relevant layers for you
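For example, a basic user-enumeration run (the URL is a placeholder):

docker run -it --rm wpscanteam/wpscan --url http://www.example.com --enumerate u
# --url          the WordPress site to scan
# --enumerate u  enumerate users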

You can manually search for anything yourself by running docker search <tool>

Say you want to ‘remotely administer’ some hosts: run Covenant for a state-of-the-art C2 platform!

git clone https://github.com/cobbr/Covenant
docker build -t covenant .

docker run -it -p 7443:7443 -p 80:80 -p 443:443 --name covenant -v /absolute/path/to/Covenant/Covenant/Data:/app/Data covenant --username AdminUser --computername 0.0.0.0
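Because the container is named, a second docker run with --name covenant will fail; stop and reuse the one you already created instead:

docker stop covenant        # stop the running container
docker start -ai covenant   # start it again and re-attach your terminal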

Say you want to hack all the things: use Kali Linux

Kali Linux itself has an official base image that you can build out with Metasploit and other popular tools. It can be configured to your needs in a wide range of ways, making it a great platform for testing and for sandboxing payloads and experimental work.
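A minimal sketch of pulling the official image and adding tools to it (the packages shown are just examples):

docker pull kalilinux/kali-rolling                   # the official base image
docker run -it --name kali kalilinux/kali-rolling    # drops you into a shell
# then, inside the container, install whatever you need:
apt update && apt install -y metasploit-framework nmap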

Docker greatly simplifies tool use on macOS, where dependency issues are common. Next time you’re doing a git clone, keep an eye out for a Dockerfile. Doesn’t have one? It’s pretty simple to write your own. Additionally, docker-compose can orchestrate multiple images and containers, letting you spin up an entire environment in seconds (a minimal sketch follows below). Combine that with something like Ansible and you can create fast, powerful deployments. Look for a future blog on automating red team infrastructure with these combined techniques and tools.
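As a sketch of what docker-compose buys you (the service names and images are just examples; the c2 service assumes the locally built covenant image from above), a single file can describe several containers and bring them up or down together:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  redirector:
    image: nginx
    ports:
      - "80:80"
  c2:
    image: covenant
    ports:
      - "7443:7443"
EOF

docker-compose up -d    # pulls/starts everything in the background
docker-compose down     # tears the whole environment down again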

Didn’t see the tool you want? Check out hub.docker.com
Confused? Like this but want more? The Docker documentation is great.