Recently, VDA conducted an enterprise software security engagement in which we looked closely at the internal software security practices of a large enterprise. Through that work I spent a lot of time with the BSIMM (Building Security In Maturity Model), a study that compiles information about the software security practices of over 100 organizations and can be used as a benchmark against your own organization. While reading that study, one statistic really stuck with me:
Number of organizations that provide penetration testers with all available information: 21%
At VDA Labs we conduct many penetration tests in a given year. These include enterprise network penetration tests, application penetration tests, and product penetration tests (generally IoT). One thing that varies from engagement to engagement, no matter what type of test we are doing, is the amount of information our clients disclose about the target of our testing. Take, for example, a recent web application test that our team conducted – 95% of the application was locked behind a login, but the client did not (directly, at least) provide our team with credentials to access that portion of the application. This reminded me of a tweet that I recently saw:
“Penetration Test” is a crazy overloaded term. Important to start w/ discussion of goals and tradeoffs between testers and client team. pic.twitter.com/kxeJa6Cv1T
— Patrick Thomas (@coffeetocode) November 4, 2016
So – what is the right amount of information to disclose to your penetration testers? That is what we would like to dig into a bit more with this blog post, along with some other colors (red and purple!) that sometimes come into the discussion.
Black Box vs. White Box Penetration Tests
The main difference between a black box and a white box penetration test is the amount of information the client discloses to the testing team. In a black box test, very little information is given to the penetration testing team beyond the targeting details that define what is in scope. A white box test, on the other end of the spectrum, provides further details about what is being tested – aka the inside of the box. That might be architecture information regarding the enterprise network, logins for users on the system, or in the case of application tests, actual code for the app being tested. The important thing to keep in mind is that this is a spectrum – the most common type of test we see is some form of grey box test, where some details are shared and others are withheld.
We frequently hear from potential clients that they prefer a “black box” test, where only the very basics of targeting information are disclosed, because this “more accurately reflects the real world”. We view this as something of a false argument – while it’s true that a hacker might not know what is inside the box, they have another luxury on their side that most penetration testers do not: time. A hacker can spend as much time as they want probing and examining your app, but a penetration tester most likely has other clients and will need to move on to the next assessment, so enabling them to do their job (find bugs) within the timeline allotted for their engagement is ideal.
That brings us back to the motivation for the test – we work with many clients who still want to run black box tests, and that’s ok. In those cases the motivation for the penetration test is most likely something different from finding as many bugs as possible (which we prefer as testers) – more likely compliance or other similar requirements.
But I want a red team!
A red team engagement can sort of be thought of as a grey or black box penetration test on steroids. Generally less information is shared, but also fewer constraints are placed on the testers in terms of boundaries for exploitation. That might allow the red team to take a more unconventional approach to compromising the target. Red team activities also usually run over longer time frames, and fewer client staff are aware that the testing is taking place. This is how a test can truly approach a real ‘adversary simulation’, since increasing the time allotment and minimizing scope restrictions is more like the real world.
That said – the main downside of a red team engagement is cost. Conducting a successful red team engagement takes more time and more senior staffing, and not every organization can justify spending at those levels; some may be better served by a more standard ‘white box’ penetration test.
How about purple teaming?
The concept of Purple Teaming is newer, and something many organizations have been trying out recently. The idea is that a test is performed much like a normal pentest or a red team engagement, but with the specific goal of knowledge sharing between the red team (attackers) and the blue team (defenders). This looks different from engagement to engagement, but one of my favorite versions was a week I spent on-site with the client, walking their defenders through our team’s attacks in a play-by-play manner. We did this on something of a time delay, however, just to be sure that our testing could proceed without intervention. The goal was to share first-hand knowledge of our exploitation paths with the blue team so they could really understand how things work from an attack perspective.
So – what is the right amount of information to share?
It depends, but it ultimately comes back to goals – really, asking yourself “how many bugs do I want to find?” If your goal is to find as many bugs as possible within the scope of a normal pentest, we would always advise being as transparent as possible.
When we are performing an application test, for example, we like to throw as many tools and techniques as possible at the app. That means employing static analysis, automated dynamic analysis, manual runtime analysis, and source code review where possible, so we can squeeze as many vulnerabilities out of the test as we can. Some of these testing methods, however, require items like source code, or binaries compiled with debug flags, in order to work at all. We find that more access leads to more bugs discovered, which in the end makes the products and organizations we test more secure.
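To make the debug-flags point concrete, here is a minimal sketch (the file `app.c` and the function `secret_check` are hypothetical stand-ins, not from any real engagement) showing why a debug build is so much friendlier to runtime analysis: it retains the symbol names that debuggers and instrumentation tools rely on, while a stripped release build does not.

```shell
# Hypothetical toy program to illustrate the difference a debug build makes.
cat > app.c <<'EOF'
#include <stdio.h>
int secret_check(int x) { return x == 42; }   /* function a tester may want to hook */
int main(void) { printf("%d\n", secret_check(42)); return 0; }
EOF

cc -O2 -s  -o app_release app.c   # stripped release build: symbol table removed
cc -g  -O0 -o app_debug   app.c   # debug build: full symbols, no optimization

nm app_debug | grep secret_check  # symbol is visible to analysis tools
nm app_release || true            # stripped build reports no symbols
```

Handing testers the debug build (or at least its symbols) means their debuggers and instrumentation frameworks can resolve function names directly instead of reverse engineering stripped addresses – exactly the kind of access that turns limited engagement hours into found bugs.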