The 2013 National CCDC season ended in April. One topic I’ve sat on since then is feedback. Providing meaningful and specific feedback on a team-by-team basis is not easy. This year, I saw multiple attempts to solve this problem. These initial attempts instrumented the Metasploit Framework to collect as many data points as possible into a central database. I applaud these efforts and I’d like to add a few thoughts to help them mature for the 2014 season.
Instrumentation is good. It provides a lot of data. Data is good, but data is dangerous. Too much data with no interpretation is noise. As there are several efforts underway to collect data and turn it into information, I’d like to share my wish list of artifacts students should get at the end of a CCDC event.
1) A Timeline
A timeline should capture red team activity as a series of discrete events. Each event should contain:
- An accurate timestamp
- A narrative description of the event
- Information to help positively identify the activity (e.g., the red IP address)
- The blue asset involved with the event
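To make this concrete, here’s a minimal sketch of what one of these events might look like as a record. The field names and example values are my own, for illustration; none of the tools mentioned in this post store events this way.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RedTeamEvent:
    """One discrete red team action on the exercise timeline."""
    timestamp: datetime  # accurate time of the activity (UTC, to line up with blue logs)
    description: str     # narrative description of what the red team did
    red_source: str      # detail to positively identify the activity, e.g., the red IP
    blue_asset: str      # the blue asset involved with the event

# A hypothetical event, for illustration only:
event = RedTeamEvent(
    timestamp=datetime(2013, 3, 23, 14, 5, 12, tzinfo=timezone.utc),
    description="Dumped password hashes from the domain controller",
    red_source="10.10.10.15",
    blue_asset="172.16.1.2 (Windows domain controller)",
)
```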
A complete timeline is valuable because it allows a blue team to review their logs and understand what they can and can’t observe. If they were able to observe an activity but didn’t act on it, then the team knows they have an operational issue with how they consume and act on their data.
If a team can’t find a red event in their logs, then they have a blind spot and they need to put in place a solution to close this gap.
In a production environment, the blue team has access to their logs on a day-to-day basis. In an exercise, the blue team only has access to the exercise network during the exercise. I recommend that blue teams receive a red team timeline and that they also get time after the competition to export their logs for review during the school year.
These red and blue log artifacts would give blue teams a great tool to understand, on their own, how they can improve. Access to these artifacts would also allow students to learn log analysis and train throughout the year with real data.
Cobalt Strike’s activity report is a step in this direction. It interprets data from the Metasploit Framework and data collected by Cobalt Strike to create a timeline that captures this information. There are a few important linkages missing though. For example, if a compromised system connects to a stand-alone handler/listener, there is no information to associate the new session with the behavior that led to it (e.g., did someone task a Beacon? did a user open a client-side attack?).
2) An Asset Report
An asset report describes, on an asset-by-asset basis, how the red team views the asset and what they know about it.
Most penetration testing tools offer this capability. Core Impact, Metasploit Pro, and Cobalt Strike generate reports that capture all known credentials, password hashes, services, vulnerabilities, and compromises on a host-by-host basis.
These reports work and they are a great tool for a blue team to understand which systems are their weakest links.
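For a rough sense of the shape of this data, here’s a sketch of a per-host record. The structure and field names are my own convention; each of the tools above organizes this information in its own way.

```python
# Illustrative per-host record; not the schema of any tool mentioned above.
host_record = {
    "address": "172.16.1.10",
    "hostname": "mail01",
    "credentials": [
        {"user": "administrator", "password": "Fall2013!", "source": "hashdump"},
    ],
    "hashes": [
        # well-known LM:NTLM hash pair for an empty password, as a stand-in
        "aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0",
    ],
    "services": [
        {"port": 25, "name": "smtp"},
        {"port": 443, "name": "https"},
    ],
    "vulnerabilities": ["CVE-XXXX-YYYY"],  # placeholder identifier
    "compromises": [
        {"time": "2013-03-23T14:05:12Z", "how": "pass-the-hash via psexec"},
    ],
}
```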
A challenge with these reports is that a CCDC red team does not use a single system to conduct activity. Some red team members run attack tools locally; others connect to multiple team servers to conduct different aspects of the engagement. Each system has its own view of what happened during the event. I’m taking steps to manage this problem with Cobalt Strike. It’s possible to connect to multiple team servers and export a report that intelligently combines the point of view of each server into one picture.
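One way to think about that combining step is a merge of per-host records keyed by address. Here’s a rough sketch, assuming views shaped like the host_record above; this is my own outline, not how Cobalt Strike implements its reporting.

```python
def merge_host_views(*views):
    """Merge per-host records from multiple team servers into one picture.

    Each view maps a host address to a record like host_record above.
    List-valued fields are unioned; for scalar fields, the first server's
    value wins. A sketch of the idea, not any tool's actual logic.
    """
    merged = {}
    for view in views:
        for address, record in view.items():
            combined = merged.setdefault(address, {"address": address})
            for field, value in record.items():
                if isinstance(value, list):
                    existing = combined.setdefault(field, [])
                    existing.extend(v for v in value if v not in existing)
                else:
                    combined.setdefault(field, value)
    return merged

# e.g., merged = merge_host_views(server_a_hosts, server_b_hosts)
```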
I saw the value of the asset report at Western Regional CCDC. I spent the 2-3 hour block of networking time going over Cobalt Strike’s hosts report with different blue teams. Everyone wanted me to scroll through their hosts. In the case of the winning team, I didn’t have to say anything. The students looked at their report, drew their conclusions, and thanked me for the helpful feedback. The hosts report gave the blue teams something concrete to judge whether they were too complacent or too paranoid. Better still, this information helped them understand how close we were to making things much worse for them.
Whether this type of report comes from a penetration testing tool or one of the competition-specific solutions under development, I recommend that red teams provide an asset-by-asset report. The students I interacted with were able to digest this information quickly and use it to answer some of their open questions.
3) A Vulnerability Report
During a CCDC event, the red team uses only one or two exploits to get a toehold. We then leverage credentials for the rest of the event. Still, I’m often asked “which exploits did you use?” A report of the vulnerabilities we exploited will answer this question.
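If each timeline event optionally records the vulnerability behind an exploit, this report nearly falls out of the timeline data. A small sketch, assuming events stored as dictionaries with a hypothetical 'exploit_cve' key of my own invention:

```python
from collections import defaultdict

def vulnerability_report(events):
    """Tally which vulnerabilities were exploited and against which assets.

    Assumes each event is a dict with a 'blue_asset' key and, when an
    exploit was used, an 'exploit_cve' key -- my convention, not a tool's.
    """
    by_cve = defaultdict(set)
    for event in events:
        cve = event.get("exploit_cve")
        if cve:
            by_cve[cve].add(event["blue_asset"])
    return {cve: sorted(assets) for cve, assets in by_cve.items()}
```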
4) A Narrative
The item that completes the feedback is the narrative. The narrative is the red team member telling the story of what they did at a very high level. A short narrative goes a long way to bring life to the data the blue team will have to sift through later.
I believe telling stories is something CCDC red teams do well. At a typical CCDC debrief, red team members will share their favorite moments or wins during the event. Without context, these stories are anecdotal. Combined with the data above, they become actionable. Now the blue teams know what to look for when they analyze the log files.
The narrative provides blue teams with a starting point to understand what happened. The data we provide them will give them the opportunity to take that understanding to the next level.
5) Sizzle
During a security assessment, I’m not doing my job if I just explain what I did. It’s my job to ally with my blue counterparts and actively sell our client’s leadership on the steps that will improve their security posture. When communicating with non-technical folks, a little sizzle goes a long way. I like to record my screen during an engagement. At the end of the engagement, I cut the interesting events from the recording and create short videos to show the high points. Videos make it easier to understand the red perspective. If a video involves an event that both the red team and blue team experienced together, I find watching the video together creates a sense of a shared experience. This can go a long way towards building rapport (a key ingredient in building that alliance).
To record my screen, I use ScreenFlow for Mac OS X. 20 hours of screen recording (no audio) takes up a few gigabytes, nothing unreasonable.
In this post, I listed five artifacts we can provide blue teams to better tell the offensive story. I’ve pointed at examples where I could. Beware though, if actionable feedback were as easy as clicking a button to generate a report, this blog post wouldn’t exist. Reporting is challenging in an environment where 20 experts are actively participating in 10 engagements with multiple toolkits. As different parties build data collection platforms, I hope to see an equal effort towards data interpretation. These artifacts are some of the things I’d like to see come out of the data. What artifacts do you think would help?