Good Hunting

The personal blog of Chris Gerritz. I muse on malware, threat hunting, and security incidents. Occasionally more.

Approaches to Threat Hunting

Have you ever asked yourself:

"Am I breached?"

"Is someone monitoring my systems right now, logging my keystrokes, stealing my credit card information or intellectual property?"

How would you answer these questions in your organization?

Aha, I see what you just did there. You reached for that old, dusty antivirus scan button.

Wrong.

I’m sorry, but that’s not going to help you against modern threats. Not if you're an organization with anything of value on a computer, server, or database.

Unfortunately, most organizations still do not employ any post-compromise detection beyond that scan button; the statistics and news headlines demonstrate that fact every day. I could go into the failings of antivirus, but most of the marketing being pushed already does a pretty good job of that. The fact is: antivirus scans are no longer relevant beyond keeping the day-to-day riff-raff at bay. If scanning is your only strategy, there will be nothing you can do to find an attacker who wants to maintain access to your network. If you want to find a modern persistent compromise, you have to hunt for it.



Post-compromise Detection Strategies

Currently, the industry is attacking the problem of discovering persistent compromises in a few different ways, all influenced heavily by the original pioneers of hunting: the US and its allies’ militaries. Having been involved in enterprise-level hunting for 6+ years, I’ve had a lot of time to think about, and actually try, various ways to hunt better, faster, and more efficiently. This post isn’t meant to be a comprehensive list of those approaches just yet; rather, it frames some of the models and approaches for future research and discussion.


1. Data-centric Hunting

Data-centric hunting works by ingesting or querying the existing logs from a SIEM or log management solution and flagging malicious behaviors and events that require a closer look by an analyst. Traditional real-time detection solutions generally alert only on single packets, sessions, or events, so the data-centric hunt model can be very effective at looking at the data over time to glean context beyond what real-time sensors can provide. Here is a good presentation I saw from RSA that describes how this can be done using existing enterprise data sources.

In fact, an entire subcategory of security monitoring is emerging from this area called User and Entity Behavior Analytics (UEBA), as these techniques are very useful in identifying credential misuse by an insider threat, an area where coverage has been woefully inadequate in most enterprises.

Proponents: Ex-National Security Agency (NSA) practitioners.
The bulk of experience with this type of hunting lies with operators coming out of the NSA. My team at the Air Force did a bit of this (we called it Advanced Traffic Analysis), and we primarily used Splunk as our tool of choice; it proved effective for us but was not a model we could easily teach to lower-skilled analysts. More recently, data scientists have teamed with security researchers to take this further.

Advantages: The main advantage of this type of hunting is that it is mostly passive and non-invasive. It does not collect its own information; it merely applies analytics to existing stores of data and logs. Modern intrusion detection systems and EDR tools collect significantly more data and logs than they generate alerts on, so searching against this data set and correlating behavior over time can be very effective in identifying breaches that went unnoticed.

Disadvantages: This approach currently has an extremely high skill requirement, though that is improving. Since this model does not collect its own data, the existing sensors must also be mature enough to provide the requisite visibility into events within the enterprise. Log granularity, and having the right logs in the first place, is a primary concern. To put that in context, a single Windows server can produce anywhere from about 3 KB of logs per day (defaults, with most logging turned off) to roughly 1 GB per day (everything turned on, including command-line logging). Additionally, if the logs do not go back far enough, you may never catch a breach that happened before your logs rolled over.

Conclusion: I recommend this for mature enterprises only, as it requires collecting and storing vast amounts of security and IT events and logs. An enterprise that engages in data-centric hunting should have good visibility up and down the stack, such as gateway sensors, flows, and endpoint detection and response (EDR) sensors at the host level. Additionally, it needs long retention of that data (six months at least; at the verbose end of the estimate above, a thousand servers retained for six months is on the order of 180 TB of logs). David Bianco has a great post on evaluating the maturity of an organization and its readiness to start hunting like this.


2. Hunting on the Endpoint (DFIR-style)

Hunting on the endpoint uses host forensic information and artifacts to discover threats indicative of compromised systems. It’s really an evolution of Digital Forensics and Incident Response (DFIR), with the key difference being proactive application and scale. Some have called it "Proactive Incident Response," but I think that name needs to die.

Performing forensics and analysis on a compromised system is a well-researched topic. Applying those techniques proactively and scaling them across thousands of systems has been a goal of DFIR practitioners for a while now. This can be as simple as Indicator of Compromise (IOC) searches on the endpoint or hunting with Sysinternals tools. Recently, the approach has matured to the point where deeper forensic collection techniques, such as volatile memory analysis, can be applied at scale.

There is far less marketing and analyst discussion around this style, as direct endpoint access has historically been lacking in the enterprise. Google has some useful research and application in this area: proactive incident response and scaled digital forensic triage using the GRR framework.

Proponents: DFIR Community / USAF?
Pinning down the key proponents of this style of hunting is not straightforward, as many DFIR practitioners and organizations have contributed over the years. I may be a victim of bias, but my observations thus far point to the US Air Force as a common root of many current pioneers; endpoint hunting is the practice pushed by the Air Force’s hunt teams. I’m probably overstating the influence that organization had, so for now I'll just say it is my root and the root of many of the practitioners I admire in this area (e.g., Kevin Mandia).

My own company, Infocyte, is 100% focused on enabling a simplified endpoint hunting process using lessons learned from years of IR and hunting on the Air Force’s networks.

Advantages: This approach is independent of the security stack (i.e., it does not rely on existing sensors), as it typically involves collecting forensic data off the endpoint itself. There is a wide array of data points that can be checked that are not, and never will be, collected by monitoring tools (logs, EDR, etc.). This approach even works for less mature organizations that don’t have complete visibility or enough centralized retention of data and logs (i.e., most organizations). Even in organizations that do, the depth you can reach goes well beyond what logs or a real-time monitoring tool can produce.

Disadvantages: More invasive, as it collects forensic information from each host. For now, there aren’t many techniques to identify compromises of infrastructure devices and devices running non-standard operating systems (switches, printers, etc.).

Conclusion: The majority of attackers gain entry via an endpoint (workstation or server) and/or maintain persistence there, so it makes sense to check these devices using endpoint hunting. Ultimately, even if a compromise is found using a network or data-centric technique, a responder would still have to employ forensic techniques on the endpoint to confirm the compromise and respond.


3. Deception

Deception relies on honeypots, honey tokens, lures, and moving targets. The assumption with this approach is that if attackers get into the network, they will likely go for a critical system like a domain controller or database. By strategically placing and monitoring tokens (e.g., tagged files) and vulnerable copies of services within the network, an attacker’s presence can be discovered when they are lured into one. While honeypots have been around for a while, the military-inspired deception techniques infused by practitioners coming out of the Israeli military have vastly improved the technique, to the point where it is now considered a viable approach.

Proponents: Israeli military
The IDF has always been a strong advocate of this approach, and it shows in the number of new deception vendors coming out of Tel Aviv.
http://www.timesofisrael.com/new-israeli-cyber-security-technique-daze-and-confuse-hackers/

Advantages: There is no requirement for signatures or for keeping up with adversary techniques and entry vectors. A tripped token or lure sits at the far right of the kill chain, where you are detecting the effects of the adversary; that has nothing to do with their tools, techniques, or exploits.

Disadvantages: There is a significant infrastructure requirement to replicate critical services. Effectiveness is very difficult, if not impossible, to measure: how do you know whether your deceptions are convincing enough to fool the attacker, or whether the lures are attractive enough to draw them in? Sophisticated attackers will monitor the network and go to the services and databases already being used in production – especially if they suspect a deception.

Conclusion: The jury is still out for me on the long-term effectiveness of this approach. While I’ve heard from penetration testers who have been burned by a modern deception solution, I don’t see it surviving long for the majority of enterprises. Just as attackers evolved sandbox evasion, I predict that detecting and countering deception solutions will follow shortly behind any wide-scale adoption of the practice. Attackers are generally only fooled once; if the deceptions are not constantly changed, they become ineffective over time, which calls into question the return on investment for such an approach. Military? Sure. But commercial enterprise? I doubt it.


Runners-Up:

While there have been efforts to look at all stages of the attacker kill chain, many solutions are piecemeal. A couple come to mind that are worth mentioning, though they really only constitute techniques rather than a primary approach or model:

  • Network Analysis (Gateway) – Network analysis at the gateway is not terribly useful against existing compromises, considering that after a successful compromise, all C2 and exfiltration traffic will be encrypted (no, SSL interception will not save you; the traffic is usually doubly encrypted, at both the payload and the tunnel). The two techniques that do work are comparing outbound destination addresses against known Command and Control (C2) addresses and detecting exfiltration by looking for absurd ratios of outbound to inbound traffic (a rough sketch of the latter follows this list). Generally, your IDS should already be doing this, and it shouldn’t be your primary solution for post-compromise detection.

    • Command and Control (C2) Detection – A subcategory of the above. An example is running passive DNS monitoring and comparing queries against a blacklist. This is useful for low-grade threats with static C2 infrastructure. Some security vendors have even made this their primary strategy, without much success.

  • Network Analysis (Lateral) – Requires sensors placed throughout the network looking for lateral movement and internal reconnaissance. The biggest problem is the extreme expense and lack of ROI. Also, skilled attackers generally use standard administrative practices to move laterally, so it would be rare to catch someone with this approach.

    UPDATE [10/4]: I've been informed there are some cool things you can do with flow data. I haven't personally used flow data for hunting, so I can't speak to it.

  • Anomaly Detection – Anomaly detection starts by creating a baseline and alerting on deviations from it. I’ve seen it done at the network layer (never successfully) and on the endpoint (Linux file integrity monitoring is a successful example; a minimal sketch of that idea also follows this list). New unsupervised machine learning models are showing promise for this approach but so far require constant tuning and maintenance to keep the model relevant.

    • I wouldn’t rely on this as a primary strategy: networks are very dynamic, and this produces a significant amount of noise if you want full fidelity and low false negatives. Additionally, understand that attackers have a funny way of getting into your baselines (I’ve seen it more than once). On the other hand, it would work great in limited-function networks that are slow to change (like industrial control networks).


I welcome any discussion on identifying the various approaches and techniques to hunt. I even welcome people telling me I have no idea what I’m talking about when it comes to deception and other approaches I have not been heavily exposed to. This is a rapidly changing arena, but it’s good to abstract these things so they can be better considered by security teams. I personally prefer DFIR-style hunting on the endpoint for most of my use cases, but a good enterprise hunt team should consider layering multiple techniques and approaches.

Let me know your thoughts on Twitter or you might find me hanging out in the Demisto DFIR Slack community.

Chris Gerritz is a retired Air Force cyber warfare officer and pilot. He now hunts malware for a living as co-founder of Infocyte.