ITDR. TTP. AD. Even though I live and breathe security here at Quest, the vast number of acronyms in the cybersecurity world can be overwhelming.
In this post, we will unpack these terms and their definitions, and explore real-world examples of how to approach identity threat detection and response (ITDR). We’ll also examine related tactics, techniques and procedures within one of the most common forms of identity authentication: Active Directory.
Understanding identity threat detection and response
Identity threat detection and response (ITDR) is a relatively new discipline defined by Gartner, positioned as the evolution of network detection and response and endpoint detection and response into the world of identity security. I’ve written a general summary of ITDR here, but to briefly summarize:
- ITDR is a framework for detecting and responding to a breach of an organization’s identity infrastructure. Notably, while the overall approach to identity security should absolutely include preventive controls, ITDR assumes that there will be a breach and prioritizes how to find it and what to do when you do find it.
- ITDR as a security principle can be applied to any identity system, including Active Directory. Some analysts may use the term AD TDR which specifically references ITDR for Active Directory. In the context of this article, which references Active Directory security, I will just use the broader term ITDR.
How does an ITDR approach detect a breach?
Gartner categorizes detection into three main categories:
- Indicators of compromise (IOCs). Think of these as looking for a specific, known sign of a specific attack. For example: detecting the execution of Mimikatz on an endpoint.
- Anomalous behavior. This is all about looking at behavior or changes that deviate from an established baseline of “normal” (typically employing some kind of machine learning).
- Tactics, techniques and procedures (TTPs). We will explore TTPs in depth in the next section.
Gartner strongly recommends focusing on TTP-based detection of attacks on identity. They argue that while all three have potential value, IOCs are narrow and easy for attackers to route around, and anomalous behavior detection produces many false positives.
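To make the “narrow and easy to route around” point concrete, here is a deliberately minimal Python sketch of an IOC-style rule. The tool list and the event shape are invented for illustration; they are not taken from any product or feed.

```python
# Toy illustration (not a production detector): an IOC-style rule that flags
# process names matching known credential-dumping tools.
KNOWN_TOOL_NAMES = {"mimikatz.exe", "chalumeau.exe"}  # hypothetical blocklist

def ioc_match(process_event: dict) -> bool:
    """Return True if the process name matches a known offensive tool."""
    name = process_event.get("process_name", "").lower()
    return name in KNOWN_TOOL_NAMES

# Why this is narrow: the attacker simply renames the binary (or uses a
# different tool for the same technique) and the rule never fires.
print(ioc_match({"process_name": "Mimikatz.exe"}))  # True  -> detected
print(ioc_match({"process_name": "mm.exe"}))        # False -> trivially evaded
```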
Tactics, techniques and procedures (TTPs)
Security defense is akin to a high-stakes game of Rock, Paper, Scissors – a game where an attacker only needs to be right once, while defenders need to be right every time. To that end, it is important that defenders make being “right” as painful as possible to attackers. The tactics, techniques and procedures (TTPs) of attackers are how they attempt to win this game.
To defend against an attacker’s TTPs, defenders need to understand them. But it is important to understand TTPs “at the right level of abstraction.” That is, understanding the common elements of TTPs that do not often change from attack to attack.
For example, there are several techniques to extract credentials from the memory of a Windows server depending on the version of Windows and security hardening that is in place. But despite the means used to dump credentials, the abstract technique is the same: escalate privilege through lateral movement by dumping credentials with greater privilege out of memory.
If defenders understand the abstract nature of the TTP, it will make the attacker’s job considerably harder. To continue the previous example, rather than looking for specific signs of Mimikatz execution, or signs of Chalumeau execution, and so on, defenders should be on the lookout for computers where logins occur from both highly privileged and less privileged users. Don’t focus solely on the cat-and-mouse game of detecting obfuscated Mimikatz execution; focus instead on the broader TTP of OS Credential Dumping.
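As a rough sketch of what that broader detection could look like, the Python below flags computers that have seen logons from both privileged and ordinary accounts. The event shape and the privileged-account list are placeholders; in practice you would feed this from Windows logon events (such as event ID 4624) collected by your SIEM.

```python
# Minimal sketch of the TTP-level hypothesis: flag computers where both highly
# privileged and ordinary accounts have logged on, since those are the machines
# where credential dumping pays off for an attacker.

PRIVILEGED_ACCOUNTS = {"CORP\\da-jane", "CORP\\da-bob"}  # hypothetical Tier 0 accounts

def hosts_with_mixed_privilege_logons(logon_events):
    """Return hosts that saw logons from both privileged and regular accounts."""
    privileged_hosts, regular_hosts = set(), set()
    for event in logon_events:
        host, account = event["computer"], event["account"]
        (privileged_hosts if account in PRIVILEGED_ACCOUNTS else regular_hosts).add(host)
    return privileged_hosts & regular_hosts

events = [
    {"computer": "WKS-042", "account": "CORP\\alice"},
    {"computer": "WKS-042", "account": "CORP\\da-jane"},  # admin logon on a workstation
    {"computer": "DC01",    "account": "CORP\\da-bob"},
]
print(hosts_with_mixed_privilege_logons(events))  # {'WKS-042'}
```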
It is a defender’s job to form a hypothesis on how to detect a TTP, a hypothesis that is not so broad as to be useless, yet not so specific that an attacker can easily slip past it. Fortunately, the MITRE ATT&CK framework is a pre-defined, industry-standard knowledge base of TTPs based on real-world observation. This body of research gives defenders a solid foundation to begin forming their detection hypotheses.
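If you want to pull that knowledge base into your own tooling, one simple starting point is to filter the enterprise ATT&CK STIX bundle (downloadable from MITRE’s GitHub repositories) by tactic. The sketch below assumes a locally saved copy of the bundle, and the field names reflect the bundle structure at the time of writing:

```python
import json

# Sketch: list ATT&CK techniques for a given tactic from a locally downloaded
# copy of the enterprise ATT&CK STIX bundle. The file path is a placeholder,
# and the STIX field names may change between bundle releases.

def techniques_for_tactic(bundle_path: str, tactic: str):
    with open(bundle_path) as f:
        bundle = json.load(f)
    for obj in bundle.get("objects", []):
        if obj.get("type") != "attack-pattern" or obj.get("revoked"):
            continue
        phases = {p.get("phase_name") for p in obj.get("kill_chain_phases", [])}
        if tactic not in phases:
            continue
        ext_id = next((r["external_id"] for r in obj.get("external_references", [])
                       if r.get("source_name") == "mitre-attack"), "?")
        yield ext_id, obj.get("name")

# Example: build a starting list of credential-access techniques to hypothesize against.
for tid, name in techniques_for_tactic("enterprise-attack.json", "credential-access"):
    print(tid, name)
```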
Understanding TTPs: Practical examples with Active Directory
I highly recommend reading the above links for far deeper dives into ITDR and hunting threats based on TTPs. While ITDR and TTP-based identity threat hunting are about far more than Active Directory, it remains one of the most widely deployed identity systems in the world. Over 95% of enterprises use Active Directory and millions use Entra ID as their authentication and authorization system of choice, so I will walk through a few example TTPs using the most-adopted identity system, Active Directory.
I suspect that if you are reading this, you have more than a passing familiarity with Active Directory, but just to cover our bases: Active Directory is an authentication and authorization directory released by Microsoft with Windows 2000 Server in early 2000. It is based on LDAP, but with several additions: tight Windows client integration, Kerberos authentication and Group Policy systems management. Active Directory is undoubtedly the most prominent on-premises identity platform. Because of its “maturity,” it tends to get picked on by attackers, as it was designed in a more innocent age, security-wise.
Let us look at two examples of MITRE ATT&CK TTPs as they apply to Active Directory. We will be developing two hypotheses per TTP: one that I think misses the mark a little, and one that is more on target – at least in my opinion.
Hypothesis #1: Privilege Escalation – T1098 Account Manipulation
Hypothesis: An attacker will manipulate an account that they have control over to become a member of the Administrators Active Directory group.
As a defender, we seek to understand the TTPs that an attacker might use. The above hypothesis is not bad. We know that attackers have a variety of techniques to manipulate accounts. We also know that threat groups like Magic Hound and malware like ServHelper add backdoor users to the Administrators group.
But this is a perfect example of needing to develop a broader hypothesis that an attacker cannot sidestep with a simple change. We could configure an alert on new members of the Administrators group, but that is not the only Active Directory group that can grant privilege over an Active Directory domain. It would be simple to modify Magic Hound’s tooling to add the backdoor user to Domain Admins, Enterprise Admins, etc. Even worse, many organizations have custom groups that can exert control over Active Directory and thus be leveraged by an attacker.
In other words, our hypothesis is too specific and can be routed around too easily.
Let us try our hypothesis again:
Hypothesis: An attacker will manipulate an account that they have control over to become a member of a Tier 0 group.
This is more like it. In this hypothesis, we are classifying all groups that could exert control over Active Directory itself into a “Tier 0” category. This includes the usual suspects (Administrators, Domain Admins, Enterprise Admins), but other groups as well: groups delegated control over core Group Policy Objects, the Domain Controllers group and nested memberships in any of these.
By detecting and alerting on changes to this entire class of groups, we keep this TTP at the right level of abstraction.
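Here is a minimal sketch of what monitoring that class of groups might look like, using Python and the ldap3 library. The server name, credentials, group DNs and baseline file are all placeholders for your own environment, and the Tier 0 list should be extended with your own delegated and custom groups:

```python
import json
from ldap3 import Server, Connection, SUBTREE  # pip install ldap3

# Sketch: snapshot the *effective* (nested) membership of a configurable list
# of Tier 0 groups and alert on anything not in a previously saved baseline.

TIER0_GROUP_DNS = [
    "CN=Administrators,CN=Builtin,DC=corp,DC=example,DC=com",
    "CN=Domain Admins,CN=Users,DC=corp,DC=example,DC=com",
    "CN=Enterprise Admins,CN=Users,DC=corp,DC=example,DC=com",
    # ...plus your own delegated / custom Tier 0 groups
]

def tier0_members(conn: Connection, base_dn: str) -> set[str]:
    members = set()
    for group_dn in TIER0_GROUP_DNS:
        # 1.2.840.113556.1.4.1941 is AD's transitive (in-chain) match rule,
        # so nested group membership is resolved for us by the domain controller.
        conn.search(base_dn,
                    f"(memberOf:1.2.840.113556.1.4.1941:={group_dn})",
                    search_scope=SUBTREE, attributes=["sAMAccountName"])
        members.update(str(entry.sAMAccountName) for entry in conn.entries)
    return members

if __name__ == "__main__":
    conn = Connection(Server("dc01.corp.example.com"),
                      user="CORP\\svc-audit", password="...", auto_bind=True)
    current = tier0_members(conn, "DC=corp,DC=example,DC=com")
    baseline = set(json.load(open("tier0_baseline.json")))  # previously saved snapshot
    for account in sorted(current - baseline):
        print(f"ALERT: new effective Tier 0 member: {account}")
```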
Hypothesis #2: Defense Evasion – T1484 Domain Policy Modification
Hypothesis: An attacker will manipulate Group Policy to evade defenses or escalate privileges.
On the surface, this is another solid hypothesis. Group Policy is a systems management technology that has been integrated into Active Directory from its very inception and is a part of every Active Directory installation. It can be used to do nearly anything on an Active Directory-joined computer: from setting the Windows desktop background to modifying the system audit log policy.
We also know that attackers love to use Group Policy for both defense evasion and privilege escalation. For example, Group Policy abuse has figured prominently in attacks by Mango Sandstorm.
The trouble with our hypothesis is that most organizations make a great many benign Group Policy changes on a regular basis. If we simply look for Group Policy manipulations of any kind, we will have far too many false positives to detect anything useful.
But if we constrain the scope of our hypothesis a little:
Hypothesis: An attacker will manipulate Group Policy to create scheduled tasks, install software or run scripts.
Group Policy has a great many options, but most are not going to be what an attacker is looking for. Security researchers indicate that scheduled task abuse is one of the most popular techniques to establish persistence on a Windows computer, and Group Policy provides a central means to configure scheduled tasks. And while perhaps not as popular, it isn’t hard to identify a handful of other policy settings that could cause serious trouble as well. Looking for these types of changes to domain policies allows us to detect an entire range of attacks without being inundated with a lot of false positives.
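As one way to operationalize this narrower hypothesis, the sketch below watches SYSVOL for recent changes to the files that commonly implement Group Policy Preferences scheduled tasks and logon/startup scripts. The SYSVOL path and the 24-hour window are placeholders, and you should verify the file names against how policies are actually stored in your environment:

```python
import time
from pathlib import Path

# Sketch: flag recently modified GPO files associated with scheduled tasks and
# scripts, rather than alerting on every Group Policy change.

SYSVOL_POLICIES = Path(r"\\corp.example.com\SYSVOL\corp.example.com\Policies")  # placeholder
FILES_OF_INTEREST = {"scheduledtasks.xml", "scripts.ini", "psscripts.ini"}
WINDOW_SECONDS = 24 * 60 * 60  # look back one day

def recently_changed_gpo_files(root: Path, window: int = WINDOW_SECONDS):
    cutoff = time.time() - window
    for path in root.rglob("*"):
        if (path.is_file()
                and path.name.lower() in FILES_OF_INTEREST
                and path.stat().st_mtime >= cutoff):
            yield path

for path in recently_changed_gpo_files(SYSVOL_POLICIES):
    print(f"REVIEW: recently modified policy file: {path}")
```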
In summary
Understanding TTPs, and applying that understanding within ITDR, is a key component of effectively detecting and responding to an identity-based breach. Defenders need to pivot their focus from looking for highly specific indicators of attacker behavior to looking for what the attacker is trying to accomplish. What do they want to do with Active Directory, for example? Establish persistence, escalate privileges, evade defenses? We then develop hypotheses on the TTPs attackers might use to accomplish those goals and look for the aspects of those TTPs least likely to change from tool to tool and technique to technique. Doing this will make your attacker’s job much harder.