Red + Blue, How Purple Are You?

  1. Red + Blue, How Purple Are You? Identifying Gaps in The Spectrum of Security © 2023 Specter Ops, Inc.
  2. What is Purple Teaming? • A collaborative approach to cybersecurity that brings together red and blue teams to test and improve an organization’s security posture. • Coursera • In true purple fashion, the goal of the course is to educate students on how adversarial techniques can be emulated (manual and automated) and detected (use cases/rules and anomaly-based detection). • SANS SEC699 • The goal of purple teaming is to bring the blue and red team functions together. • ATTACKIQ • Meant to better understand our organization’s ability to detect and respond to real-world attacks. • GitLab
  3. What is Purple Teaming? • A function designed to enhance the information sharing between – and the ultimate effectiveness of – an organization’s Red and Blue Teams, with the ultimate purpose of improving the organization’s defenses. • Daniel Miessler • A blue team becomes “purple” when it emulates the adversary as a means of self-evaluation. • Jonathan Reiber • [Purple Teaming] refers to multiple cybersecurity teams working together to improve an organization’s security posture… • Xena Olsen • Actively pinpoint weaknesses in protection and detection capabilities. • TIBER-EU • Enables defenders to gain better understanding of adversary TTP. • Cristian Pescariu (Pluralsight) https://github.com/ch33r10/EnterprisePurpleTeaming
  4. What is Purple Teaming? • Collaboration between Offense (Red) and Defense (Blue). • Increase familiarity with or understanding of adversary TTP. • Self-evaluation of existing security posture. • Improving an organization’s security posture or defenses. • Preventative Controls • Detective Controls • Response Procedures
  5. ATT&CK Heat Maps https://cyberwardog.blogspot.com/2017/07/how-hot-is-your-hunt-team.html
  6. Problem Statement • Do our security controls achieve their goals? • If our goal is to detect processes named “mimikatz.exe”, do we detect every time a process is named “mimikatz.exe”? • Are the goals we set for our security controls properly scoped? • The adversary dictates the proper scope. • Detection is a prediction of which variation we expect the attacker to use. • If the answer to either of the previous questions is “no”, why? • Signature vs. Behavior • Vendor rules vs. Organic rules
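The signature-vs-behavior distinction on this slide can be made concrete with a toy rule. The sketch below (all event fields and names are hypothetical, not any real sensor's schema) shows how a process-name signature fires only on the literal string, so a renamed binary slips through, while a rule keyed on the action itself does not:

```python
# Toy process-launch events as a sensor might report them (hypothetical schema).
events = [
    {"process": "mimikatz.exe", "opened_lsass": True},
    {"process": "totally_legit.exe", "opened_lsass": True},   # renamed copy
    {"process": "notepad.exe", "opened_lsass": False},
]

def name_signature(event):
    # Signature: fires only on the literal file name.
    return event["process"].lower() == "mimikatz.exe"

def behavior_rule(event):
    # Behavior: fires on the action (opening LSASS), regardless of name.
    return event["opened_lsass"]

sig_hits = [e["process"] for e in events if name_signature(e)]
beh_hits = [e["process"] for e in events if behavior_rule(e)]
print(sig_hits)  # ['mimikatz.exe'] -- misses the renamed copy
print(beh_hits)  # ['mimikatz.exe', 'totally_legit.exe']
```

The signature meets its stated goal (every process named "mimikatz.exe" is caught), which is exactly why the scoping question matters: the adversary, not the rule author, decides whether that goal was the right one.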
  7. Existing Solutions How has the industry been addressing this problem?
  8. Existing Solutions: Adversary Emulation Plans • Protocol • Execute a coherent attack path to achieve some objective. • Base selection of procedures on explicitly observed examples. • Benefits • Test a broad range of techniques. • Simulate a “real” attack path. • Limitations • The attack path must be coherent, so it often executes only one variation of a technique. • Procedural variations are limited to known or observed variations. • Past observations are not necessarily predictive of future actions. • Examples • Red Team Exercises • MITRE ATT&CK EDR Evaluations
  9. [Figure: breadth vs. depth of coverage, contrasting the number of techniques tested against the number of variations tested per technique]
  10. PowerShell to C# • Microsoft addressed PowerShell malware, as a class, so effectively that using PowerShell became untenable for many attackers. • SharpDump is one example where the code from Out-Minidump was directly ported from PowerShell to C# in order to bypass controls. • Demonstrated that the tool or modality had been detected, not the behavior. • It is not possible to draw broad conclusions (we detect LSASS Credential Dumping) from a single test case. • Efficacy evaluation must include multiple variations of a behavior.
  11. Existing Solutions: Dynamic Test Cases • Protocol • Create test cases that represent adversary tradecraft. • Execute test cases in the environment. • Observe the result of security controls. • Benefits • Allows for a more complete understanding of control efficacy. • Execution is simplified and streamlined. • Provides a repeatable process, especially with detective controls because they are static. • Limitations • Efficacy is dependent on the representative nature of selected test cases. • Finite resources are used to ensure depth, which causes a loss in breadth. • Examples • Atomic Red Team • Breach and Attack Simulation Products
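Frameworks like Atomic Red Team organize tests as small declarative records that a harness executes and reports on. A minimal sketch of that protocol follows; the record structure and field names are illustrative only, not Atomic Red Team's actual schema, and the command is a harmless stand-in for a real test payload:

```python
import subprocess

# Hypothetical test-case records: technique ID, description, and a command
# whose observable side effects the blue team should detect.
test_cases = [
    {"technique": "T1059", "name": "echo marker", "command": ["echo", "purple-test"]},
]

def run_test(case):
    # Execute the test command and capture its result for the after-action report.
    result = subprocess.run(case["command"], capture_output=True, text=True)
    return {
        "technique": case["technique"],
        "name": case["name"],
        "output": result.stdout.strip(),
        "returncode": result.returncode,
    }

results = [run_test(c) for c in test_cases]
print(results[0]["output"])  # purple-test
```

The repeatability benefit on the slide comes from exactly this shape: the same static record executes identically every run, so changes in detection outcomes can be attributed to the controls rather than to the test.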
  12. Similarity • What does it mean for two malware samples to be the SAME? • How can we measure similarity? • Cryptographic Hashes (MD5, SHA1, SHA256) • Only measure ABSOLUTE similarity • There’s no way to determine if one bit changed or if the entire sample is different. • Piecewise and Fuzzy Hashing [1] • Generate a traditional hash, but also generate hash values for segments of files. • This assumes that changes will be localized to certain locations (change mimikatz to mimidogz). • Imphash [2] • Idea that Portable Executables that import the same API functions are probably similar in function despite changes to less significant bits. • Still lacks the ability to distinguish between small and large changes. 1. https://www.sciencedirect.com/science/article/pii/S1742287606000764?via%3Dihub 2. https://www.mandiant.com/resources/blog/tracking-malware-import-hashing
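The piecewise idea can be sketched in a few lines. This is a deliberate simplification: real fuzzy hashing (e.g. ssdeep, as described in reference 1) uses rolling-hash, content-defined segment boundaries rather than fixed-size blocks, but the core intuition is the same: a localized change alters only one segment hash, so similarity can be graded rather than absolute:

```python
import hashlib

def piecewise_hashes(data: bytes, block_size: int = 8):
    # Hash each fixed-size segment separately (simplified; real fuzzy hashing
    # uses content-defined boundaries so insertions don't shift every block).
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def similarity(a: bytes, b: bytes) -> float:
    ha, hb = piecewise_hashes(a), piecewise_hashes(b)
    matches = sum(1 for x, y in zip(ha, hb) if x == y)
    return matches / max(len(ha), len(hb))

original = b"mimikatz sekurlsa::logonpasswords module x"
renamed  = b"mimidogz sekurlsa::logonpasswords module x"

# The whole-file cryptographic hashes simply differ (absolute similarity),
# while the piecewise comparison shows only one of six segments changed.
whole_match = hashlib.sha256(original).hexdigest() == hashlib.sha256(renamed).hexdigest()
print(whole_match)                    # False
print(similarity(original, renamed))  # 5 of 6 segments match
```

This is exactly the "katz to dogz" case from the slide: the cryptographic hash reports the samples as entirely different, while the piecewise comparison preserves the fact that almost everything is unchanged.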
  13. Electromagnetic Spectrum
  14. Defense Evasion Spectrum
  15. Prototypical or Traditional Variation
  16. Minimize Difference Between Tests
  17. Maximize Difference Between Tests
  18. Quantity vs. Quality • Phenomena can be measured in two ways: • Quantitative measurements • Number of test cases • Qualitative measurements • Likelihood of Occurrence • Uniqueness relative to existing test cases • It appears that Quantitative measures are being used as a PROXY for Qualitative measures. • If we increase the number of test cases, then surely we will increase coverage. • Unfortunately, there are so many possible variations that random selection is unlikely to provide a valid sampling.
  19. Functional Variations • Software is extraordinarily complicated, especially when it is being designed with evasion in mind. • Most Attack Techniques can be implemented in MANY different ways: • Process Injection • 4.4 Million Functional Variations Across 8 Procedures • Access Token Manipulation • 2.5 Million Functional Variations Across 22 Procedures • LSASS Credential Dumping • 33,000 Functional Variations Across 4 Procedures
  20. Reality of Current Solutions [Figure: test cases 1–5 plotted on the defense evasion spectrum]
  21. Sampling • Selection of a subset of individuals from within a statistical population to estimate characteristics of the whole population. • Examples • Political Polling • Pharmaceutical Testing • Problems • Selection Bias • Sample Size Issues
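The sample-size problem can be quantified with a back-of-the-envelope calculation. If one operational chain accounts for only a tiny fraction of all functional variations, a uniformly random sample must be very large before it is likely to include that chain at all. The numbers below are illustrative, not real variation counts:

```python
# Suppose one operational chain (say, an Atom Bombing-style approach) accounts
# for only 0.1% of all functional variations of a technique. The probability
# that a uniform random sample of n variations includes it at least once is
# 1 - (1 - share)^n. Illustrative share, not a measured figure.
rare_share = 0.001

for n in (10, 100, 1000):
    p_hit = 1 - (1 - rare_share) ** n
    print(n, round(p_hit, 3))
# 10   -> ~0.01  : a 10-test sample almost certainly misses the rare chain
# 100  -> ~0.095
# 1000 -> ~0.632 : even 1,000 random tests miss it more than a third of the time
```

This is the sense in which quantity fails as a proxy for quality: the rare chain may be precisely the one an adversary under selective pressure migrates to, so it must be selected deliberately rather than left to chance.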
  22. Defense Evasion Spectrum [Figure: test cases 1–5 placed across the spectrum]
  23. Our Approach How we select test cases for Purple Team testing at SpecterOps
  24. Golden Tickets
  25. Golden Tickets
  26. Classic Shellcode Injection https://github.com/DGRonpa/Process_Injection/blob/main/Classical_Injeciton/Shellcode_Injection.cpp#L40-L48
  27. Classic Shellcode Injection https://github.com/DGRonpa/Process_Injection/blob/main/Classical_Injeciton/Shellcode_Injection.cpp#L40-L48
  28. Function Call Stack – kernel32!OpenProcess
  29. Function Chain + Call Stacks
  30. Function Call Stacks
  31. Win32 API Functional Variation
  32. Direct Syscall Functional Variation https://github.com/badBounty/directInjectorPOC/blob/master/directInjectorPOC/Program.cs
  33. 1st Hypothetical Functional Variation
  34. 2nd Hypothetical Functional Variation
  35. Functional Variations • 900 total “functional” or “inter-procedural” variations are possible given the call stacks generated from our initial sample. • 5 x 6 x 5 x 6 = 900 • Relatively insignificant difference. • Can bypass some naïve EDR sensors. • Kernel telemetry is robust. • Like moving the small weight on the physician’s scale. • Not the first thing one should worry about.
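The 5 x 6 x 5 x 6 arithmetic is a Cartesian product over the interchangeable functions at each step of the chain. A sketch of that enumeration follows; the function names are illustrative placeholders (only the per-operation counts mirror the slide), not the actual call stacks from the sample:

```python
from itertools import product

# Hypothetical interchangeable entry points per operation in classic shellcode
# injection. Counts mirror the slide's 5 x 6 x 5 x 6; names are placeholders.
choices = {
    "ProcessOpen":    [f"open_variant_{i}" for i in range(5)],
    "MemoryAllocate": [f"alloc_variant_{i}" for i in range(6)],
    "ProcessWrite":   [f"write_variant_{i}" for i in range(5)],
    "ThreadCreate":   [f"thread_variant_{i}" for i in range(6)],
}

# Every combination of one choice per operation is a distinct functional variation.
variations = list(product(*choices.values()))
print(len(variations))  # 900
```

Each of the 900 tuples is a different piece of code an attacker could ship, yet every one performs the same four operations, which is why the next slides argue for categorizing at the operation level.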
  36. Operations Categorize Function Call Stacks • Each function in a Call Stack can be transparently exchanged for any other function. • From a developer point of view, these small changes make little difference. • From an attacker point of view, these small changes can evade naïve EDR products. • From a defender point of view, these small changes can be ignored by conceiving of the attack via Operations instead of Functions. 
  37. Operational Chain – Classic Shellcode Injection
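The categorization the previous slide describes can be sketched as a lookup from concrete functions to the operation each ultimately performs. The mapping below is a simplified illustration (it covers only a handful of the functions in the real call stacks), but it shows how two different functional variations collapse into one operational chain:

```python
# Simplified mapping from concrete API functions to the operation each
# ultimately performs -- the level at which EDR sensors report events.
OPERATION_OF = {
    "OpenProcess": "ProcessOpen",            "NtOpenProcess": "ProcessOpen",
    "VirtualAllocEx": "MemoryAllocate",      "NtAllocateVirtualMemory": "MemoryAllocate",
    "WriteProcessMemory": "ProcessWrite",    "NtWriteVirtualMemory": "ProcessWrite",
    "CreateRemoteThread": "ThreadCreate",    "NtCreateThreadEx": "ThreadCreate",
}

def operation_chain(function_chain):
    # Project a concrete function chain down to its operational chain.
    return tuple(OPERATION_OF[f] for f in function_chain)

win32   = ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"]
syscall = ["NtOpenProcess", "NtAllocateVirtualMemory", "NtWriteVirtualMemory", "NtCreateThreadEx"]

# Two different functional variations, one identical operational chain:
print(operation_chain(win32) == operation_chain(syscall))  # True
```

A detection engineer working at the operation level can therefore ignore the Win32-vs-syscall distinction entirely: both variations produce the same chain of observable events.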
  38. EDRs Perceive Operations • OpenProcessApiCall • NtAllocateVirtualMemoryRemoteApiCall • WriteProcessMemoryApiCall • CreateRemoteThreadApiCall
  39. “Align your conception to your perception.” We should think about attack techniques (Process Injection) at the level of resolution at which we perceive them. We don’t SEE Process Injection, we SEE Thread Creations, Process Writes, and Process Opens.
  40. Why Change It? • Security Controls! • This is analogous to natural selection insofar as a control that prevents or endangers the operation WILL act as a selective pressure. • The more robust the control, the greater the change. • [Small] Signatures looking for the string “mimikatz” result in ‘s/katz/dogz/g’. • [Medium] Controls focused on specific API functions (CreateRemoteThread) result in non-standard API functions being used (syscall!NtCreateThreadEx). • [Large] Controls focused on the Execution method (Thread Creation) result in novel Execution methods (Asynchronous Procedure Calls). • Attackers often implement the “Minimal Stimulus Necessary” principle.
  41. Example 1: Classic Shellcode Injection https://github.com/DGRonpa/Process_Injection/blob/main/Classical_Injeciton/Shellcode_Injection.cpp#L40-L48
  42. Example 2: Asynchronous Procedure Call https://github.com/DGRonpa/Process_Injection/blob/main/APC_Injection/APC_Injection_Shellcode.cpp
  43. Example 2: Asynchronous Procedure Call https://github.com/DGRonpa/Process_Injection/blob/main/APC_Injection/APC_Injection_Shellcode.cpp
  44. Example 3: File Mapping https://github.com/DGRonpa/Process_Injection/blob/main/Mapping_Injection/Mapping_Injection_CreateRemoteThread.cpp
  45. Example 3: File Mapping https://github.com/DGRonpa/Process_Injection/blob/main/Mapping_Injection/Mapping_Injection_CreateRemoteThread.cpp
  46. Example 4: Atom Bombing https://github.com/BreakingMalwareResearch/atom-bombing
  47. Example 4: Atom Bombing https://github.com/BreakingMalwareResearch/atom-bombing
  48. Example 4: Atom Bombing https://github.com/BreakingMalwareResearch/atom-bombing
  49. Example 5: Thread Execution Hijacking https://github.com/DGRonpa/Process_Injection/blob/main/Thread_Hijack/Thread_Hijack.cpp
  50. Example 5: Thread Execution Hijacking https://github.com/DGRonpa/Process_Injection/blob/main/Thread_Hijack/Thread_Hijack.cpp
  51. Process Injection: Operation Chains https://github.com/DGRonpa/Process_Injection/blob/main/Thread_Hijack/Thread_Hijack.cpp
  52. Plotting Operation Chains [Figure: the five operation chains plotted on the spectrum]
  53. Operational Variations • There are relatively few “operational” variations for Process Injection. • We’ve looked at 5 in this webinar. • They represent major differences. • Potentially remove the operation that your detective or preventative control was based on. • Like moving the large weight on the physician’s scale.
  54. SpecterOps Purple Team Resources • On Detection Blog Series • https://posts.specterops.io/on-detection/home • Malware Morphology (4-hour Workshop) • First offering at NorthSec in Montreal (May 18-19, 2023) • Adversary Tactics: Tradecraft Analysis (4-Day Training) • Next public offering at Black Hat USA (August 5-8, 2023) • Purple Team Service • Detection and Response Program Development Service
  56. Deliberate Test Selection • Ability to sample variations is our key technical differentiator. • Deliberate test case selection using qualitative measurements. • Does this sample call the same functions as a previous sample? (redundant) • Does this sample implement a different Operation Chain? (novel) • Triangulate estimation of coverage. • Begin with maximal differentiation. • Fine-tune with smaller and smaller differences over time.
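The two triage questions on this slide (redundant vs. novel) can be expressed as simple set and tuple comparisons. The sketch below is a toy version of that triage under assumed criteria; the function names, chains, and the "incremental" middle category are illustrative, not SpecterOps's actual selection methodology:

```python
def classify(candidate_chain, executed_chains, candidate_functions, executed_functions):
    """Toy triage for a candidate test case (criteria simplified from the slide)."""
    if candidate_functions <= executed_functions:
        return "redundant"    # calls only functions previous tests already exercised
    if tuple(candidate_chain) not in executed_chains:
        return "novel"        # implements an operation chain we have not yet run
    return "incremental"      # same chain as before, but some new functions

# State after running one classic-injection test:
executed_chains = {("ProcessOpen", "MemoryAllocate", "ProcessWrite", "ThreadCreate")}
executed_functions = {"OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}

# Candidate: APC injection -- new functions AND a different final operation.
verdict = classify(
    ("ProcessOpen", "MemoryAllocate", "ProcessWrite", "QueueAPC"),
    executed_chains,
    {"OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "QueueUserAPC"},
    executed_functions,
)
print(verdict)  # novel
```

Starting with maximally different (novel) candidates and only later admitting incremental ones is how the "begin with maximal differentiation, fine-tune over time" ordering falls out of these two checks.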
  57. Security Control Evaluation • Provide feedback on your controls along three measurements: • Prevention • Detection • Perception (Telemetry) • Controls are evaluated independently. • Using 1 rule to detect 5 tests is robust. • Using 5 rules to detect 5 tests is brittle. • Goal is to infer or predict the probability that the set of controls would detect/prevent the attacker’s arbitrary selection of a variation.
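The robust-vs-brittle contrast can be modeled as a coverage matrix of rules against tests. In the toy data below (entirely illustrative), both rule sets look identical against the five executed tests; the difference only appears when a sixth, unseen variation arrives, which here is modeled by asserting that the behavior rule generalizes while the per-test signatures do not:

```python
# Rule name -> set of executed test cases (1..5) it fires on. Illustrative data.
robust  = {"behavior_rule": {1, 2, 3, 4, 5}}        # 1 rule covers 5 tests
brittle = {f"sig_{i}": {i} for i in range(1, 6)}    # 5 rules cover 5 tests

def detected(rules):
    # Union of every rule's coverage: the set of tests the control set catches.
    return set().union(*rules.values())

# Scored only against the executed tests, the two sets are indistinguishable:
print(detected(robust) == detected(brittle) == {1, 2, 3, 4, 5})  # True

# A 6th, unseen variation shares the behavior but matches no per-test signature
# (modeled explicitly here as an assumption, not derived):
fires_on_unseen = {"behavior_rule": True, **{f"sig_{i}": False for i in range(1, 6)}}
print(any(fires_on_unseen[r] for r in robust))   # True
print(any(fires_on_unseen[r] for r in brittle))  # False
```

This is why the slide evaluates controls independently rather than just counting detected tests: the test-pass rate alone cannot distinguish one generalizing rule from five memorized ones.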
  58. Education • We believe that you can’t detect (or prevent) what you don’t understand. • Golden Ticket Epiphany • In addition to the evaluation results, we teach clients about the technique we are testing: • How it works. • Why attackers use it. • The types of variations that exist (big and small). • Common issues we’ve seen with detection attempts. • Prevention opportunities. • Increase the resolution with which clients view attack techniques. • Help to understand the correct level of abstraction.
  59. Modeling to Improve Solutions • Understand a control’s resilience by comparing test results. • Identify the best data source for detection engineering. • Help to understand the distinction between uni- or multi-chain detections. • Provide guidance on the optimal scope of controls.
  61. www.specterops.io @specterops info@specterops.io

Presenter notes

  1. This slide helps provide evidence for the claim that organizations and/or detection engineers are interested in understanding how much coverage their security controls (preventative and detective) provide, specifically focused on organic controls as opposed to vendor-provided controls. This is an attempt to measure how well each technique is covered. An alternative to this is ATT&CK Navigator. We could also potentially discuss my work with Endgame on this topic back in the Air Force.
  2. Mismatch of evaluation/treatment protocol, similar to the “A/V is dead” conversations.
  3. Telemetry per operation: Process Open: Sysmon Event 10, Windows Event 4656 (SACL), MDE OpenProcessApiCall ActionType. Memory Allocate: MDE NtAllocateVirtualMemoryApiCall, MDE NtAllocateVirtualMemoryRemoteApiCall. Process Write: MDE WriteProcessMemoryApiCall. Thread Create: MDE CreateRemoteThreadApiCall.
  4. Same telemetry mapping as note 3.