4. Objective
• Introduction
• Infrastructure Automation Tools
• Set up your own lab (DetectionLab)
• Atomic Red Team
• Metrics
• MITRE ATT&CK Framework Heatmap
• SIGMA
• Suggestions and Continuous Improvement
5. Introduction
“COI chairman Richard Magnus also said in his closing remarks that cyberattacks are a reality today, and APTs are constantly evolving in their sophistication. This is why organisations need to adopt an ‘assume breached’ mindset, and not only have a proactive defence strategy but also security systems and solutions that enable them to detect and respond to cyber threats early. In turn, these systems and solutions should be complemented with the right people and processes.”
Source: https://www.channelnewsasia.com/news/singapore/singhealth-coi-ends-cybersecurity-recommendations-10985254
8. Question
• What is actually being detected?
• What are the gaps in detection?
• What should be prioritised?
9. MITRE ATT&CK Framework
• https://attack.mitre.org/
• Knowledge base of adversary tactics, techniques and procedures based on real-world observations
• Tactics – Adversary’s Technical Objective
• Techniques – How an Adversary achieves those objectives
• Procedures – Specific Implementations of the Technique
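The tactic → technique → procedure hierarchy can be sketched as a simple nested structure. A minimal Python illustration follows; the technique IDs are real ATT&CK IDs, but the procedure command lines are hypothetical examples, not an official mapping:

```python
# Illustrative sketch of the ATT&CK hierarchy:
# tactic (objective) -> technique (how) -> procedures (concrete implementations).
attack = {
    "Execution": {
        "T1059 Command and Scripting Interpreter": [
            "powershell -enc <base64>",          # hypothetical procedure
            "cmd.exe /c whoami",                 # hypothetical procedure
        ],
    },
    "Persistence": {
        "T1053 Scheduled Task/Job": [
            "schtasks /create /tn updater /tr evil.exe",  # hypothetical procedure
        ],
    },
}

def techniques_for(tactic):
    """List the technique names recorded under a tactic."""
    return sorted(attack.get(tactic, {}))

print(techniques_for("Execution"))
```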
12. Packer
• https://www.packer.io/
• A tool for creating identical machine images for multiple platforms from a single
configuration
• Local Hypervisors – VirtualBox/VMWare/Hyper-V etc
• Cloud Providers – AWS/DigitalOcean/Azure etc
• How it works:
• Start VM
• Configure OS
• Install software
• Create machine image from VM
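As a rough sketch of such a "single configuration", a minimal JSON template for a local VirtualBox build might look like the following; the ISO URL, checksum and credentials are placeholders, not real values:

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "iso_url": "https://example.com/ubuntu.iso",
      "iso_checksum": "sha256:PLACEHOLDER",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "shutdown_command": "sudo shutdown -P now"
    }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["sudo apt-get update"] }
  ]
}
```

Running `packer build` on a template like this walks through exactly the steps above: boot the VM, provision it, then export a machine image.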
13. Vagrant
• https://www.vagrantup.com/
• A tool to build and manage virtual machine (VM) environments without having to learn a specific VM provider’s commands
• Usually used to spin up VirtualBox/VMware development environments locally
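A minimal Vagrantfile sketch is shown below; the box name is a public example from Vagrant Cloud, and the memory setting and provisioning step are illustrative only:

```ruby
# Illustrative Vagrantfile -- box name, memory and provisioning are examples.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"            # pulled from Vagrant Cloud on first `vagrant up`
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
  end
  config.vm.provision "shell", inline: "echo provisioned"
end
```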
16. Pre-built Image + Vagrant Workflow
• vagrant up
• Downloads the pre-built box from Vagrant Cloud
17. Packer + Terraform Workflow
• packer build template.json → machine image
• terraform init → terraform plan → terraform apply (main.tf) → infrastructure
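A minimal main.tf sketch for the Terraform half of this workflow might look like the following; the provider, region, AMI ID and instance type are placeholders for whatever image the Packer build produced:

```hcl
# Illustrative main.tf -- launches an instance from a Packer-built image.
# The region, AMI ID and instance type are placeholder values.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "lab" {
  ami           = "ami-0123456789abcdef0"  # image produced by `packer build template.json`
  instance_type = "t3.medium"
}
```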
18. Why do I care?
• “Single” source of truth
• Describe the state of the machine/image explicitly
• Scalable & Repeatable
19. Resource to learn more
• Infrastructure As Code Tutorial -
https://github.com/Artemmkin/infrastructure-as-code-tutorial
• World-class DevSecOps Training and Certifications - https://www.practical-devsecops.com/
25. Resource to learn more
• Windows Event Forwarding for Network Defense - https://medium.com/palantir/windows-event-forwarding-for-network-defense-cb208d5ff86f
• Endpoint detection superpowers on the cheap, Threat Hunting app - https://medium.com/@olafhartong/endpoint-detection-superpowers-on-the-cheap-threat-hunting-app-a92213f5e4b8
• osquery Across the Enterprise - https://medium.com/palantir/osquery-across-the-enterprise-3c3c9d13ec55
• sysmon-config | A Sysmon configuration file for everybody to fork - https://github.com/SwiftOnSecurity/sysmon-config
34. Resource to learn more
• Putting MITRE ATT&CK into Action with What You Have, Where You Are - https://www.slideshare.net/KatieNickels/putting-mitre-attck-into-action-with-what-you-have-where-you-are
• How to Be a Savvy ATT&CK Consumer - https://medium.com/mitre-attack/how-to-be-a-savvy-attack-consumer-63e45b8e94c9
• GETTING STARTED WITH ATT&CK - https://www.mitre.org/sites/default/files/publications/mitre-getting-started-with-attack-october-2019.pdf
• Comparing Layers in ATT&CK Navigator - https://attack.mitre.org/docs/Comparing_Layers_in_Navigator.pdf
44. Tips for writing detection rules
• Don’t aim to write one perfect rule that covers all scenarios and evasions
• Having rules implemented for many different techniques is better than having one perfect rule for a single technique
• Keep each rule as short and liberal as possible (depending on your environment)
• Run the rule against data from 7 / 30 / 60 days ago to determine if adjustments need to be made
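The backtesting tip above can be sketched in a few lines of Python. This is a hypothetical illustration only: the events, the toy rule and the fixed reference time are made up, and in practice the replay would run against your SIEM's historical data:

```python
from datetime import datetime, timedelta, timezone

# Illustrative backtest: replay a detection rule over historical events to spot
# noisy matches before deploying it. Events, rule and `now` are made-up examples.
now = datetime(2020, 1, 1, tzinfo=timezone.utc)
events = [
    {"time": now - timedelta(days=3),  "cmdline": "whoami /all"},
    {"time": now - timedelta(days=20), "cmdline": "notepad.exe report.txt"},
    {"time": now - timedelta(days=45), "cmdline": "net user admin /add"},
]

def rule_matches(event):
    """Toy rule: flag common discovery / account-manipulation commands."""
    return any(s in event["cmdline"] for s in ("whoami", "net user"))

def backtest(events, rule, days):
    """Count rule hits within the last `days` days."""
    cutoff = now - timedelta(days=days)
    return sum(1 for e in events if e["time"] >= cutoff and rule(e))

for window in (7, 30, 60):
    print(f"last {window} days: {backtest(events, rule_matches, window)} hits")
```

Comparing the hit counts across the 7/30/60-day windows gives a quick sense of whether the rule needs tightening before it goes live.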
45. Resource to learn more
• Sharing is Caring: Improving Detection with Sigma - https://www.sans.org/cyber-security-summit/archives/file/summit-archive-1544043890.pdf
• How to Write Sigma Rules - https://www.nextron-systems.com/2018/02/10/write-sigma-rules/
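To make the "short and liberal" advice concrete, here is a minimal illustrative Sigma rule; it is not production-ready, and the image list and level would need tuning for your environment:

```yaml
title: Bulk Discovery Commands
status: experimental
description: Illustrative rule only - flags common discovery commands; tune per environment.
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith:
      - '\whoami.exe'
      - '\tasklist.exe'
      - '\arp.exe'
  condition: selection
falsepositives:
  - Administrator activity
level: medium
```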
46. Common Pitfalls
• 100% MITRE ATT&CK Coverage
• Thinking all Techniques are equal
• Thinking you are done!
• Forgetting the Fundamentals
48. Solution: Seek Complementary Sources
• Ask about what parts of ATT&CK they cover and don’t cover
• Ask why they cover certain techniques and procedures and not others
• Seek other complementary products/sources/services that fill the
gaps
49. Problem: Thinking all Techniques are equal
• Not all techniques are equal in
• Impact
• Usage
• Detection difficulty
• Data source availability
• Specificity (narrow vs broad)
• Legitimate use in the organisation
50. Solution: Prioritise
• Prioritise detection based on a combination of factors
• Data source availability
• Value of each technique’s data sources
• Relevant threat groups’ TTPs
• Top 20 techniques based on Vendor X’s data or relevant threat groups
• Caveat: *subject to your environment, maturity and resources available*
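One way to combine these factors is a simple weighted score per technique. The sketch below is hypothetical: the technique IDs are real ATT&CK IDs, but the 1-5 scores and the weights are illustrative, not taken from any vendor dataset:

```python
# Illustrative prioritisation: rank techniques by a weighted score over the
# factors above. Scores (1-5) and weights are made-up example values.
techniques = {
    "T1059 Command and Scripting Interpreter": {"data_source": 5, "threat_group_use": 5, "impact": 4},
    "T1027 Obfuscated Files or Information":   {"data_source": 2, "threat_group_use": 4, "impact": 3},
    "T1078 Valid Accounts":                    {"data_source": 3, "threat_group_use": 5, "impact": 5},
}
weights = {"data_source": 0.4, "threat_group_use": 0.4, "impact": 0.2}

def priority(scores):
    """Weighted sum of a technique's factor scores."""
    return sum(weights[k] * v for k, v in scores.items())

ranked = sorted(techniques, key=lambda t: priority(techniques[t]), reverse=True)
for t in ranked:
    print(round(priority(techniques[t]), 2), t)
```

The point is not the exact numbers but making the trade-offs explicit: changing the weights makes the prioritisation debate concrete instead of intuitive.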
53. Solution: Prioritise (My Personal Preference)
Source: https://upload.wikimedia.org/wikipedia/commons/c/c2/The_Unified_Kill_Chain.png
54. Solution: Prioritise (My Personal Preference)
• If there were no constraints, I would place more weight on popular techniques for the following tactics:
• Execution – Early in the kill chain, and its data sources provide visibility throughout the kill chain because execution is rarely standalone
• Discovery – Early in the kill chain and high fidelity, because normal users are unlikely to execute these commands in bulk within a short period (whoami/tasklist/arp/net users etc.)
• Persistence – Early in the kill chain, and attackers usually establish it to ease returning to the network
• Credential Access – High impact and a limited set of techniques
• Lateral Movement – High impact and a limited set of techniques (requires a thorough understanding of where administrators log in)
55. Problem: Thinking you are done!
• Endless variants exist for each technique – it’s impossible to have a perfect detection rule for the unknown
• The MITRE ATT&CK matrix only includes techniques from real-world observation – it does not include the latest security research or attacks that have not yet been reported
56. Solution: Thinking you are done!
• Shift from a binary detection metric to a Detection Confidence Level metric for each technique after the initial assessment
Source: https://www.sans.org/cyber-security-summit/archives/file/summit-archive-1561390150.pdf
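The shift from binary detection to confidence levels can be sketched as follows. The level names, technique IDs and assignments are illustrative examples, not a prescribed scale:

```python
# Illustrative confidence-level tracking: one level per technique instead of a
# binary detected/not-detected flag. Levels and assignments are made-up examples.
LEVELS = ["none", "low", "medium", "high"]

coverage = {
    "T1059": "high",    # good telemetry, tested rules
    "T1053": "medium",  # rule exists, partially tested
    "T1027": "low",     # limited visibility
    "T1078": "none",    # no data source yet
}

def summary(coverage):
    """Count techniques at each confidence level."""
    return {lvl: sum(1 for v in coverage.values() if v == lvl) for lvl in LEVELS}

print(summary(coverage))
```

A summary like this gives a more honest heatmap than "covered / not covered", and makes it obvious where continuous improvement effort should go.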
57. Solution: Thinking you are done!
• Another example of Confidence Level
Source: https://medium.com/@visiblerisk/detection-confidence-a-framework-for-success-d6cf1aa1638
58. Solution: Thinking you are done!
• Develop your own matrix
• Look out for emerging techniques from the latest security research or threat intelligence reports
• Map the techniques to either ATT&CK or your own matrix
• Continuous Assurance & Improvement!
59. Forgetting the Fundamentals
• Improving your detection capability is great… but don’t forget
• Your primary security functions should still be reducing attack surface and risk:
• Segmenting Network
• Limiting Host to Host communication
• Maintaining asset inventories
• Installing patches
• Managing user privileges
60. Resource to learn more
• ATT&CK™ Is Only as Good as Its Implementation: Avoiding Five Common Pitfalls - https://redcanary.com/blog/avoiding-common-attack-pitfalls/
• Prioritizing the Remediation of Mitre ATT&CK Framework Gaps - https://blog.netspi.com/prioritizing-the-remediation-of-mitre-attck-framework-gaps/
• ATT&CK™ Your CTI with Lessons Learned from Four Years in the Trenches - https://www.sans.org/cyber-security-summit/archives/file/summit-archive-1548090281.pdf
• Lessons Learned Applying ATT&CK-Based SOC Assessments - https://www.sans.org/cyber-security-summit/archives/file/summit-archive-1561390150.pdf
64. Solution: Alert Fatigue
• Measure the number of true-positive and false-positive alerts
• Determine the reason for each false-positive alert
• Categorise the reasons for false-positive alerts and follow up accordingly
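The three steps above can be sketched in a few lines. The alert records and reason labels below are made-up examples; in practice they would come from your case management or SOAR tooling:

```python
from collections import Counter

# Illustrative alert-fatigue metrics: measure the false-positive rate and group
# FP reasons so the noisiest causes get fixed first. Records are made-up examples.
alerts = [
    {"verdict": "TP"},
    {"verdict": "FP", "reason": "admin activity"},
    {"verdict": "FP", "reason": "admin activity"},
    {"verdict": "FP", "reason": "broken parser"},
]

fp = [a for a in alerts if a["verdict"] == "FP"]
fp_rate = len(fp) / len(alerts)
reasons = Counter(a["reason"] for a in fp)

print(f"FP rate: {fp_rate:.0%}")
print(reasons.most_common())
```

Sorting the reason counts gives the follow-up queue: here, tuning the rule for admin activity would cut the most noise.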
67. Solution: Budget
• Demonstrate Return on Investment (ROI) via
• Existing SOC heatmap coverage and confidence levels
• Effort to measure and improve the efficiency of the SOC (KPIs and metrics)
• Justify why additional resources are required
• New tools/data sources required to increase SOC heatmap coverage
• Manpower/expertise required to handle the alert volume after optimisation
68. KPI and Metrics
KPI: Number of Log Management Rule Configuration Error events per month
Explanation: This value reflects the rules configured in the SIEM by the SOC analysts. A high number suggests poor rule quality; more training or experience is needed.
Target Value: < 10 %

KPI: Number of Announced Administrative/User Action events per month
Explanation: This value reflects suppressions that should be improved.
Target Value: < 10 %

KPI: Number of Bad IOC/rule pattern value events per month
Explanation: If too many events were created by bad IOCs or rule pattern values, the source or the trust placed in it should be questioned.
Target Value: < 5 %

KPI: Number of Confirmed Attack attempts without IR actions (best matched with Log Source Category)
Explanation: Events detected but prevented by measures in place, or where the alert isn’t viewed as high risk.
Target Value: > 50 %

KPI: Number of Confirmed Attack attempts with IR actions (best matched with Log Source Category)
Explanation: Very high numbers → the security architecture should be updated; very low numbers → the rules aren’t detecting, or you are safe :)
Source: https://github.com/d3sre/Use_Case_Applicability/wiki/KPIs-and-Metrics
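Checking measured values against the KPI targets above is easy to automate. The sketch below uses the targets from the table; the measured percentages are made-up example values:

```python
# Illustrative KPI check against the targets from the table above.
# Measured monthly percentages are made-up example values.
targets = {  # KPI name -> (comparison, threshold %)
    "rule_config_errors": ("<", 10),
    "announced_actions":  ("<", 10),
    "bad_ioc_patterns":   ("<", 5),
    "attacks_without_ir": (">", 50),
}
measured = {
    "rule_config_errors": 12.0,
    "announced_actions": 4.0,
    "bad_ioc_patterns": 2.0,
    "attacks_without_ir": 61.0,
}

def kpi_status(name):
    """Return OK if the measured value meets the target, else REVIEW."""
    op, threshold = targets[name]
    value = measured[name]
    ok = value < threshold if op == "<" else value > threshold
    return "OK" if ok else "REVIEW"

for name in targets:
    print(name, kpi_status(name))
```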
69. Resource to learn more
• Use Case Applicability: How to better integrate Continuous Improvement into Security Monitoring - https://github.com/d3sre/Use_Case_Applicability
• Alerting and Detection Strategy Framework - https://medium.com/palantir/alerting-and-detection-strategy-framework-52dc33722df2