This document discusses monitoring software repositories to detect security issues. It introduces SANZARU, a tool that analyzes repository commits to identify potential bugs and vulnerabilities. SANZARU works by extracting a feature vector from each commit, training a classifier on past issues, and then classifying new commits. Its goals are to detect security fixes, new vulnerabilities, and interesting new features. The document provides examples of issues SANZARU has found and discusses challenges in commit classification.
2. About me
● Security consultant (C.T.O.) working for Securus Global in Melbourne
● PentesterLab (.com):
○ cool/awesome (web) *free* training/exercises
○ real-life scenarios
3. Disclaimer
● No code is going to be released today
● No repositories were harmed during the preparation of this talk
● I worked on Web and Open Source projects
● I worked on commits without using the entire project's source code
4. Why work on commits?
● Corporate development:
○ Cannot review all projects anymore
○ Nice to have a “what to check today”
○ Sort commits by criticality
○ Detect backdoors
● Agile development:
○ The code changes every day
○ Can’t rely on a one-time code review anymore
○ Current approach: daily scan
5. Why work on commits?
● You have vulnerabilities:
○ Detect patches affecting your bugs
○ Detect changes to sensitive functions
6. Why work on commits?
● You want vulnerabilities ($$):
○ Detect new features with dangerous functions
○ Detect changes to sensitive functions
7. Why work on commits?
● You want bugs (lulz):
○ Get bugs a few hours before the patch is available
○ Get a list of bad practices examples
○ Detect silent patching
8. What's a repository?
● Developers
● Files
● Commits
● And all of these are constantly moving...
9. Developers
● Main developer(s):
○ Add features
○ Fix bugs
● Cosmetic committer(s):
○ Change comments (fix typo)
○ Change designs of the website
○ Change indentation
○ Add documentation
● External people
○ Do a bit of everything
14. Stats (on the last 5000 commits)
● Commits per week:
○ anywhere between 20 and 180 (phpmyadmin) per week
○ 40 commits per week seems to be the average for "normal/interesting" projects
● Authors:
○ between 1 and 140
● Average commit: 200 lines (insertions + deletions)
21. Filtering files
● General approach:
○ images
○ css
○ README
● Framework based:
○ tests (interesting to keep for some projects)
○ database migration/creation script
● Project based files
○ deployment
○ installation files
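The file filtering above can be sketched as a simple path blacklist. The patterns and the `code_files` helper below are illustrative assumptions, not SANZARU's actual rules:

```ruby
# Hypothetical noise filter: drop files unlikely to contain executable code.
# The pattern lists are illustrative, not SANZARU's real configuration.
NOISE_PATTERNS = [
  /\.(png|jpe?g|gif|svg)\z/i,  # images
  /\.css\z/i,                  # stylesheets
  /\AREADME/i,                 # documentation
  %r{\A(test|spec)/},          # tests (worth keeping for some projects)
  %r{\Adb/migrate/}            # database migration scripts
]

def code_files(paths)
  paths.reject { |p| NOISE_PATTERNS.any? { |re| p.match?(re) } }
end

code_files(["app/user.rb", "logo.png", "README.md"])  # => ["app/user.rb"]
```

In practice the lists would be tuned per project (e.g. whether tests stay in), which is why the slide splits them into general, framework-based, and project-based groups.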
22. Filtering developers
● For a given project, find the "cosmetic developers"
● Don't get me wrong, they are not useless; they just do things I don't care about
23. Results
● Around 5-10% of commits have nothing to do with code...
● You can divide the size of most other commits by 2-3 if you ignore noise (files/comments/...):
○ new code with test cases
○ modifications to comments
○ ...
25. Data mining
● Take your samples (commits)
○ Extract a vector from each sample
○ Classify each sample
● From a training set, learn to classify the data
● Apply what you learned:
○ to the same training set after splitting it (cross-validation)
○ to new samples
26. Data mining
● training set:
[1, 2, 3, 0, 10, 220] -> bugfix
[2, 4, 3, 0, 1, 0] -> boring
[2, 5, 3, 3, 1, 1] -> boring
[20, 1, 0, 100, 0, 10] -> new bug
● testing:
[23, 0, 1, 90, 0, 15] -> ???
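As an illustration of this step, here is a minimal 1-nearest-neighbour classifier over the slide's example vectors. This is a hand-rolled stand-in; the actual talk uses Weka for classification, not this code:

```ruby
# Minimal stand-in for the classification step: label a commit vector by
# its nearest neighbour in the training set (1-NN, Euclidean distance).
TRAINING = {
  [1, 2, 3, 0, 10, 220]  => :bugfix,
  [2, 4, 3, 0, 1, 0]     => :boring,
  [2, 5, 3, 3, 1, 1]     => :boring,
  [20, 1, 0, 100, 0, 10] => :new_bug
}

def distance(a, b)
  Math.sqrt(a.zip(b).sum { |x, y| (x - y)**2 })
end

def classify(vector)
  # pick the training sample closest to the new vector, return its label
  TRAINING.min_by { |sample, _label| distance(sample, vector) }.last
end

classify([23, 0, 1, 90, 0, 15])  # => :new_bug
```

The test vector lands closest to the `new bug` sample, which matches the intuition the slide is going for.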
27. Extracting a vector
● You can't really say a commit is close to another commit
● You need to generate a vector from each commit to compare them
● Once you have done that, everything else is just magic^W Maths
28. Extracting a vector: getting data
● Number of lines changed:
○ insertion vs deletion
● Number of words changed (--word-diff):
○ insertion vs deletion
● Authors:
○ rating of authors based on the project's history
■ "fixing" score
■ "vulnerability creator" score
○ new developers
○ known security researchers
29. Extracting a vector: getting data
● Number of "dangerous" functions:
○ insertion
○ deletion
● Number of "filtering" functions:
○ insertion
○ deletion
● commit date vs author date
● Keywords in the message and in the code
30. Extracting a vector: getting data
● Files modified:
○ already implicated in a bug fix
○ already implicated in a vulnerability
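The "lines changed" features above can be sketched by parsing `git show --numstat` output into an insertions/deletions pair. The `line_counts` helper and the sample numstat text below are made up for illustration:

```ruby
# Sketch of the "lines changed" feature: turn `git show --numstat` output
# into an [insertions, deletions] pair for the commit vector.
def line_counts(numstat)
  ins = del = 0
  numstat.each_line do |line|
    added, deleted, _path = line.split("\t")
    # binary files show up as "-\t-\tfile"; skip anything non-numeric
    next unless added&.match?(/\A\d+\z/)
    ins += added.to_i
    del += deleted.to_i
  end
  [ins, del]
end

sample = "12\t3\tlib/auth.rb\n-\t-\tlogo.png\n0\t45\tREADME.md\n"
line_counts(sample)  # => [12, 48]
```

The word-level counts mentioned on the slide would come from `--word-diff` in the same way, and the per-author and per-file scores would be joined in from the project's history.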
31. Filtering vs Dangerous
● Good list of "dangerous" signatures from
graudit:
○ https://github.com/wireghoul/graudit/
● Weighting is *really* important:
○ echo -> potential XSS -> 1 point
○ system -> potential command execution -> 10 points
● Some functions are in both:
○ crypto functions for example
○ crypto can be dangerous but can filter as well
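The weighted scoring can be sketched as follows. The signature list and weights here are invented (echo = 1, system/eval = 10, in the spirit of the slide); a real list would come from something like graudit:

```ruby
# Minimal sketch of weighted "dangerous function" scoring over the added
# lines of a diff. Patterns and weights are made up for illustration.
DANGEROUS = {
  /\becho\b/   => 1,   # potential XSS
  /\bsystem\b/ => 10,  # potential command execution
  /\beval\b/   => 10   # potential code execution
}

def danger_score(added_lines)
  added_lines.sum do |line|
    DANGEROUS.sum { |pattern, weight| line.scan(pattern).length * weight }
  end
end

danger_score(["echo $name;", "system($cmd);"])  # => 11
```

A parallel table of "filtering" functions (escaping, validation, crypto) would be scored the same way, with crypto appearing in both lists as the slide notes.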
34. Classification
● Fixed bugs:
○ learn from dangerous keywords
● New bugs:
○ git blame
○ read the source code and classify manually
● Potentially interesting new feature:
○ read the source code
○ can be a new bug
35. Results
● Vector computation:
○ between 15 and 120 minutes for 5000 commits
● Classification:
○ less than a minute
● Scoring:
○ 90% success rate on bug fixes (without using the message as part of the vector)
○ 50/50 between FP and FN on bug fix
○ 200 commits down to 5-10 bugs per day
36. My tool: SANZARU
● Japanese names for tools make you a Ninja ;)
● Ruby based (what else...)
● Data Mining done with Weka (thx Silvio)
37. SANZARU: virtuous circle
● Made in a way that the more you learn about a project, the more effective it gets :)
● Score authors through learning
● Score files through learning
● Add functions used by the project
38. SANZARU: "learning mode"
● Takes the last 5k commits and gives you the list of impacted files and authors, each with a weight
● Still working on finding the initial bug's author, but it doesn't really give you more information
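A hypothetical sketch of that learning pass: weight each author and file by how often it appears in commits already labelled as bug fixes. The `learn_weights` helper and the commit records below are invented for illustration:

```ruby
# Hypothetical "learning mode": count bug-fix involvement per author and
# per file so later commits touching them can be scored higher.
def learn_weights(commits)
  authors = Hash.new(0)
  files   = Hash.new(0)
  commits.each do |c|
    next unless c[:label] == :bugfix
    authors[c[:author]] += 1
    c[:files].each { |f| files[f] += 1 }
  end
  { authors: authors, files: files }
end

history = [
  { author: "alice", files: ["lib/auth.rb"],           label: :bugfix },
  { author: "bob",   files: ["README.md"],             label: :boring },
  { author: "alice", files: ["lib/auth.rb", "app.rb"], label: :bugfix }
]
weights = learn_weights(history)
# alice ends up with a fixing score of 2; lib/auth.rb with a file weight of 2
```

These per-author and per-file weights then feed back into the commit vectors, which is the "virtuous circle" from the previous slide.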
41. SANZARU: "classification mode"
● Using ruby to create all the vectors
● Using weka to classify the data
● Then manual review of the results:
○ New features to find security bugs
○ FP for possible silent patching
42. SANZARU: "daily mode"
● Cron job (every day)
○ update all repositories (hasn't been blacklisted by github... yet); ruby-git is *shit*
○ find alerts in new commits
○ classify new commits
○ give me a nice report with what to read
47. General observations
● Most fixes are:
○ small code insertions (fewer than 10 lines)
○ basic line substitution
○ easy to detect
● Most new bugs are:
○ details...
○ really hard to detect statistically
○ general approach: read all potentially interesting commits
○ working on important projects makes the creation of bugs far less likely
○ it's not going to rain 0dayz...
48. Possible improvements
● Integrating syntactic analysis:
○ regular expressions are just not enough
○ False alerts are time-consuming...
● Retrieve information from external sources:
○ bug report
○ CVE
● Support for more languages/platforms:
○ Objective C libraries and applications?
○ Linux kernel?
○ ...
49. Conclusion
● Easy to detect:
○ (Silent) Security Fixes
○ New features with "interesting" functions
● Not so easy to detect
○ New security bugs
● Still worth the time
○ if you want bugs
○ if you are doing code review, to have examples to learn from or share: vulnerability patterns
○ most frustrating thing you can do?
50. Questions?
@snyff
● Have a great Ruxcon
● Play the CTF and Lock Picking
● Remember to checkout:
○ PentesterLab.com
○ @PentesterLab
● Thx to everyone who helped me put this talk together