When performing security assessments or participating in bug bounties, there is generally a methodology you follow when assessing source code or performing dynamic analysis. This involves using tools, reviewing results, and understanding what you should be testing for. Reviewing modern web applications can be quite challenging, and this talk will go into detail on how we can automate the boring (but necessary) parts and how to set a roadmap for what to focus on when dealing with modern JavaScript applications.
2. About Me
• Sr. Security Consultant @ Synopsys
• Formerly Cigital
• AngularSF Organizer
• https://www.meetup.com/Angular-SF/
• B.Sc. in Computer Security and Ethical Hacking
• Founder of http://leedshackingsociety.co.uk/
• JavaScript Enthusiast!
3. Agenda
MANUAL WORK IS A BUG
WHAT CAN / SHOULD WE AUTOMATE?
WHAT ALREADY EXISTS
PROOF OF CONCEPT APP
4. Manual Work Is a Bug
• Automation can be used to help improve productivity and reduce
repetitive work
• Four Phases
1. Document the steps
2. Create automation equivalents
3. Create automation
4. Self-service and autonomous systems
A.B.A: always be automating – Thomas A. Limoncelli
https://queue.acm.org/detail.cfm?id=3197520
https://parsiya.net/blog/2018-10-03-reflections-on-manual-work-is-a-bug/
5. “Automate everything you can, even if it's just a series of
manual steps in your documentation with a bunch of code
snippets. Let your mind be the CPU. Follow these directions
and improve them when you can.” – Parsia Hackerman
https://parsiya.net/blog/2018-10-03-reflections-on-manual-work-is-a-bug/
6. Automation Process:
1. Target something boring and repetitive
2. Automate it. It does not have to be code. Steps are fine
3. Follow your steps when doing that task and improve your steps in
real-time (do not delay this)
4. Use your newly acquired spare time to target a new task
5. Share the documentation (wiki, git repo, confluence, etc.) and
collaborate with others
A. Good Example https://github.com/EdOverflow/can-i-take-over-xyz
7. Automation will allow you to spend your time focusing on things which require
your energy such as business logic flaws, or identifying more complex attack chains
17. Detecting Issues: Burp Suite
• Burp 2.0 JavaScript analysis is insanely good
https://portswigger.net/blog/dynamic-analysis-of-javascript
18. When To Perform Manual Analysis
• Understanding dataflow is important to confirm issues
• Learning JavaScript built-in functions
• Get comfortable with:
• Developer console
• Debugging
19. When To Perform Manual Analysis
• Browsers interpret window.location.hash differently
http://demo.testfire.net/high_yield_investments.htm#<img src=x onerror=prompt(1)>test@test.com
20. Detecting Issues: ESLint
• Linting can identify issues in source-code
https://medium.com/greenwolf-security/linting-for-bugs-vulnerabilities-49bc75a61c6
https://eslint.org
21. Detecting Out-of-Band Responses
• Burp Collaborator is king for out-of-band detection
• XXE
• SSRF
• bXSS
• SQLi
• Setting up your own server is a good exercise
22. Detecting Out-of-Band Responses: bXSS
• Burp Collaborator does detect bXSS, but…
• Sleepy Puppy was the initial ‘bXSS’ tool (now deprecated)
• Current contenders:
• XSS Hunter (Top Dog)
• bXSS (Hot Dog)
https://github.com/Netflix-Skunkworks/sleepy-puppy
https://github.com/LewisArdern/bXSS
https://xsshunter.com
24. We should not have to manually find ‘hungry’ regular expressions
25. Detecting ReDoS
• Certain vulnerable regular expressions loop exponentially when
validating specific strings
• The expression ^(a+)+$ takes 65536 steps to check the string
aaaaaaaaaaaaaaaaX
• The number of steps doubles for each additional “a”
• An attacker can exploit vulnerable expressions to consume server resources
and create a DoS condition
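The doubling described above can be observed directly. This is a minimal Node.js sketch (the helper name and the printed timings are illustrative; actual times are machine-dependent):

```javascript
// Measures how catastrophic backtracking in ^(a+)+$ scales.
// The attack string is n letters 'a' followed by 'X', which can never
// match, forcing the engine to try every way of splitting the 'a's.
function backtrackMillis(n) {
  const evil = /^(a+)+$/;
  const input = 'a'.repeat(n) + 'X';
  const start = process.hrtime.bigint();
  evil.test(input); // always false, but the time to fail roughly doubles per 'a'
  return Number(process.hrtime.bigint() - start) / 1e6;
}

// Keep n small: every extra 'a' roughly doubles the work.
for (const n of [16, 20, 22]) {
  console.log(`n=${n}: ${backtrackMillis(n).toFixed(2)} ms`);
}
```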
33. Let’s Talk About Dependencies:
https://twitter.com/kkotowicz/status/1175355336838062081?s=20
34. What if we had a tool such as npm audit that
could help drive secure-development and issue
discovery?
35. What would this look like?
Expertise:
1. Review popular dependencies for:
a) Possible misconfigurations
b) Possible security issues
2. Add them to a repository to retrieve later
Tool Creation:
1. Review a package.json file for its dependencies
a) Retrieve the name, and version
2. Look-up repository and match version and name
3. Return list of recommendations to investigate in a readable format
43. What if we had a tool which combines all the
open source projects which help identify security
issues and drive secure development?
44. Metasec.js
• Metasec.js is a meta analysis tool to review
JavaScript Applications using open-source tools:
• Current Functionality:
• Help drive secure development based on
package.json dependencies
• Identifies issues with third-party dependencies
• Looks for secrets
• Looks for ReDoS
• Performs security linting against an application
• Audits the application for Electron issues with
doyensec/electronegativity if Electron is in the
package.json file
https://github.com/lewisardern/metasecjs
46. Future Work
• Currently a PoC, and needs a lot of work…
• Future plans:
• Extend the ‘repository’ with more libraries
• Smarter automation
• E.g. detect when a framework/library is used and run the appropriate
linting/tools rather than a hail mary
• Extend or improve current capabilities
• Link finder, and other asset finding capabilities in source-code
• Request for help!
47. Conclusion
• Automation helps streamline work processes
• To get the best results you should automate from both a static and
dynamic perspective
• Automation can be used to help identify security issues and drive
secure development in modern JavaScript applications
• Automate and be AWESOME!
Hi Everyone,
Just a little introduction about myself.
I’m a Senior Security Consultant at Synopsys, where I specialize in Application Security.
I am one of the organizers for AngularSF
I did a degree in Computer Security & Ethical Hacking, where I also founded the Leeds Ethical Hacking Society.
The main reason I’m speaking to you today is that I’m a JavaScript enthusiast; I love everything about JavaScript.
First, we will dive into what I actually mean by ‘Manual work is a bug’.
We will talk about automation and the types of things we can automate when reviewing JavaScript from a dynamic and static perspective, including what already exists.
Finally, I would like to show you a proof-of-concept application I hacked together for this talk; it’s not pretty, but it sure does the job!
Thomas A. Limoncelli wrote quite an interesting article titled ‘Manual Work Is a Bug’, which goes into detail about two engineers.
Both engineers are equally talented at their jobs and have similar skill sets; however, one was deemed more successful, and this was due to their iterative process of automation.
Automation, whilst not always the best use of time for tasks you only intend to perform once, can be extremely beneficial if used in the right way, helping improve your productivity and reduce repetitive work.
Thomas documents four key phases of automation:
Document the steps; this is where you are the CPU, writing the step-by-step guideline for what needs to happen when doing a particular task.
As the documentation matures, you can start to build the automation equivalents; these could be code snippets and command-line arguments.
Then those command-line arguments are turned into even longer scripts, which may need to be stored in source control.
Finally, what was once just a step-by-step guide is now a fully fledged application or standalone tool; these are the phases Thomas documents in his article.
My good friend Parsia had some reflections on this article, which I would recommend you read in your own time; for now, let’s take a look at a quote from it.
When Parsia was my colleague, you would often hear people say ‘Have you asked Parsia?’
There was good reason for this; for one, he dresses up as Hackerman on Halloween.
Parsia always had the intention to clone himself so he did not have to work so hard; this clone was a wiki, where every task was a step-by-step guide, such as:
How to use a particular tool like eslint
How to add your certificate to the java store
How to intercept traffic in a thick client
Why something usually fails with Burp proxying
So when someone asked Parsia a particular question, he would usually just point them to his clone, and that would usually give them the answer.
If he didn’t know the answer, he would either tell them he didn’t know, or help them identify the solution.
Once a solution was known, it likely went into the Parsia clone.
This quote directly relates to that Parsia clone: the aim is to automate everything and improve it over time, which reduces the repetitive tasks and roadblocks you hit when doing security testing, or, well, anything.
The automation process is fairly straightforward, and there are a few key things you need to do to ensure it is effective:
Look at your work, and identify a repetitive task
Automate your process for that repetitive task; you don’t need to dive straight into code, as steps are absolutely fine at this stage.
When you follow these steps and identify an improvement, update them immediately; if you delay, you will likely still be performing the less optimized approach in the future (we are lazy).
Now that you have an optimized way of doing things, you can spend time taking on another boring, repetitive task and repeat the same actions.
Finally, you want to share this work so others can help improve the process and collaborate.
For a bug bounty example, take a look at the work from EdOverflow: he started with a resource on how to inspect a website’s responses to determine whether it is vulnerable to domain takeovers.
If you look at this resource, it’s a living document, incrementally updated over time!
When we think about automation in security, on every assessment you have to do the same repetitive things.
Optimizing these boring tasks will allow you to identify issues you would generally miss during an assessment.
This could be a unique business logic flaw affecting authorization that requires some very deep analysis.
Or it could help you piece together all of the issues you have found and how they can be chained to increase the severity of an attack.
In 2019 we should not have to manually look at bundled JavaScript and look for assets.
We shouldn’t have to manually identify absolute URLs or relative paths for vanilla JavaScript, JavaScript frameworks, and so on.
There has to be an easy way to do this… and luckily for us, there is!
There are two Burp extensions that can help you identify endpoints.
The first is probably the most stable, js-link-finder, which will pull out any relative path or absolute URL.
This can be extremely useful for finding privileged locations that you should not be able to access, and other subdomains for you to go and attack!
The next is still an early work in progress by ettic-team; however, their intention is to identify paths in JavaScript frameworks like AngularJS.
So combining both of these tools in the near future can give you a large asset list to go ahead and investigate.
This could be taken further too by using something like eyewitness to automatically pull those URLs and take a screenshot to visually view later.
Here is an example of visiting AOL with LinkFinder automatically reviewing JavaScript files and identifying potentially interesting domains
m-dev-app
mydev.aol.com
Along with ports,
so using js-link-finder when you are performing an audit will save you a lot of time and effort.
Has anyone looked at bundled JavaScript and been like *NOPE* I’m out.
I think everyone who still has their sanity has thought this; there are probably a few JavaScript nerds in the room who absolutely geek out on reverse engineering minified or obfuscated JS, but that’s probably 1%.
Modern applications tend to produce minified code for a variety of reasons
Using TypeScript, which adds types to JavaScript and is transpiled into JavaScript.
Or for performance, using a minification and bundling process such as Webpack.
Luckily, there are many instances where the source map is also accessible within the application; this is useful for developers when debugging in production!
A source map is essentially the ‘building blocks’: a reference map describing how to get back to the non-minified version of the code.
There are a variety of modern solutions to retrieve the source from a map file, the easiest way is to use the debugger in the browser.
The browser has so much more power than we tend to usually know, so if source-maps are available in the browser they will do the heavy lifting as you can see in the screenshot.
There are some projects on GitHub that will do this for us; there are tools like source-map by Mozilla that can consume a source map and process it later.
There is an opportunity to build a Burp extension that looks at the JavaScript response and:
Points out that a source map is available by seeing the .js.map reference at the bottom of the response of a JavaScript file.
Attempts to add .map to each .js file and finds it that way if the reference is not there.
Finally, uses a tool to convert the map to the original source, so you can either manually review it later or re-run Burp’s JavaScript analysis or other tools for potentially better results.
This could be a good first Burp extension for anyone looking to get more experience with that type of thing.
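The first two detection steps can be sketched in a few lines; the helper name `findSourceMapHint` is made up for illustration:

```javascript
// Spot an explicit sourceMappingURL comment in a JavaScript response,
// or fall back to guessing the conventional <file>.js.map path.
function findSourceMapHint(jsBody, jsUrl) {
  // Step 1: look for the trailing "//# sourceMappingURL=..." comment.
  const match = jsBody.match(/\/\/[#@]\s*sourceMappingURL=(\S+)\s*$/m);
  if (match) return new URL(match[1], jsUrl).href;
  // Step 2: no hint, so guess <file>.js.map (a request would confirm it).
  return jsUrl.endsWith('.js') ? jsUrl + '.map' : null;
}
```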
Manually verifying popular technologies can be automated, usually through fingerprinting,
which is the process of matching some kind of regular expression to identify a particular piece of software.
Whilst it won’t uncover everything, you can easily detect the use of:
Hosting providers and S3 buckets,
Server technologies, such as the use of Express, by finding the X-Powered-By header,
Technologies such as Angular, and JavaScript libraries like jQuery.
The caveat with fingerprinting is that a tool can only identify what it knows; there will be false negatives.
But extending the capabilities (improving the automation process) is key here to continually evolve tooling.
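A minimal fingerprinting sketch: match response headers and body against known signatures. The patterns here are illustrative stand-ins, not real tool signatures; production fingerprinters ship thousands of far more precise rules:

```javascript
// Each signature names a technology and where to look for it.
const signatures = [
  { tech: 'Express', where: 'header', name: 'x-powered-by', pattern: /express/i },
  { tech: 'AngularJS', where: 'body', pattern: /ng-app|angular(?:\.min)?\.js/ },
  { tech: 'jQuery', where: 'body', pattern: /jquery(?:[.-][\d.]+)?(?:\.min)?\.js/i },
];

// Returns the list of technologies whose signature matched.
function fingerprint(headers, body) {
  return signatures
    .filter((sig) =>
      sig.where === 'header'
        ? sig.pattern.test(headers[sig.name] || '')
        : sig.pattern.test(body)
    )
    .map((sig) => sig.tech);
}
```

Extending the signature list is exactly the "improve the automation" loop described earlier: every manual identification should become a new pattern.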
I think we are in a good place when it comes to being able to identify common sources and sinks in web applications, to recap:
Source: anything the user has control over
Sink: any place malicious input will cause undesirable effects
When we think of JavaScript, we know common sources can come from places such as:
responseText when you perform an XHR
event.data when you receive a postMessage
window.location.hash, which can be controlled by a third party and shared over social media
Cookies, storage, and so on
Common sinks would be:
eval, which dynamically executes JavaScript
innerHTML, when you assign a value to an HTML element
window.location.href, which can lead to XSS or a client-side open redirect
Third-party libraries such as jQuery change the attack surface with additional sources and sinks and have silently dangerous APIs such as .prepend() or .append().
Frameworks such as AngularJS and Angular change the HTML markup and shift the attack surface, where additional APIs such as bypassSecurityTrust* can be sinks.
But just because there are complexities, that doesn’t mean we can’t automate the identification of potential sources and sinks across the general ecosystem.
Burp 2.0 really has changed the game when it comes to identifying common JavaScript sources and sinks.
I am sure if you look at the HackerOne reports from when Burp 2.0 came out of beta, you will probably see a correlation with the increase in issues being reported.
That’s not to say that people weren’t capable of identifying these issues; it’s just that automation is key to high detection rates.
Eslint is the go-to developer tool for making sure code quality issues are squashed before they make it into a repository.
ESLint is a linting tool which traverses an Abstract Syntax Tree and identifies anti-patterns or particular pieces of code that should not be used.
Linting is an essential way of identifying potential security issues in source code; it is not a static analysis tool that can perform dataflow analysis, and it’s important to understand the difference.
From what I have seen Burp has two types of detection when it comes to finding issues directly in JavaScript
JavaScript Injection, when you are already inside a JavaScript execution context such as setTimeout
DOM-based XSS, where you are taking a value such as location.hash and assigning it to innerHTML
The JavaScript injection will likely be reported in a passive scanning state, it may not require an active scan to do deeper inspection to identify the issue
The DOM XSS is more likely to be reported in an active scanning state, as burp has a fairly good way of validating sources and sinks
Sometimes the severity or likelihood will increase when performing an active scan too, so where possible, these are my go-to tips:
Perform an active scan on any page which serves JavaScript
Even if that page does not take a parameter directly in the URL or have any obvious dynamic functionality.
Tools are great, and they may flag valid issues, but you will have to validate the findings.
Usually you will need to know how to trace the functionality to see if it is problematic so understanding how data traverses through an application is important
When you are looking at code trying to understand what’s going on, you will end up missing a lot of issues if you don’t understand what the built-in functions do,
e.g. split, indexOf, regular expressions, and so on.
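As a concrete illustration of why built-in behaviour matters, here is a hypothetical origin check (the domain is made up) where indexOf matches anywhere in the string, so an attacker-controlled origin slips through:

```javascript
// Flawed: indexOf only asks "does the trusted origin appear somewhere?",
// so "https://trusted.example.evil.com" passes.
function naiveOriginCheck(origin) {
  return origin.indexOf('https://trusted.example') !== -1;
}

// Correct: compare the whole origin exactly.
function strictOriginCheck(origin) {
  return origin === 'https://trusted.example';
}
```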
When verifying issues, it’s usually a lot easier to open up the developer console and dynamically verify things by setting breakpoints to see what the values are.
Living in the developer console will help you uncover a bucket load of issues you never once thought were there.
Here’s a good example of that.
When we have a window.location.hash, by default most sane browsers will URL encode the input, preventing Cross-Site-Scripting (XSS).
But as you may know, Internet Explorer is not so sane, and we love Internet Explorer for its uniqueness!
It will not URL-encode the output, so when it is inserted into innerHTML it will be interpreted as code, introducing XSS.
But to reach that point, the input must first pass a regular expression that attempts to mimic an email address.
This comes back to the point that you need to learn JavaScript’s built-in functions and understand browser behaviour.
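A runnable simulation of the pattern just described (DOM APIs are left out so it runs outside a browser; the function name and the loose email regex are stand-ins for the demo page's actual code):

```javascript
// Takes a location.hash-style value, "validates" it with a loose email
// regex, and returns the string that the page would assign to innerHTML.
function renderEmailFromHash(hash) {
  const value = hash.slice(1); // strip the leading '#'
  // Loose validation: only requires that an email appears *somewhere*,
  // so markup preceding the email passes straight through.
  if (!/\S+@\S+\.\S+/.test(value)) return '';
  return value; // in the real page: element.innerHTML = value
}
```

The slide's payload ends in `test@test.com`, which satisfies the regex, so the `<img onerror>` markup in front of it survives the check.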
Before ESLint there was ScanJS, created by Freddy, which had a nice little UI: you could reference a source location, and it would scan some code and give you back results.
ESLint is a pluggable linter which has a bunch of security rules; the ScanJS rules have been migrated, alongside rules for Node.js and even front-end frameworks.
When you are dealing with source-code you should always run some security linting rules over the code base to try and see if it uncovers things you missed.
When we interact with systems, those systems also interact with other systems
Who knows exactly what happens to your data, or where it will end up?
Will it end up in an out-of-date logging system?
Will it be sent to another system that has a SQL injection vulnerability?
Will it be interpreted as an AngularJS expression when curly braces are included?
Will a misconfigured XML parser run quarterly on submitted invoices?
This is why we need out-of-band detection: in six months, one of these vulnerable systems could finally ‘trigger’ that payload or data and interpret it as code.
Burp Collaborator is the best resource you can use as it allows for DNS, HTTP, and SMTP OOB detection, sometimes systems don’t directly query over HTTP.
I have some news from the source for when you are working on bug bounty platforms:
Use one Burp project for all scanning to ensure that when a blind XSS fires, it will correlate.
Use one Burp project per target, and whenever you revisit that target, the Burp scanner will correlate any identified bXSS from after you closed the project file.
Burp does have blind XSS payloads, but only a few, and I don’t think the tool itself should be a hail mary, as that would just bloat it for no reason.
There is a problem: imagine a system is using AngularJS and Burp did not have a payload configured for that blind XSS context; the issue would probably go undiscovered.
The first tool which really expanded blind XSS capabilities was Sleepy Puppy, but it was deprecated in 2018.
Today there are two tools, and I’ll be honest: XSS Hunter is by far the better tool, which is why it’s the Top Dog.
My tool, bXSS, is the Hot Dog because it tries to be less intrusive and has additional payloads that can be useful from a framework perspective.
For this talk, I wanted to provide something useful.
Now, just because Sleepy Puppy is a deprecated puppy doesn’t mean we can’t still use its capabilities.
It took two hours to modify the Sleepy Puppy Burp extension to remove the dependency on the Sleepy Puppy server and use a default set of payloads.
Feel free to use the extension, which I intend to improve (given more hours) with a default payload list and customization so you can add your own payloads.
Now I want to give a scenario on why having something like this is extremely useful.
I use Fastmail as my email provider (opsec fail).
Fastmail uses DOMPurify to sanitize HTML in their main web client.
This means Fastmail probably also uses DOMPurify for their internal systems.
DOMPurify is a super-tolerant XSS sanitizer, but a new version was just released after various mutation XSS variants were discovered.
Now, if I were to test any system, I would want that mutation XSS payload, which may get past some sanitizers, and in particular versions of DOMPurify below 2.0.2.
JavaScript applications are single-threaded; when we perform regular expression validation, it blocks that thread.
JavaScript applications can be vulnerable to a particular type of issue called Regular Expression Denial of Service (ReDoS).
On the server side it’s pretty disastrous, as it can completely exhaust memory or halt the application.
On the client side it’s still pretty bad, but it’s normally limited to that user.
We should not have to manually attempt to find every regular expression and determine if they are vulnerable by hand
Let’s take a look at how a regular expression may introduce a ReDoS
The expression you see uses nested quantifiers, which expect one or more repetitions; because one quantified group sits inside another quantifier, this can cause catastrophic backtracking, or runaway expressions, where it takes far too much computation time to evaluate the expression.
Each additional ‘a’ doubles the number of steps required to check the string.
This can of course consume server resources, causing a denial of service against the server.
So how can we automatically detect ReDoS?
The most robust tool is probably vuln-regex-detector, but it is written in Perl…
There are many other packages that you can use, such as safe-regex or ReDoS; all you would need to do is extract the regular expressions from the application and supply them to the tools.
I would highly recommend reading the USENIX paper by Christian and Michael that really digs into this topic.
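A two-step sketch in the spirit of safe-regex, with made-up helper names: naively extract regex literals from source (a real tool would walk the AST, e.g. via ESLint), then flag ones that nest a quantified group inside another quantifier, the shape behind catastrophic backtracking. Real detectors are far more thorough:

```javascript
// Naive extraction: grabs /.../ literals from source text. This will
// misfire on division and strings; an AST walk avoids that.
function extractRegexLiterals(source) {
  return source.match(/\/(?:[^\/\\\n]|\\.)+\/[gimsuy]*/g) || [];
}

// Rough heuristic: a group containing +, *, or } that is itself
// followed by a quantifier (star height > 1).
function looksVulnerable(regexSource) {
  return /\([^)]*[+*}][^)]*\)\s*[+*{]/.test(regexSource);
}
```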
Now, this is the type of output you will see from vuln-regex-detector: it will show you the interpreted language, which regular expressions were vulnerable, how many were vulnerable, and the file location.
There are many services which we use to enhance our applications, these services generally require an API key, token, or password.
A few examples are:
GITHUB_TOKEN=, which allows committing to a GitHub organization or repository
RSA Private Keys – no comment
Passwords -
You will see services like GitHub now partnering with providers to identify and revoke these tokens in source code; however, this issue will be around for a fair amount of time.
So many services that monitor GitHub alone have been released recently; however, if you want some good suggestions on how to look for secrets in an automated way:
Using ripgrep to scan a repository for common patterns is probably the easiest thing to integrate.
There are many other services such as trufflehog, git-secrets, and many many more.
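The pattern-based approach behind the ripgrep suggestion can be sketched directly; these are a couple of well-known secret shapes (illustrative, not exhaustive), the same idea `rg -e <pattern>` applies across a whole repository:

```javascript
// A few common secret patterns and the kinds they indicate.
const secretPatterns = [
  { kind: 'GitHub token assignment', pattern: /GITHUB_TOKEN\s*=\s*\S+/ },
  { kind: 'RSA private key', pattern: /-----BEGIN RSA PRIVATE KEY-----/ },
  { kind: 'Hard-coded password', pattern: /password\s*[:=]\s*['"][^'"]+['"]/i },
];

// Returns the kinds of secrets detected in a blob of text.
function scanForSecrets(text) {
  return secretPatterns
    .filter(({ pattern }) => pattern.test(text))
    .map(({ kind }) => kind);
}
```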
Modern applications depend on an abundance of third-party components, and those third-party components have transitive dependencies.
If you laid a modern application’s dependency list flat on the ground, it would probably cover the Netherlands a few times.
There is no way you can manually audit each dependency for security issues, which is why we need to rely on services which tell us when there are problems.
There is a flavor of dependency scanning tool for each ecosystem…
Some projects are built with yarn, npm, or bower.
For example, an npm project can be identified by an npm-shrinkwrap.json or package-lock.json file, while a yarn.lock file indicates that yarn was used.
Using the inbuilt services or third-party services such as snyk, retirejs or auditjs can be pretty helpful at identifying known vulnerabilities.
The two types of tools a security person will likely use are:
When dynamically testing an application, they will have the Retire.js Burp extension installed, and whenever it fingerprints a vulnerable version, it will report it.
Generally this won’t lead to a direct exploit; further investigation is needed to determine whether the vulnerable component is actually being used.
Secondly, you can run a tool such as npm audit against source code when performing static analysis to determine whether there are any known issues.
If there is no package.json file, you can use Retire.js or auditjs to find front-end issues, as tools like npm audit generally rely on a node_modules folder or package.json file.
Now for some real talk!
There has been some hyperbole around the amazing innovations in services that identify issues with third-party components and report them to services or developers to fix.
Most of the time, these tools will report issues in developer dependencies, meaning code that is not reachable at runtime or that user input will never enter.
So whilst it’s important to report these known issues, it’s important to understand the context:
If it’s an issue directly with a top-level dependency, report it immediately.
If it’s an issue with a transitive dependency, the likelihood is lower; it should be reported, but documented as not being a direct dependency.
If it’s an issue with a developer dependency, the likelihood is astronomically low, and it should be reported as minimal impact, unless there is good reason to report it differently.
Now what I want to talk about is a little different
We have all of these wonderful tools that yell at us that our dependencies are bad, but that’s not really the elephant in the room.
Most of the time, a developer downloads a popular npm package and uses it without understanding the consequences.
What if we had a crowd-sourced tool that worked similarly to npm audit: when you see a package.json file, take a look at the top-level dependencies.
Once you’ve determined those dependencies, are there any potential misconfigurations or security issues that can arise from using each dependency?
Now, I’m sure you’re wondering what this person is talking about, but let’s take a look at how this might work…
Storing the secret in source code is the same as allowing anyone to forge their own valid session; this should be avoided at all costs.
The browser should never send the session cookie over HTTP, this can be achieved in a few ways:
Setting the ‘secure’ flag on the cookie
Using a cookie prefix, __Secure- or __Host-; with __Host- you need to set the path to the root path
We also don’t want to allow CSRF, so we want to ensure the sameSite cookie attribute is applied.
Finally, the default memory store can exhaust the memory on the host, so we should use a different store, for example a MySQLStore.
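Pulling those points together, a hypothetical hardened express-session configuration might look like this (option names follow express-session's documented API; the store line is illustrative and commented out):

```javascript
// Hardened session options reflecting the recommendations above.
const sessionOptions = {
  secret: process.env.SESSION_SECRET, // injected, never committed to source
  name: '__Host-session', // __Host- prefix: browser requires Secure and Path=/
  cookie: {
    secure: true,       // never sent over plain HTTP
    httpOnly: true,     // not readable from script
    sameSite: 'strict', // blocks cross-site requests carrying the cookie
    path: '/',          // required by the __Host- prefix
  },
  resave: false,
  saveUninitialized: false,
  // store: new MySQLStore({ /* ... */ }), // avoid the default MemoryStore in production
};
```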
Now let’s get to the real content.
What if we had a meta analysis approach to try and help drive secure development and identify issues in applications?
npm audit or yarn audit
ripgrep
vuln-regex-detector
ESLint
doyensec/electronegativity