As organizations shift control of their infrastructure and data to the cloud, it is critical that they rethink their application security efforts. This can be accomplished by ensuring applications are designed to take advantage of built-in cloud security controls and configured properly in deployment.
Attend this webcast to gain insight into the security nuances of the cloud platform and risk mitigation techniques. Topics include:
• Common cloud threats and vulnerabilities
• Exposing data through insufficient authorization and authentication
• The danger of relying on untrusted components
• Distributed Denial of Service (DDoS) and other application attacks
• Securing APIs and other defensive measures
2. About Ed Adams, CEO
• Helping companies secure software
• Research Fellow, Ponemon Institute
• Privacy by Design Ambassador, Canada
• Mechanical Engineer, Software Engineer
• In younger days, built non-lethal weapons systems for government & law enforcement
www.edtalks.io
3. Agenda
• Cloud threats and vulnerabilities
• Authorization & Authentication
• DDoS and other application attacks
• Application Code & Untrusted Components
9. Identity & Access Management
• It’s harder than you think!
• Broadly-scoped permissions overprivilege users and services
• Avoid wildcard access
• Reduce scope of permissions to specific business cases
• Cross-account access configuration is scoped to specific use cases
• Access keys rotated on a regular basis
• Users with multiple keypairs flagged for business use case review
CSPs offer some turnkey IAM solutions; however, YOU still have to configure & deploy them correctly
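A minimal sketch of what "avoid wildcard access" can look like in practice: a small linter that flags Allow statements granting a bare `*` action or resource in a simplified AWS-style policy document. The function name and policy shape are illustrative, not a CSP API.

```python
# Sketch: flag overbroad grants in a simplified IAM-style policy.
# The policy layout mimics AWS JSON policies; names are illustrative.

def find_wildcard_grants(policy: dict) -> list:
    """Return Allow statements that grant a bare '*' action or resource."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Normalize single strings to lists for uniform handling
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::reports/*"},            # scoped: OK
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # overbroad
    ]
}
flagged = find_wildcard_grants(policy)
```

A check like this can run in CI so overprivileged grants are caught before deployment rather than in an audit.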
10. Weak Authentication
An attacker’s gateway to system control:
• Unauthorized modification of device settings
• Disruption of service
• Access to critical data and control
Strengthening authentication:
• Multiple types of authenticators (something you know, have, and/or are)
• Client-side authentication duplicated on server side
• Protect authentication tokens, e.g., with OAuth 2.0, TLS, etc.
• Define roles: anonymous, normal, privileged, administrative. How should each be authenticated? Which service(s) does each have access to?
• Consider attribute-based access control vs. role-based
12. Managed Access Control
• Dave needs to be able to write to a web content directory
• Sally needs to be able to modify a database table
Grant role-based access that allows access to different resources. These resources fall under the following categories:
• Computing, Networking, Storage, Management services, Others
By using a managed access approach, you get:
• Centrally managed permissions
• Software-defined configurations
• Reporting to easily audit resource access
13. Tags
• Similar to metadata for cloud resources
• These key/value pairs are useful for searching, reporting, and tracking costs, but also helpful for security
• By matching users or policies to specific tags, you can create dynamic security groups for controlling permissions
• When using CSP IAM features, you can focus on defining roles and policies vs. who can/cannot administer which server
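The dynamic-group idea above can be sketched in a few lines: match a set of required key/value tags against each resource's tags and derive the group membership on the fly. The tag names and helper are illustrative, not a CSP API.

```python
# Sketch: derive a dynamic security group from resource tags.
# Tag keys/values and the function name are illustrative.

def members_of(resources: dict, required_tags: dict) -> set:
    """Return resource names whose tags match every required key/value."""
    return {
        name for name, tags in resources.items()
        if all(tags.get(k) == v for k, v in required_tags.items())
    }

resources = {
    "web-01": {"env": "prod",    "team": "payments"},
    "web-02": {"env": "staging", "team": "payments"},
    "db-01":  {"env": "prod",    "team": "payments"},
}
# Group membership is computed from tags, not maintained by hand
prod_payments = members_of(resources, {"env": "prod", "team": "payments"})
```

Because membership is computed rather than assigned, newly tagged resources pick up the right permissions automatically.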
14. Data Provenance
• Similar to a historical data record – documents inputs, entities, systems, and processes that influence data of interest
• Tied to Big Data applications that have an enhanced degree of data classification for which additional metadata must be incorporated
• Given privacy laws, it’s imperative teams implement proper capture techniques and enhanced logging to execute granular access controls
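A provenance record is essentially an append-only log of who touched which data, how, and from which inputs. A minimal sketch, with illustrative field names:

```python
# Sketch: an append-only provenance log. Each record captures the
# entity, process, and upstream inputs for a data item. Field names
# are assumptions, not a standard schema.
import time

def record_provenance(log: list, data_id: str, entity: str,
                      process: str, inputs: list) -> dict:
    entry = {
        "data_id": data_id,
        "entity": entity,        # who/what acted on the data
        "process": process,      # the transformation applied
        "inputs": inputs,        # upstream records that influenced it
        "timestamp": time.time(),
    }
    log.append(entry)
    return entry

log = []
record_provenance(log, "report-7", "etl-worker", "aggregate", ["raw-1", "raw-2"])
record_provenance(log, "report-7", "analyst-42", "redact", ["report-7"])
# Reconstruct the history of one data item from the log
history = [e["process"] for e in log if e["data_id"] == "report-7"]
```

With such a log in place, granular access decisions and privacy audits can be answered from the recorded lineage instead of guesswork.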
15. Secure Key Management
Traditional Cloud Data
• Some CSPs permit "bring your own" encryption; others offer it natively
• While encryption may occur in the CSP’s environment, customers must maintain control of the keys that secure their data
SaaS Data
• While some providers offer encryption, securing data is the customer’s responsibility, including compliance with data security & privacy mandates
Public Cloud Data
• When using public cloud services, some enterprises send encrypted data to the cloud, while others utilize the CSP’s encryption functionality
Whichever applies to your enterprise, controlling the encryption keys is critical to controlling data
16. Secure Key Management Tips
Don’t Store Your Lock and Key Together
• You may be giving the cloud provider access to your keys
• Separate them and back them up in offline and secured locations
Centralize Your Key Management Infrastructure
• You may be using multiple key management servers and protocols across different services
• This complicates your data infrastructure, increasing cost and risk while decreasing efficiency
• Never hard-code encryption keys
• Carefully plan file system permissions
• Store encryption keys outside the web content directories
• Build apps to support periodic key changes and establish a regular schedule
• Do not include encryption keys in backups
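Two of these tips are easy to sketch in code: load the key from outside the code and web tree (here, an environment variable) instead of hard-coding it, and flag keys that are past their rotation window. The variable name, env var, and 90-day window are illustrative assumptions.

```python
# Sketch: never hard-code keys, and track rotation age.
# APP_ENC_KEY and the 90-day window are illustrative choices.
import os
from datetime import datetime, timedelta

ROTATION_WINDOW = timedelta(days=90)

def load_key() -> bytes:
    """Read the key from the environment, never from a source literal."""
    key = os.environ.get("APP_ENC_KEY")
    if not key:
        raise RuntimeError("APP_ENC_KEY is not set; refusing to start")
    return key.encode()

def rotation_due(created: datetime, now: datetime) -> bool:
    """True once a key is older than the rotation window."""
    return now - created > ROTATION_WINDOW

os.environ["APP_ENC_KEY"] = "demo-only-not-a-real-key"
key = load_key()
stale = rotation_due(datetime(2024, 1, 1), datetime(2024, 6, 1))
```

In production the environment variable would itself be populated from a dedicated key management service, not committed to configuration files or backups.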
18. DDoS: Available in 2 Flavors
• Botnets that flood your application with traffic
• Results in service or entire app becoming unavailable
• Reflected Attacks
• Exploits a flaw in a network protocol to amplify traffic
19. Distributed Denial of Service (DDoS) via Botnet
• Allows a single attacker to send commands to thousands of controlled zombies to attack a target
• Generates more traffic than the system can handle; shuts down service capabilities
20. Distributed Denial of Service (DDoS) via flaws in protocols
• Also known as a reflected attack
• Amplifies a small amount of traffic into a larger amount
• Attacker sends a group of devices a connection request that looks like it’s from the victim machine
• Each device sends its acknowledgment to the victim computer
Prank example: asking GrubHub to email all their menus to your friend’s address
21. Mitigating DDoS
CSPs offer DDoS attack mitigations (both tech & best practices)
• AWS Shield Standard is free
• Amazon CloudFront and Amazon Route 53 offer additional protections
• Elastic load balancing
• Automatic monitoring, notifications, and traffic ceilings
• Geographic isolation and dispersion of excess traffic
Use CSP DDoS protections with Web Application Firewalls (WAF)
• Also help with SQL injection & cross-site request forgery
Limit application access to ports, protocols, and services
• Principle of least privilege regarding traffic/communication
22. Credential Stuffing
• Hackers leverage the power of APIs to initiate an account hijack
• Brute-force attack using leaked username/password combinations
• The proliferation of microservices and containers makes this attack popular
• Exchanging info via APIs is standardized and well suited for automation
• Throttle for defense
• Set rate limiting for authentication attempts
• Practice “zero trust” – don’t trust any input without verification
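The throttling defense above can be sketched as a per-source attempt counter over a fixed time window; the window length and attempt limit are illustrative.

```python
# Sketch: per-source rate limiting of login attempts (a throttle).
# WINDOW_SECONDS and MAX_ATTEMPTS are illustrative thresholds.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5
_attempts = defaultdict(list)  # source -> timestamps of recent attempts

def allow_login_attempt(source: str, now: float = None) -> bool:
    """Allow an attempt unless the source exceeded the window limit."""
    now = time.time() if now is None else now
    recent = [t for t in _attempts[source] if now - t < WINDOW_SECONDS]
    _attempts[source] = recent
    if len(recent) >= MAX_ATTEMPTS:
        return False          # throttled: too many attempts in the window
    recent.append(now)
    return True

# Seven rapid attempts from one address: the last two are throttled
results = [allow_login_attempt("198.51.100.7", now=100.0 + i) for i in range(7)]
```

As the notes later observe, attackers can slow their scripts to stay under any fixed rate, so throttling is one layer, not a complete defense.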
24. Application Hardening
• An insecure application deployed in the cloud is still insecure
• Most vulnerabilities exist at the application level
• Follow AppSec best practices
• Include security in requirements/use cases
• Use known good components in design
• Train build & deploy teams on security
• Regularly assess/test your application
25. API Insecurity
• Expose back-end systems to mobile apps, browsers, and other systems
• Threats similar to web apps, but have special considerations
• Handling input, parsing data, authenticating users
• 83% of web traffic is now API traffic*
• In 2019, Gartner** predicted:
• Within a year, 90% of web-enabled apps will be more exposed to attack by API weaknesses than via the user interface
• Within two years, APIs will be the most-targeted attack vector
*Akamai – Retail Attack & API traffic
** Gartner - How to Build an API Strategy
26. Insecure APIs
• CSP code modules may introduce vulnerabilities similar to native code
• Common to exploit API keys to identify 3rd-party apps using those services
• Other application exposure points:
• Anonymous access, reusable tokens, clear-text authentication, open transmission of content, rigid access controls
• To protect against API attacks:
• Use CSP APIs that help control access to resources, optimize delivery of workloads, and provide insights around usage
• Audit managed API log files on a regular basis
• Enforce strict access control mechanisms (based on least-privilege and need-to-know)
• Segregate duties and responsibilities
• Implement lockouts for repeated incorrect password entry
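The lockout control in the last bullet can be sketched as a per-account counter of consecutive failures; the threshold and in-memory store are illustrative (a real service would persist state and add an unlock path).

```python
# Sketch: lock an account after repeated incorrect passwords.
# FAILURE_LIMIT and the in-memory stores are illustrative.
FAILURE_LIMIT = 3
_failures = {}     # username -> consecutive failed attempts
_locked = set()    # usernames currently locked out

def record_login(username: str, success: bool) -> str:
    """Return 'ok', 'failed', or 'locked' for this attempt."""
    if username in _locked:
        return "locked"
    if success:
        _failures[username] = 0   # a success resets the counter
        return "ok"
    _failures[username] = _failures.get(username, 0) + 1
    if _failures[username] >= FAILURE_LIMIT:
        _locked.add(username)
        return "locked"
    return "failed"

outcomes = [record_login("alice", False) for _ in range(4)]
```

Unlike the per-IP throttle used against credential stuffing, a lockout is keyed to the account, so it also catches attacks spread across many source addresses.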
27. Treat APIs More Like Products
• Have their own SDLC for designing, building, testing, managing
• API-specific testing
• Responses to invalid data types or formats, e.g., malformed XML/JSON
• Attacks meant to elicit API failure (abuse cases)
• Attacks that induce “old school” flaws, e.g., buffer overflows
• API-specific security training
• OWASP API Security Top 10 is a great start*
* https://owasp.org/www-project-api-security/
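An API-specific abuse-case test along the lines above: feed malformed or wrongly shaped JSON to an endpoint and check that it fails in a controlled way (a 4xx status) rather than crashing (a 5xx). The handler below is a toy stand-in for a real endpoint.

```python
# Sketch: abuse-case testing for an API endpoint. handle_request is a
# toy stand-in returning HTTP-style status codes; in real testing you
# would call the deployed API instead.
import json

def handle_request(body: str) -> int:
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return 400            # reject malformed input explicitly
    if not isinstance(payload, dict) or "user_id" not in payload:
        return 422            # well-formed JSON, wrong shape
    return 200

# Abuse cases: truncated JSON, non-JSON, wrong top-level types
abuse_cases = ['{"user_id": 1', "<xml?>", '"just a string"', "null"]
statuses = [handle_request(c) for c in abuse_cases]
```

The property worth asserting in a test suite is that no abuse case yields a 5xx: every failure should be one the API anticipated.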
28. Today’s Applications are more Assembled than Coded
3rd-party Components
• 99% of codebases contain at least one OSS component
• 91% of apps contain outdated or abandoned OSS components
• 75% of apps have OSS components with known security vulnerabilities
• 60% of breaches involved vulnerabilities for which a patch was available but not applied
• 70% of application code is comprised of OSS
*sources: 2020 Open Source Security and Risk Analysis (OSSRA) report; CSO Online “9 key cybersecurity statistics at-a-glance”
29. Assessing 3rd party software
• Security audit and reviews
• Simple questionnaires vs. technical analyses
• Threat Model: know the critical assets the software will interact with
• Analyze entry and exit points
• Create risk profiles for each key asset
• Analyze and rate risks
• Dynamic Analysis / Penetration Testing
• Even if you can’t fix the problem identified, you can mitigate it
• Responsibly disclosing vulnerabilities can improve partnership with ”in this together” communication
• Red team & Attack Simulation
• Objective-based options: can you compromise/access asset X?
• Attackers often use a vulnerability in low-risk application to escalate privilege
• SCA (software composition analysis)
Not very different from assessing your own; same attack methods apply
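The "analyze and rate risks" step above is often reduced to scoring each risk by likelihood times impact and working down the ranked list. A minimal sketch, with an illustrative 1–5 scale and made-up example assets:

```python
# Sketch: rate and rank risks from a threat model by
# likelihood x impact (the 1-5 scale and assets are illustrative).

def rank_risks(risks: list) -> list:
    """Sort risks by likelihood * impact, highest score first."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)

risks = [
    {"asset": "session tokens", "likelihood": 3, "impact": 5},
    {"asset": "public docs",    "likelihood": 4, "impact": 1},
    {"asset": "customer PII",   "likelihood": 2, "impact": 5},
]
ranked = rank_risks(risks)
top = ranked[0]["asset"]   # address the highest-priority risk first
```

Even this crude scoring forces the conversation the slide is after: which key assets get attention first, and why.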
30. Threat & Risk Identification
• Initial Threat Modeling
• Robust
• Business and Risk focused
• Can be done before any code bought/built
• Updates to Threat Model
• Significant system change
• Realization of new threats
• New security-related development
• e.g., authentication
Dataflow Model courtesy of OWASP
Resource pooling -- gives you access to powerful systems you might not have otherwise been able to afford; however, you’re not the only one using the software/system!
Measured service – resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer.
Cloud services provide ease of access, relative anonymity, and robust sharing capabilities; however, these same features result in attacks that are attractive to hackers.
Data exposure can include disclosure of database info, internal directory & file paths, and similar internals.
DPFs support the separation and parallel processing of an application’s procedural, logical, functional, and physical components. They are vulnerable because access is controlled at the client level not file system level.
ABAC controls access to objects by implicitly evaluating rules against the requesting entity’s actions, the attributes associated with objects relevant to the request, and the environment in which the action is performed.
Most organizations currently use Role-Based Access Controls (RBAC) to handle authorization rules for applications and networks instead of Attribute-Based Access Controls (ABAC), which provide a more granular authorization model.
RBAC, as its name implies, assigns access based on a user’s role. Due to the number of potential roles that must be managed, often manually, RBAC is not as well-suited to the dynamic environments associated with cloud-based services.
ABAC is distinguishable from RBAC because it controls access to objects by implicitly evaluating rules against the requesting entity’s actions, the attributes associated with objects relevant to the request, and the environment in which the action is performed.
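An ABAC decision along these lines can be sketched as a rule evaluated against the subject's attributes, the object's attributes, and the environment, rather than a fixed role. The attribute names and the rule itself are illustrative assumptions.

```python
# Sketch: an ABAC-style decision combining subject, object, and
# environment attributes. The rule and attribute names are illustrative.

def abac_allows(subject: dict, action: str, obj: dict, env: dict) -> bool:
    # Environment condition: only during business hours
    if not 9 <= env["hour"] < 17:
        return False
    # Read: allowed for anyone on the owning team
    if action == "read":
        return subject["team"] == obj["owner_team"]
    # Write: engineers only, and never on production resources
    if action == "write":
        return subject["dept"] == "engineering" and obj["env"] != "prod"
    return False

doc = {"owner_team": "payments", "env": "staging"}
dev = {"team": "payments", "dept": "engineering"}
decision = abac_allows(dev, "write", doc, {"hour": 14})       # allowed
after_hours = abac_allows(dev, "read", doc, {"hour": 22})     # denied
```

Note that no role list is maintained anywhere: changing the environment or an object's attributes changes the decision automatically, which is what makes ABAC suit dynamic cloud environments better than manually curated roles.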
Data provenance documents the inputs, entities, systems, and processes that influence data of interest, in effect providing an historical record of the data and its origins.
Proper implementation is complex, expensive, and usually makes sense for highly sensitive data only.
Much like granular access controls and audits, data provenance is tied to Big Data applications that have an enhanced degree of data classification and categorization for which additional metadata must be incorporated.
With today’s globalization and jurisdictional privacy laws, it is imperative that developers understand proper data capture techniques and the need to implement enhanced logging to execute the necessary granular access controls.
For Traditional Cloud Data
Enterprises enjoy numerous benefits from offloading workloads to traditional cloud services providers, such as co-location services, managed services providers and others, but still need to ensure the security of their data. Some cloud providers permit "bring your own" encryption, while others offer encryption natively. While the data encryption may occur in the cloud provider's environment, customers must maintain control of the keys that secure their data.
Secure Key Management for SaaS Data
Gartner reports that enterprises now spend tens of billions of dollars on software-as-a-service offerings, with continued growth expected. While some SaaS providers have added encryption to their increasingly powerful applications, ensuring the security of sensitive data is ultimately the customer's responsibility. This includes key management in compliance with data security and privacy mandates.
Secure Key Management for Public Cloud Data
When using public cloud services such as AWS, Microsoft Azure or others, some enterprises will send encrypted data to the cloud, while others may utilize the encryption offered by the cloud provider. Whichever security key management process applies to your enterprise, controlling the encryption keys is critical to maintaining control of your data.
e.g., flood of HTTP requests to a login page
Let’s look at an illustration of a DDoS attack using a large existing botnet.
The botnet allows a single attacker to send commands to thousands of controlled zombies to attack a target system.
This generates more traffic than the system can handle, and effectively shuts down its service capabilities.
This type of attack is performed with the help of a botnet; the participating hosts are also called reflectors in this case.
The attacker uses the botnet to send a host of innocent computers a connection request that looks like it came from the victim machine (this is done by spoofing the source address in the packet header). Each of those computers then sends an acknowledgment to the victim computer. Since multiple such requests arrive from different computers at the same machine, the victim is overloaded and crashes.
This type is also called a smurf attack.
Verify incoming connections before passing them to the protected service.
It’s a threat that was around before everyone started adopting the cloud, but the credential stuffing attack is still a problem security architects have a hard time handling. Credential stuffing occurs when hackers leverage the power of APIs to initiate an account hijack with a high probability of infiltration. APIs were, after all, created to automate the exchange of data and facilitate communication between apps. This specific attack is one of the most frequently used by hackers, given the proliferation of microservices and containers that rely on APIs to interact with one another.
To fend off credential stuffing attacks, set rate limiting for authentication attempts, also known as throttling. However, hackers can work around this by configuring scripts to submit requests at a slower rate that avoids blocking. Hackers also rely on login failure notifications to identify which usernames do and do not exist, using the data to tweak credential lists and increase the probability of success. More and more, organizations are relying on the principle of zero trust to strengthen security. The concept asserts that an organization should not trust anything inside or outside its perimeter without verification.
Application programming interfaces (APIs) from cloud service providers may not be secure.
When these code modules are included in your application, significant vulnerabilities may be introduced including easily exploited API keys employed by web and cloud services to identify third-party applications using the services.
Anonymous access, reusable tokens/passwords, clear-text authentication, open transmission of content, and rigid access controls that can’t be customized easily also can expose your applications to risks.
To protect against API attacks, consider implementing managed APIs that provide several protections, such as those that help control access to resources, optimize delivery of API workloads, and provide insights around API usage and quality of service. Additional considerations include:
• Audit managed API log files on a regular basis
• Enforce strict access control mechanisms (based on least-privilege and need-to-know)
• Segregate duties and responsibilities
• Implement lockouts for repeated incorrect password entry
Web services increase the functionality and interoperability of web-based applications. At the same time, they introduce new security risks. They increase the application’s overall attack surface, provide programmatic interfaces that facilitate automated attacks, and introduce attack vectors that could be easily overlooked or neglected. While many of the threats for web services are similar to any web application, there are special considerations for handling input, parsing data, and authenticating users.
Secure applications should use trusted service layer APIs (commonly using JSON, XML, or GraphQL) that implement the following controls:
• Adequate authentication, session management and authorization of all web services.
• Input validation of all parameters that cross from untrusted to trusted zones.
• Security controls for all API types, including cloud and Serverless APIs.
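The second control above, validating every parameter that crosses from an untrusted to a trusted zone, can be sketched as an allow-list schema check: anything not explicitly expected and well-formed is rejected. The schema, field names, and patterns are illustrative assumptions.

```python
# Sketch: allow-list validation of parameters crossing a trust
# boundary. The schema and field names are illustrative.
import re

SCHEMA = {
    "username": re.compile(r"^[a-z0-9_]{3,32}$"),
    "age":      re.compile(r"^\d{1,3}$"),
}

def validate_params(params: dict) -> dict:
    """Return an error map; an empty map means all parameters passed."""
    errors = {}
    for name, value in params.items():
        pattern = SCHEMA.get(name)
        if pattern is None:
            errors[name] = "unexpected parameter"   # not on the allow-list
        elif not isinstance(value, str) or not pattern.match(value):
            errors[name] = "invalid value"
    return errors

clean = validate_params({"username": "d_adams", "age": "42"})
dirty = validate_params({"username": "x", "debug": "1"})
```

Rejecting unexpected parameters outright (rather than ignoring them) closes off mass-assignment style abuse, one of the recurring issues in the OWASP API Security Top 10.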
Threat & Risk Identification
Having an accurate and complete inventory of all your COTS applications is a key component. Make sure you are aware of all the libraries and components that are being used to make up the software by the third-party vendors.
You can better understand risks by conducting a variety of analysis and testing techniques including:
• Security audit and reviews. These can be as simple as questionnaires as opposed to technical analysis.
• Threat Modeling. It’s important to know all the critical assets COTS software will interact with. These assets can be identified through threat model exercises:
o Analyze entry and exit points.
o Create risk profiles for each key asset.
o Once you have analyzed and evaluated risks, rank them and address the highest-priority risks first.
• Conduct Penetration Testing. You may find vulnerabilities the ISV doesn’t know about. Responsibly disclosing vulnerabilities to the ISV can help cement a stronger partnership and facilitates open communication with the vendor. Learn about our software penetration testing approach.
• Conduct red teams and attack simulations on your IT infrastructure with very specific goals in mind (e.g., stealing your most sensitive IP) to understand which 3rd-party (and even internally developed) applications are putting your enterprise most at risk. Attackers often use a vulnerability in a low-risk application to gain access to more valuable targets via escalation of privilege and application traversal. This isn’t limited to COTS software or internally developed applications… to an attacker, all applications are potential entry points.