When did we forget that old saying, “prevention is the best medicine”, when it comes to cybersecurity? The current focus on mitigating real-time attacks and creating stronger defensive networks has overshadowed the many ways to prevent attacks right at the source – where security management has the biggest impact. Source code is where it all begins and where attack mitigation is the most effective.
In this webinar we’ll discuss methods of proactive threat assessment and mitigation that organizations use to advance cybersecurity goals today. From using static analysis to detect vulnerabilities as early as possible, to managing supply chain security through standards compliance, to scanning for and understanding potential risks in open source, these methods shift attack mitigation efforts left to simplify fixes and enable more cost-effective solutions.
Webinar recording: http://www.roguewave.com/events/on-demand-webinars/shifting-the-conversation-from-active-interception
3. “With all software, there will be more security holes – you need to plan for it. Have tooling and a notification process ready so you can quickly learn when there is an issue, whether it’s in open source or code from somewhere else, and then have a mitigation plan in place so you know what is affected.”
– Rod Cope, CTO
30. See us in action:
www.roguewave.com
Rod Cope | rod.cope@roguewave.com
Editor's Notes
There are several risks involved in securing embedded systems, including network intrusion, information theft, outside reprogramming of systems, and code vulnerabilities.
So when we examine security as it often happens during development, the functions are separate. In a traditional development environment, security, compliance, and development are separate, autonomous groups. The argument for this is the principle of Separation of Duties – in this case, ensuring independence between development and security. Development builds a release, tests it for functionality, then passes it to security for testing.
Security tools have traditionally been used only by security personnel, typically later in the development lifecycle. Tools include static analysis of source code, dynamic analysis of running applications, and scanning for vulnerable open source components. At each phase of the Secure Development Lifecycle, there are a number of best practices. Security requirements, threat modeling, and several other activities help organizations avoid problems later in the secure software development lifecycle.
The problem with this, particularly as it applies to an Agile environment, is that the testing happens late in the development process, independent of the development team. For example, traditional static analysis requires a compilable application, complete with all dependencies, which is usually only possible after significant development effort. Dynamic analysis, or pen testing, requires a finished application in a test environment, complete with data; by definition, this comes very late in the development process.
There have been many studies on the costs associated with delaying security testing, and the numbers remain fairly consistent: the later in the process a bug is identified, the more costly it is to remediate – up to 150X as much as fixing the same bug during the requirements or design phase. It’s easy to understand why. Not only is more code refactoring likely once an application is “fully baked”, but organizational costs come into play, including triage, prioritization, research, recoding, and retesting to ensure the fix didn’t introduce other problems. In short, costs are higher and releases are delayed. We recently had Jeffrey Hammond of Forrester present at an event, and he said, “It takes 18 months to deploy a new release of a project/product/app where a single line of code is changed.”
Finally, we’ve seen this model create conflict between security and development teams. Developers often feel that security has little involvement during the build process, only to parachute in late, run the code through their magic boxes, and produce a long list of bugs – with lots of false positives – just as the product approaches its release date.
Everyone has seen this chart at some point, I’m sure. It maps where software defects are found, and the cost to fix them, across the software development process. The later you find them, the more they cost to fix.
With the build-only approach to source code analysis, issues are found before release, but later in the cycle than is desirable.
Now we will discuss some of the solutions we offer to ensure your developers deliver secure, defect-free software in their embedded systems.
Memory leak – a program’s failure to release memory it no longer needs; leaked memory accumulates over time, degrading performance or causing failure.
Untrapped exceptions – a failure at a higher level to catch errors generated at a lower level, causing unexpected behavior or crashes.
Unchecked stacks and buffers – writes to stacks or buffers that are not bounds-checked, allowing data to overwrite adjacent memory.
Misplaced pointers – pointers that reference freed, uninitialized, or out-of-bounds memory.
Problems with array indexes – off-by-one or out-of-range indexes that read or write outside the array.
Errors in error handlers – error handling is the anticipation, detection, and resolution of programming, application, and communications errors; the handlers themselves can contain bugs that mask or compound the original failure.
These types of bugs occur all the time and are sometimes easy to fix, sometimes not. The trick is detecting where the flaws are, especially in code that’s split across multiple files or among many developers.
Graph: Continuous improvement fits well with the Agile methodology, which is built around brief, repeatable processes. Helping Agile teams avoid, rather than fix, security issues should be a high priority. Rather than forcing developers to look up remediation advice on the web or in internal coding standards, push that information directly to the IDE, where it can be used immediately.
Studies show that people learn through repetition. The graph represents what is known as the “forgetting curve”. The black line in the graph shows memory retention from a single class. In terms of security training, this means that holding a secure coding training event can be helpful, but if the information is not reinforced quickly and consistently, over 90% of the knowledge from that class can vanish within the first week! If brief reminders are provided, as shown by the yellow line in the graph, knowledge retention improves dramatically, until it ultimately becomes part of a student’s long-term memory. Pushing security testing and remediation guidance to the IDE also provides developers with near real-time feedback, improving their ability to recognize risky coding structures and self-correct.
We support hundreds of checkers which can be selected on an individual basis to fit the customer’s needs.
We support numerous coding standards.
Julia to present this slide as we transition into open source software—OpenLogic
Support - Ensure your release dates with OpenLogic technical support, providing the same level of confidence for open source code as technical support for commercial products. Supporting hundreds of open source software packages for issues encountered in both development and production environments, OpenLogic technical support has you covered.
Scanning - Knowing what, where, and how open source software is used within your organization is key to reducing risk and minimizing liability. Understanding the technical issues, licensing models, and security flaws before using open source code is critical to making good choices, and it makes delivering applications on time, with open source you can be confident in, achievable. Our scanning tool is a software-as-a-service (SaaS) platform for comprehensive governance and provisioning of open source software. It scans source code as well as binaries to identify open source code and licenses – even when the open source code has been copied or modified.
Services –
Application Audit - Our Application Audit service analyzes internally-developed software for open source packages and identifies the bill of materials (BOM) and bill of licenses for open source components. After comprehensive code scanning, we aggregate the results into detailed reports that help you make informed decisions about distribution.
Application Certification – Provides certification that an application has been scanned for open source software and licenses, and that all open source license obligations have been met. With this certification, you avoid customer objections and potential litigation.
License Obligation Analysis - Our License Obligation Analysis service uncovers the license information you need to understand open source license obligations and to reduce potential risks. We identify the licenses, obligations, and requirements associated with open source packages your organization uses. With this service, we provide comprehensive reports that give legal and compliance staff the information they need to make informed decisions about open source deployments and distribution.
M&A Open Source Audit - Buyers, venture capitalists, legal and compliance teams, and other interested parties use M&A Open Source Audits to ensure that products are correctly licensed and free of intellectual property conflicts. Sellers can use the M&A Open Source Audit service to ensure mergers and acquisitions move forward smoothly and without asset devaluation.
Cloud Services - We offer pre-configured stacks on the AWS Marketplace, Certified CentOS images, and, as always, trusted OSS support.
Professional Services - We also offer best-practice advice and hands-on assistance. Consulting packages are available with varying levels of hands-on guidance and technical depth to address different needs and levels of complexity.
http://www.openlogic.com/products-services
What attacks will these software components be exposed to?
Will it be accessible over some type of network? Is remote access possible? Is the weakness easy to comprehend by the average attacker?
How do we gauge the “security health” of code coming in?
How do we achieve compliance?
Lengthy process, unclear expectations, lots of resources
Let’s not forget the regular bugs
Can automated testing be more effective?