By Jonathan Knudsen
Product security is hard. The goal is to minimise risk by finding and fixing as many vulnerabilities as possible. Developers create vulnerabilities. A vulnerability is born every time a developer does not correctly handle a corner case, or forgets to validate some input, or makes some other mistake.
Knowing this, you can quickly use logic to come to a completely incorrect conclusion: Developers make vulnerabilities. Vulnerabilities increase risk. Therefore, to reduce risk, let’s make it so developers won’t make mistakes. We’ll use a combination of public shaming and security education.
The problem is not that developers are unaware of security, or uninterested. Well, sometimes that is the problem, but more on that later. The real reason developers create vulnerabilities is that they are human. The reason these vulnerabilities persist into released products is that developers are working inside a broken system.
Developers are Rock Stars
With tongue in cheek, we sometimes say that developers are rock stars. The analogy makes sense. Suppose you went to see a concert by the Jonas Brothers. (I don’t know why. It’s just hypothetical.) The Jonas Brothers are the developers in this scenario. While they bring their own talents to the event, it won’t be much of an experience without the instrumentalists in the band, the lighting technicians, the sound people, the ticket sellers, the people who take care of the venue, and so forth. Performers cannot succeed if they are not placed in an environment that helps them succeed.
Likewise, if you want to minimise risk in the products you’re making, you must surround your developers with a process that helps them succeed.
A Broken System
Unfortunately, developers are trained to value functionality above all else. Think about it. When undergraduate computer science students have to do homework, they start with a list of requirements, the assignment itself. They use whatever they can get to fulfil the requirements. Beyond that, some assignments have automated tests that must also be passed. The students are focused solely on providing the required functionality.
Fast-forward to the workplace, where the story is much the same. Developers start with a list of requirements, created by a product manager or a designer. They use whatever they can get to fulfil the requirements. Beyond that, they might have to pass some automated tests. The developers are focused solely on providing the required functionality.
It’s akin to asking someone who has just learned how to nail two pieces of wood together to build a house. Would you feel safe walking around on the upper floor? Would the walls stay up in a strong wind?
The Right Set of Tools
Through a combination of intense security training and a programme of severe punishments for security vulnerabilities, we should be able to make developers write better code, right?
Not quite.
Security training is an excellent idea. Among other benefits, security training will help developers write better code. But no matter how much training your developers get, they are still human and will still make mistakes.
The key is improving the process and giving your developers the right tools. Instead of having developers focus solely on functional hurdles, add security testing tools into the automated build and test processes. With the right integrations, developers can work from the same to-do list they’ve already been using (likely Jira) and fix functional and security issues in the same smooth workflow.
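To make that concrete, here is a minimal sketch (not any particular vendor’s integration) of a build step that reads findings from a hypothetical scanner report and files them in Jira through its standard REST API, so security issues land on the same to-do list as everything else. The report format, its fields, and the environment variable names are illustrative assumptions:

```python
"""
Minimal sketch of a CI step that turns security-scanner findings into
Jira issues. The findings.json format and the environment variable
names are assumptions for illustration; the REST call uses Jira's
standard /rest/api/2/issue endpoint.
"""
import json
import os

import requests  # pip install requests

JIRA_URL = os.environ["JIRA_URL"]      # e.g. https://jira.example.com (assumption)
JIRA_USER = os.environ["JIRA_USER"]
JIRA_TOKEN = os.environ["JIRA_TOKEN"]
PROJECT_KEY = os.environ.get("JIRA_PROJECT", "SEC")


def file_issue(finding: dict) -> str:
    """Create one Jira issue for a single scanner finding."""
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "summary": f"[security] {finding['title']} in {finding['file']}",
            "description": finding.get("details", ""),
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        json=payload,
        auth=(JIRA_USER, JIRA_TOKEN),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]


if __name__ == "__main__":
    # findings.json stands in for whatever report your scanner emits.
    with open("findings.json") as fh:
        for finding in json.load(fh):
            print("Filed", file_issue(finding))
```

In practice, commercial tools can do this plumbing for you; the point is simply that findings should arrive where developers already work.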
The Tools
Many product development teams start on this journey using source analysis, sometimes also known as static analysis or SAST. A source analysis tool examines your source code and reports on probable bugs. For example, Coverity is an industry-leading source analysis solution. You can easily integrate Coverity analysis into existing automated builds, then feed identified issues to your issue tracker. Coverity can even assign issues to developers automatically based on information in your source code management system (e.g., git).
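As a rough illustration, a nightly build wrapper might capture and analyse the build with Coverity’s command-line tools. The build command, directory name, and the omitted publish step below are assumptions you would adapt to your own pipeline:

```python
"""
Sketch of wrapping an existing automated build with Coverity source
analysis. cov-build and cov-analyze are part of the Coverity analysis
toolkit; the build command, intermediate directory, and the final
"publish results" step are assumptions to adapt to your own pipeline.
"""
import subprocess

IDIR = "cov-int"             # intermediate directory for the capture
BUILD_CMD = ["make", "-j4"]  # assumption: replace with your real build command


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    # Capture the build so the analysis sees exactly what the compiler sees.
    run(["cov-build", "--dir", IDIR] + BUILD_CMD)

    # Analyse the captured build for likely defects.
    run(["cov-analyze", "--dir", IDIR])

    # Publishing results (for example to Coverity Connect, which can then
    # push issues to your tracker and auto-assign them from SCM history)
    # is deployment-specific, so it is left out of this sketch.


if __name__ == "__main__":
    main()
```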
Supply chain analysis, also known as software composition analysis, is another useful, high-impact tool that you can easily integrate into existing workflows. Black Duck is the premier supply chain analysis solution. It builds a comprehensive list of the third-party components you’ve used and helps you manage both security risks and license compliance risks. You can also easily integrate Black Duck into existing automated build processes so it can flag known vulnerabilities and license policy violations in your regular issue tracker.
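Purely as a sketch, a pipeline step could invoke Synopsys Detect (the usual driver for Black Duck scans) after the normal build. The script location, property names, project naming, and credentials shown here are assumptions to verify against your own Black Duck documentation:

```python
"""
Sketch of a pipeline step that runs a Black Duck scan via Synopsys
Detect. The detect.sh location and the property names are assumptions
to check against your own deployment; credentials come from
environment variables.
"""
import os
import subprocess


def run_black_duck_scan(source_dir: str, project: str, version: str) -> None:
    cmd = [
        "bash", "detect.sh",  # assumption: Detect script already fetched into the workspace
        f"--blackduck.url={os.environ['BLACKDUCK_URL']}",
        f"--blackduck.api.token={os.environ['BLACKDUCK_TOKEN']}",
        f"--detect.project.name={project}",
        f"--detect.project.version.name={version}",
        f"--detect.source.path={source_dir}",
    ]
    # A non-zero exit code can be used to fail the build on policy violations.
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    run_black_duck_scan(source_dir=".", project="my-app", version="nightly")
```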
You can integrate other types of security testing tools into your existing pipelines as well, including IAST (Seeker), fuzzing (Defensics), and DAST (Tinfoil). When humans hunt for vulnerabilities, the work is slow and expensive. One effective way to control risk is to automate security testing as much as possible, so you get the most out of your security investment.
Tackling Lack of Patience
Integrating with existing processes is excellent, but think about the life cycle of a vulnerability:
- A developer makes a mistake and creates a vulnerability.
- The nightly build occurs, including security testing.
- The security testing observes the vulnerability and creates an issue in the issue tracker.
- The developer responds to the issue by fixing the vulnerability.
The round-trip time is 12 to 24 hours under the best of circumstances, which is pretty darn good. To really optimise the developer experience, though, imagine if we could get security testing results delivered directly to the developer as they write code.
The Code Sight plugin makes it happen. Developers do their work in an integrated development environment (IDE) such as IntelliJ or Visual Studio. The Code Sight plugin installs into the IDE and shows information from security testing tools right alongside the code. Code Sight can even use Coverity technology to flag coding errors as they are written, much like a word processor putting a squiggly red line under a misspelled word to show you what to correct. When reporting vulnerabilities or other issues, Code Sight also displays relevant links to security learning, giving developers the context and knowledge they need to fix the problem.
Let Developers be Developers
Developers are the creative heart of your product organisation. Part of the work of moving to a secure development life cycle is trying to minimise disruption to your organisation. While it is important to provide security education to everyone in the organisation, including developers, you cannot expect that developers will start writing code that is free of vulnerabilities.
The keys to effectively implementing security testing are integration and automation. Security testing has to happen automatically, and it has to work with your existing processes. Developers are at the centre of this dance, so it’s important to get security testing results in front of developers just like you would with any other type of test results. Integrating with existing build pipelines and issue tracking systems provides this visibility, and an IDE-based solution like Code Sight provides even tighter feedback loops, accelerating your secure development process. With the right tools, developers will help your team create more secure, more robust, better products.
(Disclaimer: The author is a Senior Security Strategist at Synopsys Software Integrity Group and the views expressed here are his own and need not be those of the publication)