DryRun Security today added the ability to use natural language to define and enforce application security policies as application developers build software.
Fresh off raising an additional $8.7 million in funding, DryRun Security CEO James Wickett said Natural Language Code Policies (NLCP) extend the capabilities of a platform the company developed that uses multiple large language models (LLMs) to identify the level of risk attached to any change made to a code base.
That ability to provide Contextual Security Analysis (CSA) replaces the need to rely on legacy code analysis tools that surface too many false positives for application developers to trust, he added.
NLCP extends that capability further by making it simpler to incorporate security rules into application development without having to master yet another programming tool, said Wickett.
As the overall speed at which code is written in the age of artificial intelligence (AI) increases, it has become apparent that application developers will also need to rely on AI to ensure that code is secure. The DryRun Security platform uses multiple LLMs to identify various types of vulnerabilities in real time as pull requests are made to a GitHub repository.
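As a generic illustration only (not DryRun Security's actual implementation or API), a review along these lines might pair plain-English policies with a prompt assembled from the pull-request diff and handed to an LLM. The `POLICIES` strings, `build_review_prompt` and `review_diff` functions below are all hypothetical:

```python
# Hypothetical sketch of natural-language policy review for a PR diff.
# The policy wording and function names are illustrative assumptions,
# not DryRun Security's actual syntax.

POLICIES = [
    "Flag any change that disables TLS certificate verification.",
    "Flag new SQL queries built with string concatenation.",
]

def build_review_prompt(diff: str, policies: list[str]) -> str:
    """Assemble an LLM prompt asking for policy violations in a diff."""
    rules = "\n".join(f"- {p}" for p in policies)
    return (
        "You are a security reviewer. Check this pull-request diff "
        f"against these natural-language policies:\n{rules}\n\n"
        f"Diff:\n{diff}\n\n"
        "List any violations with line references."
    )

def review_diff(diff: str, llm_call=None) -> str:
    """Route the prompt to an LLM client; `llm_call` is a stand-in for
    whichever model API is in use. With no client wired in, the prompt
    itself is returned so it can be inspected."""
    prompt = build_review_prompt(diff, POLICIES)
    if llm_call is None:
        return prompt
    return llm_call(prompt)

sample_diff = "+ requests.get(url, verify=False)"
print(review_diff(sample_diff))
```

In practice such a check would run from a repository webhook on each pull request, with the model's findings posted back as review comments.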
The additional funding will be used to expand the engineering, marketing and sales teams to drive further adoption of the platform, said Wickett.
It’s not clear to what degree organizations are applying AI to DevSecOps workflows, but there is a clear need to make it simpler for application developers to follow best practices. There has been a tendency to shift too much accountability for application security onto the shoulders of developers without providing them the tools required to succeed, said Wickett.
Left with a choice between investigating false positives or falling behind on delivery schedules, many application developers will simply opt to ignore static reviews of code, he added. The DryRun Security platform creates an equilibrium that enables application security goals to be achieved without slowing down the pace of application development, said Wickett.
In the short term, AI tools have increased the amount of bad code being generated largely because they don’t have enough visibility into the production environments where that code is intended to run. As a result, many application developers are spending too much time trying to debug code that they don’t understand because it was created by an AI tool. The DryRun Security platform promises to reduce the time spent debugging security vulnerabilities that might have been created by either an AI tool or a human developer.
Hopefully, there will soon come a day when the overall number of cybersecurity incidents dramatically declines as application security improves. Far too many of those incidents can be traced back to simple mistakes made because an application developer didn’t understand the implications of a change to a code base. The challenge now is putting the AI tools needed to prevent those mistakes from happening in the first place into developers’ hands as soon as possible.