Most security and software development professionals are using GenAI-based solutions for building or delivering applications, but despite its widespread use, significant security challenges persist.
The Regina Corso survey of more than 400 security professionals and software developers found nearly eight in 10 development teams regularly integrate GenAI into their workflows.
However, most developers (85%) and security professionals (75%) worry that depending too heavily on these tools could compromise security.
One of the most pressing issues involves the use of GenAI-powered code assistants, with 84% of security professionals surveyed expressing concerns about potential exposure to unknown or malicious code introduced through these tools.
Nearly all respondents (98%) said they agree security teams need a clearer understanding of how GenAI is employed in development, with 94% stressing the need for better strategies to govern its use in research and development.
Liav Caspi, CTO of Legit Security, the company that sponsored the survey, said when building an application with GenAI-generated code, there are new threats and security risks that are different than with traditional software services.
“Threat modeling must take into account AI security dangers like data exposure, prompt injection, biased responses and data privacy concerns,” he explained.
He acknowledged that AI-generated code tends to be riskier, error-prone, or even malicious.
Caspi said one best practice is to ensure it is getting the same or tighter security testing than human-developed code.
“You have to think about it as if this is coding the organization received from an anonymous contractor,” he said.
GenAI Brings Innovation, Risks
Chris Hatter, COO/CISO at Qwiet AI, said GenAI significantly enhances productivity in app development by accelerating coding processes and automating routine tasks, leading to faster innovation.
“While we embrace these benefits, it’s crucial to address the unique security challenges that AI-generated code introduces,” he said.
From his perspective, the solution lies in implementing strong governance frameworks to assess AI development tools, understanding their training data sources, and establishing robust AppSec programs to evaluate AI-generated code for security vulnerabilities.
“As leveraging AI to write code quickly becomes a necessity to deliver software in a timely fashion and keep pace with competitors, security teams must find a way to implement safeguards that mitigate the risks while keeping pace with accelerated software development,” Hatter said.
He added there is good reason for security professionals to worry about insecure code generated by AI assistants.
“We know that they’re generating insecure code,” he explained. “The underlying models are typically trained on vulnerable open-source and synthetic data; multiple studies prove this.”
Hatter said when using AI assistants for coding, developers must ensure that they clearly understand the model they’re using and where the inference takes place.
“You should ensure that AI-generated code is examined closely for vulnerabilities and use tools capable of detecting hallucinated package recommendations,” he said.
GenAI in DevOps: Oversight Needed
The report also suggested that security teams need better oversight of GenAI usage in development.
Hatter explained data and AI have their lifecycle intertwined with traditional software development lifecycles.
“As enterprise AI use is now here, two things need to happen,” he said.
First, organizations need to treat the AI lifecycle with the same level of urgency as they do the software development lifecycle.
“Data preparation, model selection, and runtime application of AI all need to be secured,” he explained. “There is an entire category of emerging tools in this space.”
The second is that the organization’s SDLC security capabilities need to be adapted to deal with the influx of new code generated by AI.
“Your toolchain needs to be able to scale vulnerability detection and provide developers a head start by delivering high-quality autofix solutions–free of hallucinations,” Hatter said.
Caspi said with reliance on GenAI in software development expected to grow, security teams must get educated on how to secure AI systems and AI-generated code and get visibility into the usage of GenAI in development.
“That’s the first step,” he said. “It is critical to ensure consistent security steps across all code changes, both AI and human.”