A survey of 800 security decision-makers finds nearly all (92%) are concerned about the use of code generated by large language models (LLMs) within applications. Nevertheless, a full 83% report application developers within their organizations are using these tools, with 57% noting it has become a common practice.

Conducted by Venafi, a provider of a platform for securing machine identities, the survey also finds 63% of respondents have considered banning the use of AI in coding because of security risks, while 72% conceded they have no choice but to allow developers to use AI if their organizations are to remain competitive.

Nearly two-thirds (63%) of respondents said it is impossible to govern the safe use of AI in their organization because they do not have visibility into where AI is being used. Less than half (47%) said their organization has policies in place to ensure the safe use of AI within development environments.

Ultimately, however, more than three quarters (78%) said AI-developed code will lead to a security reckoning, with 59% already losing sleep over the security implications of AI.

It’s not advisable to ban usage of AI coding tools, because application developers will simply opt to use these tools with no oversight whatsoever, says Kevin Bocek, chief innovation officer at Venafi. Rather than creating yet another shadow IT issue, cybersecurity teams would be well-advised to better understand how these tools are actually being employed, he adds. “Both sides need to find common ground,” says Bocek.

In fact, many of the concerns being expressed about these tools stem from a general sense of unease rather than from actual security breaches caused by these tools, notes Bocek.

Reliance on AI to write code is further exacerbating long-standing tensions between application development teams that lack cybersecurity expertise and the teams that are ultimately held accountable for any breach, he says.

At the core of the tension between application development teams and cybersecurity teams is the number of vulnerabilities that find their way into production environments, mainly because scans were either not run or their results simply ignored at the time the code was being developed. If and when those vulnerabilities are discovered, developers have generally moved on to their next project. As a result, they no longer have the context needed to assess and then prioritize remediation efforts.
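
As an illustration of the kind of pipeline gate that keeps those scans from being skipped, the sketch below wraps the pip-audit CLI (an assumption here, not a tool named in the survey) in a short Python script that fails a CI job when known-vulnerable dependencies are found, so findings surface while the original developer still has the context to fix them.

```python
#!/usr/bin/env python3
"""Hypothetical CI gate: fail the build if a dependency scan reports known vulnerabilities.

Assumes a Python project and that the pip-audit CLI is installed in the CI image;
the script and its behavior are illustrative, not a setup described by the survey.
"""
import json
import subprocess
import sys


def run_dependency_scan() -> list[dict]:
    # pip-audit can emit machine-readable JSON; we capture it so the findings
    # can be printed for triage rather than relying on the exit code alone.
    result = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True,
        text=True,
    )
    parsed = json.loads(result.stdout) if result.stdout.strip() else []
    # Output shape differs across pip-audit versions: either a bare list of
    # dependencies or an object with a "dependencies" key.
    return parsed.get("dependencies", []) if isinstance(parsed, dict) else parsed


def main() -> int:
    findings = [dep for dep in run_dependency_scan() if dep.get("vulns")]
    if findings:
        for dep in findings:
            ids = ", ".join(v["id"] for v in dep["vulns"])
            print(f"{dep['name']} {dep['version']}: {ids}", file=sys.stderr)
        return 1  # block the merge instead of deferring the fix to production
    print("No known vulnerable dependencies found.")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```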

LLMs compound that problem because they are typically trained on code of varying quality collected from across the Web. Much of that code, from a cybersecurity perspective, is deeply flawed, so as LLMs generate more code, the odds that more vulnerabilities will inadvertently be included in the generated code increase.

Many of those vulnerabilities can also be found in the tools being used to construct AI models. A full 86% of respondents said they believe open source code encourages speed over best security practices among developers.

Nevertheless, respondents estimated that, on average, 61% of the applications they run incorporate open source code at some level, and a full 90% said they trust code in open source libraries, with 43% saying they have complete faith, even though 75% said it is impossible to verify the security of every line of open source code. A total of 92% noted that, ideally, code signing should be used to ensure open source code can be trusted.
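
As a sketch of what that signing-based trust looks like in practice, the snippet below verifies a detached GPG signature on a downloaded open source release before it is used. The file names are hypothetical, and a real pipeline would also pin the expected signer's key fingerprint rather than trusting whatever happens to be in the local keyring.

```python
"""Minimal sketch: verify a detached GPG signature on a release artifact before use.

Assumes gpg is installed and the publisher's public key has already been imported;
file names below are placeholders, not real project artifacts.
"""
import subprocess
import sys


def verify_release(tarball: str, signature: str) -> bool:
    # gpg exits non-zero when the signature is missing, malformed, or made by
    # a key that is not present in the local keyring, so the return code is the gate.
    result = subprocess.run(
        ["gpg", "--verify", signature, tarball],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    ok = verify_release("library-1.2.3.tar.gz", "library-1.2.3.tar.gz.asc")
    sys.exit(0 if ok else 1)
```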

Hopefully, there will soon be an army of AI agents that verify the security of code written by machines and humans alike. In the meantime, when it comes to relying on LLMs to generate code, it may be advisable to proceed with a little more caution given all the potential risks.
