Like many technological breakthroughs before it, artificial intelligence is changing the world around us. Just as cloud computing revolutionized the way software is developed and deployed, AI is driving a new evolution in software development: it now helps developers write code faster and automates processes wherever possible. As the way we develop software changes, so too must the way we test it, if we hope to create and keep successful practices for software security.

Artificial intelligence has in fact been leveraged in software testing for some time. Nonetheless, many current testing methodologies are ill-equipped for the demands of modern software development, especially now that AI is transforming the development process.

Static Application Security Testing: High Coverage, But Lots of False Positives

Static analysis has used AI, via machine learning (ML) and, more recently, large language models (LLMs), for years in an effort to identify erroneous patterns in source code. This approach to software testing has the advantage of not requiring executable code, but it has significant shortcomings:

  • False Positives: Static analysis over-approximates: it flags code patterns, not runtime behavior, so there is no way to discern whether a flagged state can actually occur during execution. Every candidate must then be triaged manually, which is a time- and resource-intensive task.
  • False Negatives: The same lack of runtime context means that issues which only manifest during execution can be missed entirely.
  • Not Reproducible: Static findings come without a concrete input that triggers the issue, so debugging and understanding them is a slow process.
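To make the false-positive problem concrete, here is a hypothetical Python snippet of the kind a pattern-based static analyzer might flag as a potential division by zero, even though the guard above the division makes that state unreachable at runtime (the function and the flagged pattern are invented for illustration):

```python
def average(values):
    """Return the mean of a list of numbers, or 0.0 for an empty list."""
    if not values:
        return 0.0
    # A pattern-based scanner may flag this division as a possible
    # ZeroDivisionError because len(values) appears in a denominator,
    # even though the guard above makes len(values) == 0 unreachable here.
    return sum(values) / len(values)

print(average([2, 4, 6]))
print(average([]))
```

A human still has to triage this finding manually to confirm it can never fire, which is exactly the cost the bullet list above describes.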

Dynamic (Black-Box) Application Security Testing: Detects Runtime Issues but Low Coverage

Dynamic application security testing is often used alongside static analysis. It involves “attacking” the application under test without knowing what is inside, making it a “black-box” approach: test cases are generated randomly or based on predefined inputs or scenarios. Because it requires no access to source code, it is also a popular technique among attackers. Overall, this approach produces fewer false positives and false negatives, but it still has shortcomings:

  • Hard to Automate: The process requires a lot of manual work.
  • Limited Reproducibility: Once an issue is detected, it can be difficult to locate its root cause without source code.
  • Challenging to Use: Dynamic testing takes place separately from the rest of the development workflow, making it difficult to integrate into a smooth process. It provides delayed feedback about code, making remediation more difficult and time-consuming.
  • Inconclusive Results: Without code coverage, there’s no way to know if 1% or 99% of code was tested.
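As a sketch of what randomly generated black-box inputs look like in practice, the toy fuzzer below throws random byte strings at a parser it knows nothing about and records any crash. Both `parse_record` and its planted bug are invented for illustration; real DAST tools are far more elaborate, but the blindness is the same:

```python
import random

def parse_record(data: bytes) -> int:
    """Toy parser with a hidden bug: it reads byte 3 without checking length."""
    if len(data) == 0:
        return -1
    return data[3]  # IndexError whenever 1 <= len(data) <= 3

def black_box_fuzz(target, rounds: int = 10_000, seed: int = 0):
    """Feed random inputs to `target` with no knowledge of its internals."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:  # keep the crashing input for later triage
            crashes.append((data, repr(exc)))
    return crashes

crashes = black_box_fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

This shallow bug falls out quickly, but a defect hidden behind a specific multi-byte header would almost never be hit by pure randomness, and without coverage data the fuzzer cannot even tell you that it missed it.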

Staying Ahead of Modern Attackers

To address the shortcomings of traditional software testing techniques, and to be better equipped for an age in which AI is leveraged by developers and attackers alike, new testing strategies must be implemented. These strategies should leverage the source code to gain an advantage over those trying to exploit vulnerabilities, and they should use the latest advancements in self-learning AI to keep pace with the speed of development.

Dynamic White-Box Testing + Self-Learning AI

The most effective way to leverage the source code for security is to enhance dynamic white-box testing with self-learning AI. “White-box” refers to having full access to the source code, which enables self-learning algorithms to gather information and automatically generate new test inputs. This approach is like solving a maze with full visibility over all paths. Add self-learning AI, and it offers several key benefits:

  • Unit Test Integration: These tests can be integrated into existing test suites or fully automated to run with each code change.
  • Testing Unknowns: AI-based test generators find issues in places humans never think to look.
  • Actionable Results: Findings come without duplicates or false positives, and each one includes the crashing input and stack trace needed to reproduce it.
  • Code Coverage: Quantifying code coverage allows for continuous refining of test inputs and enables an assessment of which parts of a system have yet to be tested.
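The feedback loop behind this approach can be sketched in a few dozen lines. The toy coverage-guided fuzzer below keeps any input that reaches a new branch and mutates it further, so it learns its way through guarded checks that random black-box input would almost never satisfy. This is a minimal sketch, not a real tool: coverage is reported by the hand-instrumented target itself (the `trace` set), whereas production fuzzers instrument code automatically, and all names here are invented:

```python
import random

def check_token(data: bytes, trace: set) -> None:
    """Toy target that records each branch it reaches in `trace`."""
    if len(data) >= 1 and data[0] == ord("F"):
        trace.add("F")
        if len(data) >= 2 and data[1] == ord("U"):
            trace.add("FU")
            if len(data) >= 3 and data[2] == ord("Z"):
                raise RuntimeError("bug reached")  # deeply guarded defect

def coverage_guided_fuzz(target, rounds: int = 50_000, seed: int = 1):
    """Keep and further mutate any input that covers a new branch."""
    rng = random.Random(seed)
    corpus = [b""]   # inputs known to reach interesting code
    seen = set()     # branches covered so far
    for _ in range(rounds):
        mutant = bytearray(rng.choice(corpus))
        op = rng.randrange(3)  # append, replace, or truncate one byte
        if op == 0 or not mutant:
            mutant.append(rng.randrange(256))
        elif op == 1:
            mutant[rng.randrange(len(mutant))] = rng.randrange(256)
        else:
            del mutant[-1]
        mutant = bytes(mutant)
        trace = set()
        try:
            target(mutant, trace)
        except RuntimeError:
            return mutant  # a reproducible crashing input
        if not trace <= seen:  # new coverage: keep this input for mutation
            seen |= trace
            corpus.append(mutant)
    return None

crash = coverage_guided_fuzz(check_token)
print(f"crashing input: {crash!r}")
```

A pure black-box fuzzer would need roughly one in sixteen million random three-byte inputs to stumble onto this bug; the coverage feedback reduces that to solving one byte at a time, and the `seen` set doubles as the coverage measurement the last bullet describes.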

Software Testing in the Age of AI

AI is on its way to touching every aspect of the software development lifecycle in one way or another. This is speeding up software development, and will continue to, but it also produces new challenges along the way. Fortunately, we can use AI to help solve these challenges as well. Where previous AI-based software testing methods fall short, new strategies can leverage self-learning AI to comb through codebases, uncover security issues, and use those findings to avoid similar mistakes in the first place.

Modern software security testing that leverages self-learning AI and source code access will result in improved testing capabilities and stronger, less vulnerable systems in production. All under watchful human eyes, of course.