While AI-augmented software testing tools are widely adopted, the need for human oversight remains strong, according to a Leapwork survey.

The study, which gathered insights from 400 respondents across the U.S. and U.K., found 79% of organizations have integrated AI into their testing processes.

However, nearly three-quarters (74%) of respondents said they believe human validation will continue to be essential in software testing for the foreseeable future.

The survey also highlighted a notable split between technical teams and C-Suite executives regarding this need.

While 80% of technical leaders, such as IT directors, insist on human involvement, only 68% of C-Suite executives share this view.

Overall trust in AI-powered testing remains high across the board, with 68% of respondents expressing confidence in these tools.

More technical leaders (72%) than executives (64%) believe in the effectiveness of AI-augmented testing.

Evolution, Limitations of AI Testing

Frank Moyer, CTO at Kobiton, said while AI-powered testing tools have advanced significantly, there are still limitations that drive the need for human validation.

“One key challenge is that AI lacks the deep understanding of the nuances specific to your product and the unique problems it solves within its domain,” he said.

Moyer explained this context is critical in testing, especially for complex or highly specialized applications.

“I’ve personally gained confidence in AI for the things I hadn’t even thought of,” he added. “It has significantly improved both the quality and speed of what my team and I deliver, making a measurable difference in our overall productivity.”

Moyer said to overcome the remaining confidence gaps, transparency is key.

“Techniques like explainable AI, combined with AI’s ability to improve its responses based on what it learns about me and my business, will go a long way in boosting trust and reliability,” he said.

Scott Wheeler, cloud practice lead at Asperitas, said he sees several reasons for developers’ high trust in AI software testing.

“Increased testing coverage and scalability are often cited as AI testing’s most significant benefits, as manually writing test code for all test cases rarely happens due to time and cost constraints,” he said.

Removing Cost, Time Barriers

Wheeler noted AI removes the cost and time barriers to providing complete test coverage.

“AI testing can also provide more complete test coverage due to the increased number of valid and invalid inputs it can process,” he added.

Continuous improvement and self-learning are other benefits of AI testing software.

“AI can learn from previous test results, user behavior, and patterns in the codebase to improve future testing,” Wheeler said.

The survey also touched on the impact of AI on job creation: Nearly 45% of respondents said AI testing tools have created new roles requiring AI skills, while 43% noted a reduction in roles due to increased efficiency.

The health care sector showed the highest rate of job creation, while manufacturing reported a significant reduction in positions.

Wheeler said an increase in AI reasoning capabilities would reduce the need for human validation, particularly in testing lower-risk software applications.

“Currently, non-AI fully autonomous testing processes exist for simple test cases, so I believe the question is, how much of our current testing can AI take over and in what timeframe?” he said. “I think the answer is that AI will handle most software testing in the next 5 years.”

He added that in five to 10 years, AI would handle nearly all aspects of software testing autonomously, even in high-stakes, regulated environments.

Moyer said “human-in-the-loop validation” would remain important for the foreseeable future, especially as a safeguard for ensuring AI outputs align with real-world expectations.

However, he said, as AI systems become more sophisticated and can be retrained based on human-in-the-loop annotations, the need for human intervention will gradually decrease.

“The number of issues that meet the threshold for requiring human oversight will diminish as AI learns and adapts,” Moyer said. “In the not-so-distant future, we’ll find ourselves ‘reminding’ the AI far less frequently, allowing it to handle more complex tasks autonomously.”
