Synopsis: In this Techstrong AI video interview, Stytch CEO Reed McGinley-Stempel explains how artificial intelligence (AI) agents are using application programming interfaces (APIs) to access content despite controls website owners have put in place.

In this Techstrong AI interview, Mike Vizard speaks with Reed McGinley-Stempel, CEO of Stytch, about the growing concerns surrounding AI agents and their ability to bypass traditional web scraping restrictions. While many websites use signals such as robots.txt to indicate that their data shouldn’t be scraped, not all large language model (LLM) providers honor these requests. McGinley-Stempel points out that some major platforms, such as Twitter and Reddit, have begun locking content behind authentication to protect proprietary data and prevent unauthorized scraping—especially as they develop their own AI-driven services. This shift is prompting even smaller businesses to reconsider how accessible their data should be in an AI-dominated web environment.
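The opt-out signal referenced above is typically a robots.txt file, which declares which crawlers may fetch which paths—though, as noted, compliance is voluntary. A minimal sketch using Python's standard-library `urllib.robotparser` shows how a well-behaved crawler checks the rules before fetching (the robots.txt content and URLs here are hypothetical; GPTBot is the user-agent string OpenAI publishes for its crawler):

```python
from urllib import robotparser

# Hypothetical robots.txt: block OpenAI's GPTBot crawler entirely,
# allow everyone else. Real files are served at /robots.txt.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler consults can_fetch() before each request.
print(rp.can_fetch("GPTBot", "https://example.com/articles/1"))       # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
```

The key point of the interview is that this mechanism is purely advisory: nothing stops a non-compliant agent from skipping the `can_fetch()` check, which is why platforms are moving content behind authentication instead.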

The conversation shifts to AI agents—automated tools that can carry out tasks on behalf of users, such as logging into websites or gathering data—raising new questions about data access and the potential for abuse. McGinley-Stempel explains that these agents, when combined with tools like OpenAI’s browser or operator functions, effectively extend a user’s reach online, enabling both productive automation and misuse, such as creating fake accounts or launching credential-stuffing attacks. This agent-driven activity may also disrupt business models that depend on user interface engagement, such as OpenTable, by removing the user from the transaction flow.

To combat these risks, organizations are beginning to segment their data, tighten authentication requirements, and deploy stronger bot detection. Strategic decisions about which content to expose to AI and which to guard typically come from a combination of business and security teams. While some companies welcome AI indexing for visibility in generative search tools, others—The New York Times among them—are pursuing legal action to recover damages or restrict past model training. As AI agent usage increases, McGinley-Stempel anticipates a mix of technical and legal responses, with an eventual need for differentiated governance models based on agent privilege, task sensitivity, and business priorities.