
Software architecture begins with understanding the context into which an application will fit. Who will use the application, and how? What are the financial goals behind it? What kind of performance is necessary, and what will it cost to run it? Are there any regulatory issues? What are the spoken requirements, and, perhaps more importantly, what are the unspoken requirements? What features are must-haves, which are nice-to-haves (perhaps in version 2.0), and which are YAGNI (“you ain’t gonna need it”)? What are the software development team’s capabilities: is this a simple project that they can do quickly, or will it require significant new expertise? Architects spend their time examining these questions (and many more) throughout the project’s life, not just at the start.

Questions like these are why I don’t believe generative AI will change the substance of software architecture. That’s not to say it won’t help: it’s good at transcribing and summarizing meetings, it’s good at helping build presentations, and it might even be good at suggesting what questions to ask. However, I haven’t yet seen an AI that could build the big picture of what a client needs, which may differ from what the client wants. Nor have I seen an AI that could discover the unspoken requirements, which are often more important than the spoken ones. Finally, I’ve yet to see one that could navigate the internal politics of an organization (who’s really calling the shots?), or do a good job of determining which features belong in the Minimum Viable Product and which go on the YAGNI list. AI will be another tool in the software architect’s toolbox, but it won’t change the substance of what architects do: understanding all these human factors, making the necessary compromises, and designing what Neal Ford calls the “least worst” system possible. That’s the hard part of the job.

AI Will Make Changes

Although AI won’t change the practice of software architecture, it will make a big change in what software architects architect. The first generation of AI-enabled applications will be similar to what we have now: generative AI integrated into word processing and spreadsheet applications (as Microsoft and Google are doing), and tools for AI-assisted programming (like GitHub Copilot and others). But before long, we will be building different kinds of software.

We will be building agents that can take a goal from a user, develop a plan to accomplish it, and then execute that plan using a number of AI-based services. I can imagine an AI that plans a difficult surgery and then (under a doctor’s supervision) controls a network of intelligent robotic instruments to perform it. How do we design those networks? What new patterns will we discover? When one AI in a network generates an incorrect answer, how do you detect the error and get the whole system back on track? These are questions software architects will have to answer in the near future; some are looking at those questions already.
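A plan-and-execute loop of the kind described above might be sketched like this. Everything here is hypothetical: the `PlanStep` record, the service registry, and the `validate` check are minimal illustrations of the pattern, not any real agent framework, and the "services" are deterministic stand-ins for what would be AI-backed calls:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlanStep:
    """One step of a plan: which service to call, and with what input."""
    service: str
    payload: str

def run_plan(plan: list[PlanStep],
             services: dict[str, Callable[[str], str]],
             validate: Callable[[str], bool]) -> list[str]:
    """Execute each step in order, checking every output before moving on."""
    results = []
    for step in plan:
        output = services[step.service](step.payload)
        if not validate(output):
            # One service in the network produced a bad answer:
            # surface the error instead of silently propagating it.
            raise ValueError(f"step {step.service!r} failed validation")
        results.append(output)
    return results

# Toy usage: deterministic stand-ins where AI services would go.
services = {
    "summarize": lambda text: text[:10],
    "translate": lambda text: text.upper(),
}
plan = [PlanStep("summarize", "a long document..."), PlanStep("translate", "hello")]
print(run_plan(plan, services, validate=lambda out: len(out) > 0))
```

The interesting architectural questions live in `validate`: detecting that an upstream AI's answer is wrong is much harder than checking that it is non-empty.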

AI Is Different

AI is fundamentally different from the software we’ve been writing for the past 70 or so years. The output of an AI model (whether “generative” or not) is stochastic, not deterministic. We’ve all seen it answer simple questions incorrectly and “make things up” (hallucinate), earnestly and convincingly making statements that have no relationship to fact.


I recently asked one of the well-known AI chatbots to tell me about myself; among other things, it told me that I founded a multi-billion-dollar networking company in the 1990s. I wish that were true, but no. Designing systems that can detect incorrect output is part of the architect’s job. Likewise, anyone who builds a generative AI application has to realize that some users will try to get the application to generate hate speech. How do we deal with that? Architects must play a role in designing guardrails that prevent an AI model from generating harmful output, whether that output is merely a factually incorrect answer or hate speech.
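One way to picture where such a guardrail sits is as a wrapper around the model call that inspects output before it reaches the user. This is only an illustrative sketch: production guardrails use trained classifiers and policy engines, not keyword lists, and `guarded_generate` and `BLOCKED_TERMS` are hypothetical names:

```python
# Hypothetical guardrail placement: check the model's output on the way out.
# A keyword blocklist stands in for what would really be a classifier model.
BLOCKED_TERMS = {"badword"}  # placeholder terms

def guarded_generate(model, prompt: str) -> str:
    output = model(prompt)
    if any(term in output.lower() for term in BLOCKED_TERMS):
        # Refuse rather than return harmful content.
        return "Sorry, I can't respond to that."
    return output

# Toy usage with a deterministic stand-in for a model.
echo_model = lambda p: p
print(guarded_generate(echo_model, "hello world"))
print(guarded_generate(echo_model, "say a BADWORD now"))
```

The design question the sketch raises is architectural: should the check run on input, on output, or both, and what should the system do when it fires?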

An Architectural Question

Architects will also play a role in evaluating an AI’s performance. Evals (structured evaluations of a model’s output against test cases) determine whether the application’s performance is acceptable. But what does “acceptable” mean in the application’s context? That’s an architectural question. In an autonomous vehicle, misidentifying a pedestrian isn’t acceptable; picking a suboptimal route is tolerable. In a recommendation engine, poor recommendations aren’t a problem as long as a reasonable number are good.

What’s “reasonable”? That’s an architectural question. Evals also give us our first glimpses of what running the application in production will be like. Is the performance acceptable? Is the cost of running it acceptable? Would it perform better with another model? On O’Reilly’s Generative AI podcast, Andrew Ng noted that it’s discouraging to spend a week writing an application and a month doing evals, and that the time (and expense) of evaluation may make it difficult for developers to experiment with different models. We don’t yet know how this will play out, but we expect that architects will be responsible for saying, “This isn’t ready yet; we can do better.”
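The eval loop itself can be tiny; the hard, architectural part is the judging criterion and the acceptability threshold. A minimal sketch, with hypothetical names throughout and a deterministic stand-in where a model would go:

```python
# Minimal eval-harness sketch: run a model over labeled cases and report
# the fraction of outputs the judge accepts. The judge function and the
# threshold encode the architectural decision of what "acceptable" means.

def run_evals(model, cases, judge) -> float:
    passed = sum(1 for prompt, expected in cases if judge(model(prompt), expected))
    return passed / len(cases)

# Toy usage: a "model" that reverses its input, judged by exact match.
model = lambda p: p[::-1]
cases = [("abc", "cba"), ("xy", "yx"), ("q", "z")]
threshold = 0.8  # acceptability bar: an architectural choice, not a technical one
score = run_evals(model, cases, judge=lambda out, exp: out == exp)
print(f"pass rate: {score:.2f}", "ready" if score >= threshold else "not ready yet")
```

For a recommendation engine the judge might accept near-misses; for an autonomous vehicle, some case categories would need a pass rate of 1.0 regardless of the overall score.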

New Challenges

How will AI change software architecture? The substance of the job will be unchanged, but AI will give software architects new challenges. New challenges, though, are nothing new; they’re what keep any job fulfilling and interesting.
