
The “mobile-first” era was a defining moment for SaaS companies, pushing them to reorient their strategies around mobile apps and responsive websites. Success required a new approach to user design, service architecture and roadmap priorities. Those that pulled this off became wildly successful, like Adobe, DocuSign and HubSpot.

Today, a new wave is upon us: the rise of generative AI in applications. As customers’ expectations pivot rapidly toward this breakthrough technology, many SaaS-based applications are vulnerable. And this is not a replay of the mobile-first era. As LLMs (large language models) become rapidly commoditized and easy to implement, there is no need for incumbents like Microsoft, Google, Adobe and Amazon to rip out their existing infrastructures and cannibalize their businesses. Instead of the usual scenario where startups disrupt the market, it’s the incumbents that have the edge – so long as they pursue the right strategies, and so far, they have.

LLM Panic for SaaS Companies

For SaaS companies, this presents a unique challenge. It’s an LLM panic. The playbook that worked in the mobile-first era will likely fall short. Simply adding LLM technology will not be enough to stand out and usher in disruptive change against the incumbents.

SaaS companies will need to combine buying existing platforms with fine-tuning them for the unique needs of their customers – a “build and buy” decision. They will also need to rethink the UI as well as the approach to monetization.

A fresh, innovative strategy is imperative to navigate this new terrain so as to build a lasting business.

The Generative AI Platform Shift

Since the early days of computing, the ultimate aspiration for user interfaces has been the seamless understanding and generation of natural language. But of course, real-world UIs were much less inspiring: menus, buttons and sliders on a screen.

Yet LLMs have changed the game. Interacting with them feels more like chatting with a person: you can ask them anything, and they often come up with a useful answer. LLMs can understand context, grasp nuances, and even exhibit humor or empathy. This is not just about processing information. It’s about making real connections – and this can be transformative for SaaS applications, whether in customer service, prospect outreach or collaboration among employees.

Generative AI is still in its nascent stages, but the technology continues to improve at a rapid pace. It is at the heart of much academic research from data scientists, and VCs and strategic investors are making heavy investments. So far this year, more than one funding dollar in four has gone to AI startups, according to Crunchbase.

Generative AI is likely to impact all business sectors. Consider a study from McKinsey. The findings suggest that 60% to 70% of tasks performed by employees are ripe for automation. Furthermore, the economic implications are monumental, with generative AI projected to inject between $2.6 trillion and $4.4 trillion in annual economic value. McKinsey states that the biggest impact will be on customer operations, marketing, sales, software engineering and R&D. These areas are foundational pillars for SaaS applications.

However, generative AI goes beyond generic models. LLMs can be meticulously refined for specific domains, resulting in operational systems like chatbots and actionbots that integrate seamlessly into existing environments. By tailoring these LLMs to address the needs of the “last mile,” businesses can achieve improved accuracy, minimize errors and foster contextualized interactions.

Red Ocean

The landscape of generative AI is undergoing a rapid transformation, moving at a pace that’s both exhilarating and challenging for SaaS players. Yet the playbook is actually in line with what’s typical for a platform shift – albeit with a compressed timeline. The early beneficiaries are infrastructure pioneers, like CoreWeave, platform developers such as OpenAI, strategic consultants like Accenture, and specialized SaaS solution creators like Jasper.

However, the tides are beginning to shift, especially within the SaaS ecosystem. Google, Microsoft and Adobe are not just observing from the sidelines. They are making aggressive moves, infusing generative AI capabilities into expansive platforms that cater to billions of users. OpenAI, too, is broadening its offerings with the introduction of an enterprise-grade version of ChatGPT.

One of the driving forces behind this rapid integration is the accessibility and simplicity of implementing generative AI. With an API, a developer can spin up an LLM-based application with 20 to 30 lines of Python code. It’s even easier when using a framework like LangChain.
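To make this concrete, here is a minimal sketch of what such an integration can look like, assuming an OpenAI-style chat-completions endpoint. The model name, endpoint URL and system prompt are illustrative assumptions, not a prescription.

```python
# Minimal sketch of an LLM-backed helper, assuming an OpenAI-compatible
# chat-completions API. Model name and system prompt are illustrative.
import json
import os
import urllib.request

def build_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble a chat-completion request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful SaaS assistant."},
            {"role": "user", "content": prompt},
        ],
    }

def ask_llm(prompt: str) -> str:
    """Send the request; requires the OPENAI_API_KEY env var to be set."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Frameworks like LangChain wrap this same request-response loop in higher-level abstractions, which is why the integration cost stays so low.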

This is in stark contrast to previous platform shifts. For example, the transition from on-premises software to the cloud was a nightmare for incumbents. They had to make significant investments in new architectures and infrastructure while continuing to support existing applications. There were cultural issues too, with frequent resistance and skepticism toward immature technologies. Then there was the disruption of new subscription-based business models.

Given these challenges, it is no surprise that there emerged breakout companies that capitalized on the cloud opportunity.

But in the generative AI world, the use of LLMs is much less disruptive. It’s an overlay on existing infrastructures. This means that the incumbents can reinvigorate their franchises quickly. In fact, companies like Salesforce, ServiceNow and Microsoft are leveraging generative AI as an opportunity for offering premium pricing on new services.

The New Playbook

For SaaS companies, the temptation of developing their own LLM is strong. The logic seems straightforward: This is a sure-fire way to stand out from the crowd, right?

A big part of this strategy is to leverage an open-source model and fine-tune it, which can lower costs and speed time to market.

Yet this ignores some important pitfalls. First, fine-tuning LLMs is far from easy given the complexity of the underlying algorithms. The resulting models are also expensive and difficult to maintain and operationalize.

Moreover, even the most advanced open-source LLMs often don’t measure up to the capabilities of models developed by giants like OpenAI and Google. These industry leaders have the advantage of dedicated data scientist teams, expansive infrastructure, vast user bases, and access to huge troves of data.

History further cautions against attempting to reinvent the wheel. Look at the transition from DOS to Windows. Various companies ventured into creating their own GUIs, often to their detriment. Similarly, during the cloud revolution, some firms invested heavily in building data centers instead of leveraging services like AWS, often incurring significant losses and losing their competitive edge.

Finally, there is a practical reason for not building an LLM: The severe GPU shortage. Many companies are on waitlists to get Nvidia chips. Then there are the hefty costs, which can make it difficult to justify custom AI projects.

When it comes to generative AI, it’s wiser for companies to align with established, best-of-breed LLMs. Users have grown accustomed to the caliber of outputs from models like ChatGPT. Any deviation in quality could lead to user disillusionment, posing a competitive risk.

But there needs to be adaptability. The AI landscape is dynamic, and today’s leading model could be overshadowed tomorrow. Rumors suggest that Google’s next-generation model, Gemini, has multimodal capabilities trained on data from platforms like YouTube.

The silver lining for SaaS startups lies in identifying and capitalizing on specific verticals and niches. Enterprises have troves of data – logs, emails, transcripts, IoT streams and so on – and often leverage only a small portion of it effectively. But an LLM grounded in this content can provide real value, curated for their business needs and applications.
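The grounding step can be sketched as a simple retrieval-augmented prompt. This toy version ranks documents by naive keyword overlap; a production system would use embeddings and a vector store, and the function names here are illustrative assumptions.

```python
# Toy sketch of grounding an LLM prompt in a company's own documents
# (retrieval-augmented generation). Scoring is naive keyword overlap,
# standing in for embedding similarity in a real system.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by shared-word count with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    """Prepend the most relevant documents as context for the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because the model answers from retrieved company content rather than its generic training data, the responses stay specific to the business.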

For example, suppose you are building an application for the insurance industry. It would not only interact effortlessly with users but also answer complex questions about products and services. The system could suggest cross-sell opportunities, say for adding new riders to a policy. Then actionbots could act as intelligent agents to carry out tasks, integrated into the CRMs, ERPs and billing systems. The result would be a comprehensive domain-specific AI copilot.

But there will also be a need to rethink an application’s UI, similar to what happened in the mobile-first era. Traditional linear, structured workflows will often get in the way. Instead, an LLM-based SaaS application may feel more like a casual conversation: the generative AI anticipates needs and carries out actions, without the need for configuration or preference settings. These applications will also become multimodal, with images, voice and video.

Finally, LLM applications will need to be monetizable, and this should not be delayed. Customers will pay for concrete ROI, such as improved productivity, lower operating expenses or better ESAT or CSAT scores. Microsoft has already shown how this strategy can work with its GitHub Copilot platform, which charges a monthly per-user fee. Within the first year of its launch, it attracted over 1 million paid users.
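One way to frame such per-seat pricing is a back-of-the-envelope ROI check: the value of time saved must clearly exceed the subscription cost. All numbers below are illustrative assumptions, not GitHub Copilot's actual figures.

```python
# Back-of-the-envelope ROI check for a per-seat AI feature.
# All inputs are illustrative assumptions, not market data.

def monthly_roi(seats: int, price_per_seat: float,
                hours_saved_per_seat: float, hourly_cost: float) -> float:
    """Monthly value of time saved minus the subscription cost."""
    value = seats * hours_saved_per_seat * hourly_cost
    cost = seats * price_per_seat
    return value - cost

# e.g. 100 seats at $19/month, each saving 2 hours at a $60 loaded rate:
# 100*2*60 - 100*19 = 12000 - 1900 = 10100 dollars of net monthly value
```

When the saved-hours side of this calculation is visibly positive, premium per-seat pricing becomes an easy sell.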

The GenAI Evolution

The shift towards generative AI represents a significant evolution in the SaaS landscape. However, the rapid commoditization of AI technologies means that simply building an LLM is not a sustainable strategy for SaaS startups. Instead, the focus should be on leveraging these advanced models to create intuitive, conversational and multimodal user interfaces that redefine user interactions and expectations.

The key differentiator for SaaS companies will be their ability to innovate and offer unparalleled user experiences. In doing so, SaaS enterprises will be able to ride the wave of this technological trend.