
Looking back on the last two decades, I can recall more than a handful of technologies touted as game-changers. It’s always good press to call something a game-changer, even if, in the end, the game mostly stays the same.

With the sudden launch of generative AI tools like ChatGPT, the power of artificial intelligence is now at the fingertips of the average user. As is tradition, the bandwagon started rolling and the horns blared about this new GAME-CHANGING technology. But AI and mass-scale machine learning were already in place. What is it about the generative, interactive part that has people going nuts? I’m here to say that we should take a breath and understand what it is we’re talking about when we see AI slapped onto a new feature launch.

The Information Technologist Challenge

It’s important to acknowledge we are navigating, yet again, a technological evolution that challenges us to adapt to its new world.

As an IT professional, I was always excited to bring to my boss whatever new tool or automation made things easier. I get nostalgic remembering the days when our software deployment plans were mass duplicating CD-ROMs and handwriting serial numbers on the sleeves… now, we click a few buttons and automation tooling just handles it.

At my current job, I’m fortunate to work with technologists who are doing their part to invent the future. Now I’m the one assuming the role of the IT person who asks, “What?” when presented with a new way of doing something that didn’t seem broken in the first place.

When it comes to large language models and generative AI, I do have a few ideas on how we can adopt these new tools while still sticking to our goals of providing the highest quality IT services.

Lean on an LLM as You Would an Efficient Research Assistant

Using LLMs to automate helpdesk tasks can be scary, mostly because the typical IT interaction occurs in the context of a crisis. Big or small, an incident or service request ticket comes with the understanding that help is needed. That help must be efficient and effective – and in most cases fast.

So, it’s a scary thought to hand your ticket system over to a chatbot that behaves like an assertive but unpredictable employee. The bot wins with some of the easy stuff – turn it off and on again! But it can also fail hard when it very confidently does something completely wrong.

One way to leverage AI safely and effectively in your helpdesk is to implement it as an “assistant,” or “coach,” to your existing helpdesk team – but not the expert or a replacement for the team.

Many of you know the term Google-fu (aka search-fu). The term alludes to the skill involved in asking the right questions and sifting through the nonsense to find the right answer. I appreciate the phrase because it also speaks to the truth that we in IT don’t KNOW everything. We just have – or build – the skills to figure stuff out.

I like to encourage practitioners to implement LLMs as the new Google-fu master. The LLM has done the research, collected the arguments, and logged the “voted best answer” posts on your favorite tech support sites. By leveraging this more efficient workflow, your team can ask the appropriate troubleshooting questions and get the most effective known answer far quicker.
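To make the "assistant, not expert" framing concrete, here is a minimal sketch of how a ticket might be packaged into a coaching prompt for an LLM. The ticket fields and the `build_prompt` helper are illustrative assumptions, not part of any specific ticketing product or LLM API.

```python
# Sketch of framing an LLM as a research assistant for the technician.
# The ticket structure and helper below are hypothetical examples.

def build_prompt(ticket: dict) -> str:
    """Frame the LLM as a coach for the helpdesk tech, not the final authority."""
    return (
        "You are a research assistant for an IT helpdesk technician.\n"
        "Suggest likely causes and the next diagnostic question to ask.\n"
        "Do NOT instruct the user to make changes directly.\n\n"
        f"Category: {ticket['category']}\n"
        f"Summary: {ticket['summary']}\n"
        f"Steps already tried: {', '.join(ticket['tried'])}"
    )

ticket = {
    "category": "Networking",
    "summary": "Laptop drops Wi-Fi every few minutes",
    "tried": ["reboot", "forget and rejoin network"],
}
prompt = build_prompt(ticket)
print(prompt)
```

The point of the system framing is that the model surfaces known answers and next questions, while the technician stays in the loop to decide what actually gets done.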

Automate the Escalation

It’s wild, but you could argue LLMs effectively automate the triage that normally precedes escalating a ticket to the wizards in L3. The back-and-forth of “Did you try X?” between your experts and users is handled first by a judgment-free, cost-effective assistant, so only the genuinely hard problems reach the experts.

Just like you and me, the LLM will get it wrong from time to time – after all, it gets its intel from us. But we remember there’s a delete button in this chat, right? Click it, and the LLM won’t remember the mistake we made moments ago. So, we can all move past the hiccup and focus on solving the problem at hand.

Sometimes, however, you don’t know where to start. But remember, you can always approach the chat window without fear of annoyance, shame and judgment. This may sound silly to some, but in my experience, it’s extremely valuable to the learning experience of solving a particular problem.

Being able to say, “That didn’t work, I got this error instead,” can be frustrating IRL. But with generative AI at your fingertips, you’ll immediately get a suggestion for the next thing to try – and the potential fix will come without any side-eye or attitude. Woohoo for AI – the best co-worker and coach, ever.

Use Generative AI as a Bridge Over Knowledge Gaps

If you’re not willing to implement LLMs in the standard workflow of troubleshooting, escalation, or review, the safest approach I can think of would be to use the tech to accelerate your team’s training and development.

By using AI to translate PowerShell code into Bash or Python, you not only get a head start with 80% of the script drafted for you, but also a clear explanation of the conversion. This is a significant time saver and a bridge across skills gaps that budget constraints previously made impractical to close. Think about it: you’re no longer required to bring a Mac or Linux specialist on board to manage the handful of machines and servers in your office running those systems.

Instead, you can equip your Windows administrator to apply their knowledge across any platform. This not only saves time but also promotes internal growth. Problem solved.
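As a toy example of the kind of translation involved: a Windows admin checking disk space might run a PowerShell one-liner, and an LLM can draft the cross-platform Python equivalent using only the standard library. The prompt and the exact draft will vary; this is one plausible result, not a definitive output.

```python
# A PowerShell admin might normally run something like:
#   Get-PSDrive C | Select-Object Used, Free
# A portable Python draft of the same task, stdlib only:

import shutil

def disk_report(path: str = "/") -> dict:
    """Return used/free space (in GiB) for the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    gib = 1024 ** 3
    return {
        "used_gib": round(usage.used / gib, 2),
        "free_gib": round(usage.free / gib, 2),
    }

report = disk_report("/")
print(report)
```

The draft runs unchanged on Windows, macOS and Linux, which is exactly the bridge the Windows administrator needs.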

Generative AI: The Invisible Game Changer

Generative AI has already changed the game by becoming an efficient research assistant, automating escalation processes and bridging knowledge gaps. And the key to harnessing its potential lies in understanding its capabilities – and limitations.

Integrating LLMs and other such models into your systems just needs to be done in a way that best supports your organization’s goals. Take it slow; navigating this new tech is supposed to be the fun part! As it learns, we learn, and as we learn, it learns. No doubt we’ll find several ways to grow with generative AI as it subtly, yet significantly, changes the game.

The key will always be remembering to stay calm and carry on.
