
Integrating artificial intelligence (AI) into the business is one thing. Developing the skills within the workforce – among both the technical staff responsible for supporting and managing AI and the workers who simply leverage AI as end-users – to drive optimal use of the technology can be quite another.
This is especially true at the present moment. The surge of interest in AI that followed the introduction of commercial generative AI tools and services like Microsoft Copilot and ChatGPT has left many businesses in a state of frenzy, one in which their use of AI remains immature even as their spending steadily climbs.
Many companies are testing AI use cases and proof-of-concept (PoC) deployments, but they haven't yet fully upskilled their workers to leverage AI effectively and at scale. And that's the challenge.
The AI Upskilling Challenge
Before diving into how to teach workers to perform effectively with AI, let’s look at some of the difficulties that companies face with AI upskilling.
Much of the issue stems from the inherent complexity of AI. In most contexts (setting aside simpler AI use cases, such as predictive analytics models, that some organizations have had in place for years), AI represents a fundamentally new type of technology. Most workers have limited, if any, experience working with AI, and expecting them to understand how to manage and use the technology effectively without any support from the organization is not realistic.
On top of this, effective AI adoption requires process changes, since AI enables new types of automation that affect workflows. When it comes to developing AI tools or services, for example, the process resembles traditional software engineering in many ways, drawing on tools and methods such as CI/CD, version control and agile development methodologies.
However, AI also introduces additional requirements, such as data management, model training and customization, that don't exist in the conventional software development life cycle (SDLC). For this reason, you can't simply shoehorn AI development into existing software development processes; you need to refine those processes to support AI.
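To make the contrast concrete, here's a minimal sketch of that difference in Python. The stage names are purely illustrative (this is not a real CI/CD tool's API); the point is that an AI pipeline layers data and model stages onto the conventional build-test-deploy skeleton.

```python
# A minimal sketch (stage names are illustrative, not a real CI/CD API)
# of how an AI delivery pipeline extends the conventional SDLC with
# data- and model-centric stages.

CONVENTIONAL_PIPELINE = ["build", "unit_test", "integration_test", "deploy"]

AI_PIPELINE = [
    "validate_data",     # check schema, freshness and completeness of training data
    "build",
    "unit_test",
    "train_model",       # produce a model artifact from versioned data
    "evaluate_model",    # gate promotion on accuracy and bias thresholds
    "integration_test",
    "deploy",
    "monitor_model",     # watch for drift and degradation in production
]

def run(pipeline: list[str]) -> None:
    """Walk the pipeline stages in order (stubbed as print statements)."""
    for stage in pipeline:
        print(f"running stage: {stage}")

if __name__ == "__main__":
    run(AI_PIPELINE)
```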
Tips for Streamlining AI Upskilling
Making these types of changes takes time and effort, but with the right AI upskilling strategies, businesses can conquer the challenges.
Because there are two fundamentally distinct sets of workers who engage with AI – technology staff who develop, support and manage AI technology, and non-technical end-users – I’m breaking down the list of upskilling best practices into two components, one for each persona.
Upskilling Technical Staff
In many respects, teaching technical staff to support AI effectively is the greater challenge, because these workers need to understand the intricacies of how AI tools work and mitigate the special challenges the technology poses. The following steps can help.
1. Unify AI Development and Operations
At many businesses, developing AI models is a separate process from supporting the models once they are deployed. Even worse, some organizations have no clear strategy in place for AI support operations at all.
To close this gap, businesses must devise a unified approach to AI development and operations. They can do this by adopting practices like LLMOps, which integrates AI development, deployment and operations in a fashion similar to how DevOps encourages tight integration between software development and support processes.
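As an illustration of what that integration looks like in practice, here's a minimal LLMOps-style sketch in Python: a versioned prompt is scored against a small test set and promoted only if it clears a quality bar, the same gate-before-deploy discipline DevOps applies to code. Every name and threshold here is a hypothetical stand-in, not a real LLMOps tool's API.

```python
# A minimal LLMOps-style sketch: version a prompt, score it against a
# small test set and promote it only if it clears a quality bar.
# fake_llm, promote and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PromptVersion:
    version: str
    template: str  # expects a {question} placeholder

def fake_llm(text: str) -> str:
    """Stand-in for a real model call so the sketch runs offline."""
    return f"Paris is the answer to: {text}"

def evaluate(prompt: PromptVersion, cases: list[tuple[str, str]]) -> float:
    """Fraction of test cases whose expected keyword appears in the output."""
    hits = sum(
        expected.lower() in fake_llm(prompt.template.format(question=q)).lower()
        for q, expected in cases
    )
    return hits / len(cases)

def promote(prompt: PromptVersion, score: float, threshold: float = 0.8) -> bool:
    """Deploy the prompt version only if evaluation clears the threshold."""
    if score >= threshold:
        print(f"promoting {prompt.version} (score={score:.2f})")
        return True
    print(f"holding back {prompt.version} (score={score:.2f})")
    return False

if __name__ == "__main__":
    v2 = PromptVersion("v2", "Answer concisely: {question}")
    cases = [("What is the capital of France?", "Paris")]
    promote(v2, evaluate(v2, cases))
```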
2. Discard Undesired AI
As businesses transition from AI experimentation to production use, many will find that the AI models or other AI tools and services they initially used are not ideal for their needs. Keeping unnecessary AI technology active increases the burden placed on technical teams to maintain it, while also increasing the risk of misuse of data by AI. To avoid this, businesses should inventory the AI solutions they have in place, identify those they no longer need and decommission them.
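A lightweight automated audit can support that assessment. The sketch below, with hypothetical model names and an assumed 90-day cutoff, flags deployed models that haven't been invoked recently as decommissioning candidates:

```python
# A minimal sketch of an AI inventory audit: flag deployed models with no
# recent invocations as decommission candidates. The inventory data and
# the 90-day cutoff are illustrative assumptions.

from datetime import datetime, timedelta

# last-invocation timestamps, as a real audit might pull from usage logs
model_last_used = {
    "legacy-churn-predictor": datetime(2024, 1, 5),
    "support-chatbot": datetime.now(),
}

cutoff = datetime.now() - timedelta(days=90)
stale = [name for name, last_used in model_last_used.items() if last_used < cutoff]

print("decommission candidates:", stale)
```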
3. Separate Sensitive Data
To mitigate the security and data privacy risks associated with AI, it's critical for technical teams to prevent the exposure of sensitive information to AI models or services that shouldn't be able to access it.
Instructing non-technical workers to avoid risky practices like feeding proprietary business data into a third-party AI chatbot is not enough; data management and privacy teams must invest in the automated controls necessary to mitigate the risk of misuse of data by AI.
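Here's what such a control might look like in its simplest form: the Python sketch below scrubs obvious sensitive patterns from text before it is sent onward to a third-party AI service. The patterns and the downstream call are illustrative assumptions; a production control would rely on a dedicated DLP or data classification service rather than a handful of regexes.

```python
import re

# A minimal sketch of an automated control: scrub obvious sensitive
# patterns from text before it reaches a third-party AI service.
# The patterns below are illustrative, not an exhaustive DLP policy.

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def send_to_ai(prompt: str) -> str:
    """Gate every outbound prompt through the redaction filter first."""
    safe_prompt = redact(prompt)
    # call_third_party_model(safe_prompt)  # hypothetical downstream call
    return safe_prompt

if __name__ == "__main__":
    print(send_to_ai("Contact jane.doe@example.com, SSN 123-45-6789."))
```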
4. Manage the Ethical Impact of AI
AI services can create a number of ethical challenges, such as the risk that models will generate biased content due to biases within their training data or parameters. This is a challenge that businesses often overlook when adopting AI, but one that is crucial to address if they want to use AI effectively and at scale.
To this end, technical teams must learn how to evaluate the results produced by AI models, check for bias and, when necessary, update and redeploy models to correct the issue.
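One common, easily automated evaluation is a demographic parity check: comparing the rate of positive model outcomes across groups and flagging gaps that exceed a tolerance. The Python sketch below is a minimal, self-contained version of that check; the sample records and the 0.2 tolerance are illustrative assumptions.

```python
# A minimal sketch of a demographic parity check: compare the rate of
# positive model outcomes across groups and flag gaps above a tolerance.

from collections import defaultdict

def positive_rates(records: list[tuple[str, int]]) -> dict[str, float]:
    """records: (group, outcome) pairs, where outcome 1 = positive prediction."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    sample = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
    rates = positive_rates(sample)
    gap = parity_gap(rates)
    print(rates, f"gap={gap:.2f}")
    if gap > 0.2:  # illustrative tolerance
        print("WARNING: parity gap exceeds threshold; review the model")
```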
5. Invest in Product Management
In the context of AI development especially, careful product management is critical given the central role played by user feedback in driving processes like fine-tuning of models. For this reason, businesses must ensure they invest adequately in product management experts who can help guide the development of models in ways that best align with business needs.
AI Upskilling for End-Users
Teaching end-users to make optimal use of AI is simpler overall, especially when technical teams do their jobs well and place the right AI solutions in the hands of the users. Still, non-technical employees can be upskilled in certain key areas as well.
1. Establish Permissible AI Use Guidelines
The organization should establish clear policies defining which AI tools and services end-users can engage with, and under which conditions.
For example, end-users may need to be educated about the risks of providing proprietary business data to third-party AI service vendors that the business does not trust to manage that data securely. In some cases, the business may choose to prohibit the use of certain AI solutions altogether, and end-users need to know what the rules are.
2. Teach Workers to Manage AI Output
Just as feeding sensitive data into AI poses risk, failing to manage output properly can also be risky. End-users should be trained to understand, for example, the implications of storing AI-generated output in insecure locations if the output contains sensitive data.
They should also understand the ethical challenges related to AI bias that I noted above; technical teams can help to mitigate the impact of this bias, but they can’t prevent it entirely, so end-users need to know how to identify AI output that may reflect bias.
3. Help Staff Discover AI Solutions
A final basic, but easily overlooked, step in teaching end-users to leverage AI optimally is ensuring they know where to find the AI solutions available to them. This can be especially challenging in large enterprises where employees may not know where to access the models or services that the company has deployed. Without that insight, employees may miss out on the productivity opportunities that AI provides. They may also turn to unsanctioned third-party AI tools.
Creating an inventory of approved solutions can help workers find the tools they need. Even better, consider using AI to deploy chatbots that can guide users toward approved tools and services proactively.
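Even a very simple inventory goes a long way. The sketch below, with hypothetical tool names and URLs, shows the kind of keyword lookup that an internal catalog or a guidance chatbot could be built around:

```python
# A minimal sketch of an approved-AI-tool inventory with naive keyword
# matching, the kind of lookup a guidance chatbot could sit on top of.
# Tool names, tags and URLs are hypothetical examples.

APPROVED_TOOLS = [
    {"name": "Internal Copilot", "tags": {"code", "development"}, "url": "https://intranet.example/copilot"},
    {"name": "DocSummarizer", "tags": {"documents", "summaries"}, "url": "https://intranet.example/docsum"},
]

def find_tools(query: str) -> list[dict]:
    """Return approved tools whose tags overlap the words in the query."""
    words = set(query.lower().split())
    return [tool for tool in APPROVED_TOOLS if tool["tags"] & words]

if __name__ == "__main__":
    for tool in find_tools("help me produce summaries of long documents"):
        print(f"{tool['name']} -> {tool['url']}")
```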
Businesses can build and deploy AI solutions all day long, but until they provide their workers with the skills necessary to manage and use those solutions effectively, they are at risk of investing heavily in AI only to receive minimal returns. On the other hand, when organizations invest systematically in upskilling their workforces for the AI era, they position themselves to take full advantage of AI technology, while also keeping AI’s risks in check.