Over the past several years, generative AI technologies like large language models (LLMs) and image generators have evolved at a seemingly unstoppable pace. This evolution holds the promise of an unprecedented era of efficiency and creativity: as AI applications proliferate, new use cases are constantly being discovered. There’s no question that we have entered the AI age, and the technology’s rapid advancement and adoption will likely pick up momentum in the coming years.
However, like the broader internet, AI also has a dark side. LLMs have a stubborn tendency to perceive patterns that don’t correspond with reality, leading them to “hallucinate” and present falsehoods as fact. Threat actors are using AI to enhance their attacks, spread misinformation through deepfakes and other manipulated media, and eliminate old barriers to entry such as limited language or technical skills. AI also presents significant cultural issues for companies, as employees are reluctant to admit how much they use it for fear of being viewed as lazy or replaceable.
Despite AI’s remarkable progress and large-scale adoption, trust in the technology remains a significant challenge. At a time when AI is increasingly necessary for companies to improve operational efficiency and protect themselves from emerging cyberthreats, this lack of trust is a major impediment. That is why it has never been more important for companies to place reliability and trust at the center of their AI adoption plans.
The AI Revolution Creates a New Set of Business Risks
AI adoption is surging. According to McKinsey, nearly two-thirds of organizations report that they regularly use generative AI, a proportion that has nearly doubled in just ten months. Three-quarters believe generative AI will cause “significant or disruptive change in their industries,” so it’s no surprise that 67% anticipate greater investments in AI over the next three years.
As AI becomes increasingly ubiquitous, it’s all the more important to recognize the risks it poses — from workforce displacement and disruption to cybersecurity challenges. For example, Microsoft reports that companies should be prepared for an AI-powered “era of phishing schemes” and notes that cybercriminals are already using the technology for surveillance, data collection and social engineering attacks. However, AI can also be used to thwart cyberattacks by protecting data across operational environments, monitoring and controlling user access, and identifying threats.
Although IBM has found that 82% of executives say “secure and trustworthy AI is essential to the success of their business,” fewer than a quarter of generative AI projects are actually being secured. Executives are also concerned about business disruption, a loss of employee creativity and problem-solving, new cyberthreats, AI skills shortages and uncertainty about how to allocate their AI budgets. There are also operational problems with AI deployment, such as a lack of transparency in how deep learning models interpret data and produce answers (what’s sometimes referred to as the “black box” problem).
These are all reminders that trust should be at the heart of any effective AI implementation strategy. CISOs, CTOs and other company leaders must make sure that they’re maximizing transparency and accountability in their adoption of AI.
Don’t Ignore the Human Element of AI
Many of executives’ top concerns about AI adoption are directly related to the health and management of their workforces. Because generative AI is such a versatile technology, employees across many different industries and roles worry that it threatens their jobs. A recent Forrester survey found that 36% of employees fear losing their jobs to AI or automation over the next decade. They have good reason to be concerned: a recent IMF report found that 60% of jobs in advanced economies will be affected by AI, and while many of these jobs will be augmented, some will inevitably be replaced.
Company leaders worry that their workforces don’t have the skills to keep pace with this transformation. For example, a recent Microsoft survey found that 82% of leaders believe employees will need new skills to prepare for the growth of AI. That said, there are encouraging signs that employees are willing to adapt to the AI era. Sixty percent acknowledge that they don’t have the skills they need, while over three-quarters say generative AI will create new learning opportunities, 73% say it will help them be more creative, and 72% believe it will improve the quality of their work. Given this enthusiasm, it’s no wonder that a substantial proportion of employees are using unsanctioned AI tools, a practice that presents a range of security and operational issues.
When it comes to critical issues like cybersecurity and compliance, company leaders must implement AI without losing sight of the human element. Human expertise is essential for evaluating AI outputs, training models with the proper data, developing the right prompts, and figuring out how to put AI-powered solutions and insights to the best possible use. This will increase transparency and ensure that companies are only relying on vetted data — a shift that will help to overcome the trust deficit that makes AI adoption more difficult than it needs to be.
Meeting the Urgent Need for Trusted AI
AI has already become integral to the work of many employees around the world, and this transformation has occurred at a blistering pace. Three-quarters of knowledge workers say they use AI at work, and 46% started using it within the past six months. However, it has become increasingly clear that AI has a trust problem in the workplace. According to a recent Salesforce survey of knowledge workers, the majority of AI users don’t trust the data used to train AI systems. Fifty-nine percent of employees believe AI outputs are biased, 54% think these outputs are inaccurate, and nearly three-quarters think generative AI introduces new security risks.
These concerns present a major obstacle to AI integration. Sixty-eight percent of employees who don’t trust AI training data are hesitant to adopt the technology, while three-quarters believe untrustworthy AI lacks the information necessary to be useful. Workers say that AI data must be accurate, secure and holistic, and the top requirement they cite for using generative AI effectively is human oversight. These findings are mirrored by other research: IBM has found that companies regard a lack of data privacy, trust and transparency as the biggest inhibitors of generative AI adoption, and they blame limited AI skills and expertise for these shortcomings.
It’s vital for companies to ensure that AI is transparent and trustworthy, which means developing and implementing it with robust expert oversight. AI can be a powerful resource that drastically improves everything from a company’s productivity to its cybersecurity posture, but adoption will continue to be impeded if CISOs, CTOs and other company leaders fail to close the trust gap. This is particularly important when it comes to urgent issues like cybersecurity, which are directly connected to data security and privacy.
The AI era is already upon us, and companies are sprinting to adopt the technology to maintain a competitive advantage, improve efficiency and protect themselves from rapidly evolving cyberthreats. This process of adoption and iteration will only succeed if it is built on a foundation of human expertise and trust, which is why companies must actively build that trust by deploying AI applications that are vetted, transparent and secure.