Mattermost, a secure collaboration platform for technical teams, recently introduced a self-hosted, open-source solution enabling AI-augmentation of mission-critical operations while allowing organizations to maintain full data control.
Aimed at giving defense, government and technology enterprises the power to leverage generative AI to enhance or replace operational workflows, the platform emphasizes stringent data control and independence from vendor-specific platforms.
The framework, which represents the initial release of a series of reference architectures that Mattermost plans to offer, enables collaboration through chat-based communication across web, desktop and mobile platforms. Users can engage with AI bots powered by interchangeable open-source AI models known as large language models (LLMs).
The system grants full control to customers within their private networks or data centers, even allowing deployment on air-gapped networks isolated from the internet.
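Mattermost has not published sample code for such a setup, but a minimal sketch of what air-gapped inference looks like with the open-source Hugging Face transformers library might resemble the following. The model directory path is a placeholder; on an isolated network, the weights would be copied in by hand rather than downloaded:

```python
# Minimal sketch: loading an LLM entirely from local disk, as on an
# air-gapped network. MODEL_DIR is a hypothetical path pre-populated
# offline (e.g., from removable media); the host never touches the internet.
import os

# Tell the Hugging Face libraries never to attempt a network call.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/llm"  # hypothetical local directory of model weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

inputs = tokenizer("Summarize today's incident report:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```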
The customer-controlled AI platform offered by Mattermost encompasses chat-based collaboration features such as messaging, file and media sharing, search, integrations, custom emojis and more.
The platform also facilitates the integration of conversational AI bots into channels, allowing them to respond to inquiries and requests using different LLMs, including models from the HuggingFace AI community.
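Mattermost's v4 REST API exposes a posts endpoint that a bot can use to reply in a channel. The sketch below, which is illustrative rather than anything the company ships, pairs that endpoint with a small open-source Hugging Face model; the server URL, bot token, channel ID and model choice are all placeholder assumptions:

```python
# Minimal sketch of a channel bot: generate a reply with a locally hosted
# Hugging Face model, then post it via Mattermost's v4 REST API.
# URL, token, channel ID and model are placeholders, not values from the article.
import requests
from transformers import pipeline

MATTERMOST_URL = "https://mattermost.example.com"  # hypothetical server
BOT_TOKEN = "bot-access-token"                     # hypothetical bot token
CHANNEL_ID = "channel-id"                          # hypothetical channel

# Any compatible text-generation model can be swapped in here, which is
# the interchangeability the platform emphasizes.
generator = pipeline("text-generation", model="gpt2")

def answer_and_post(question: str) -> None:
    reply = generator(question, max_new_tokens=80)[0]["generated_text"]
    resp = requests.post(
        f"{MATTERMOST_URL}/api/v4/posts",
        headers={"Authorization": f"Bearer {BOT_TOKEN}"},
        json={"channel_id": CHANNEL_ID, "message": reply},
        timeout=10,
    )
    resp.raise_for_status()

answer_and_post("What does our deployment runbook say about failover?")
```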
This scalable AI model framework supports deployments ranging from private clouds or data centers running LLMs for group work down to commodity laptops where individual developers can explore LLM capabilities. The platform addresses security and compliance requirements, including automated code scanning, network traffic monitoring and deployment to restricted networks.
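As a rough illustration of the laptop end of that range, a developer could experiment with a small open model entirely on CPU. The snippet below is a sketch, with distilgpt2 standing in as an example model rather than one Mattermost specifies:

```python
# Minimal sketch of laptop-scale exploration: a small open model on CPU is
# enough to experiment with prompts before committing to a larger deployment.
from transformers import pipeline

chat = pipeline("text-generation", model="distilgpt2", device=-1)  # -1 = CPU

while True:
    prompt = input("prompt> ")
    if not prompt:
        break
    print(chat(prompt, max_new_tokens=60)[0]["generated_text"])
```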
Craig Jones, vice president of security operations at Ontinue, points out data governance is particularly important in sectors dealing with sensitive information or where AI decisions could have profound effects.
“In healthcare, patient records are confidential, requiring strict data control,” he explains. “Also, AI decisions can impact patient health, necessitating transparency and auditability.”
Because the financial services sector handles sensitive personal and financial data, governance there is crucial to avoid misuse and prevent financially damaging AI errors. Defense and government agencies, meanwhile, often deal with classified information, so secure and transparent AI processes are vital.
“Technology companies must consider that handling user data and relying on AI makes robust data governance critical for user trust and legal adherence,” Jones adds.
He cautions that vendor lock-in can severely restrict an organization’s flexibility, leaving it subject to the vendor’s whims regarding product updates, pricing or even discontinuation.
“This dependence also hampers control over data and configurations, making the organization vulnerable to potential security risks from the vendor’s practices,” he says. “Moreover, vendor lock-in can undermine negotiating power since moving to a competitor becomes a logistical nightmare.”
Specifically with AI, dependence on a vendor’s particular model or API could cause operational disruptions if that model or API is changed or discontinued.
“Therefore, independence is crucial when adopting Gen AI tools, ensuring data security, control and operational continuity,” Jones advises.
Joseph Carson, chief security scientist and advisory CISO at Delinea, agrees there are many data security considerations and risks associated with using generative AI solutions that could have devastating implications for businesses and governments.
“Generative AI is only as good as the data it has been trained on, so an important security measure is to not solely rely on Gen AI for accuracy and correctness,” he explains.
This means that until those solutions include a confidence or accuracy rating, organizations must review and analyze every response to verify its accuracy before allowing the output to feed their own decision-making.
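That review step translates into a simple engineering pattern: hold every generated answer in a queue until a human approves it. The sketch below is purely illustrative; the data structure and function names are hypothetical, not part of any vendor's API:

```python
# Minimal sketch of the review step Carson describes: no generated answer
# is released for use until a human has verified it.
from dataclasses import dataclass

@dataclass
class PendingAnswer:
    question: str
    generated: str
    approved: bool = False
    reviewer_note: str = ""

review_queue: list[PendingAnswer] = []

def submit(question: str, generated: str) -> None:
    # Generated answers land here instead of going straight to users.
    review_queue.append(PendingAnswer(question, generated))

def approve(item: PendingAnswer, note: str = "") -> str:
    item.approved = True
    item.reviewer_note = note
    return item.generated  # only now is the answer released for use

submit("What is our data-retention policy?", "Generated draft answer ...")
```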
He adds that another important risk centers on the data rights management issues Gen AI introduces, which could result in legal challenges.
“Gen AI guard rails can be easily abused, which can result in leaking information or even sensitive code,” Carson says. “One of the big reasons for carefully deciding on data flow and vendor lock-in is that the Gen AI industry is quickly evolving.”
That evolution will likely bring acquisitions and other changes that could raise legal and national security concerns.
Carson points out that in such a fast-paced industry, vendor lock-in must also be minimized to ensure Gen AI tools remain portable and legal in the countries where they are used and operated.
“Governance is particularly important in industries that use personal data and personally identifiable information, such as facial recognition, which has already seen considerable controversy over bias and false positives,” he says.
These are situations with no room for mistakes or low accuracy; the decisions must always be right, so governance must be a top priority to ensure mistakes are avoided or handled accordingly.