Saudi Startup Builds AI Compliance Platform Atop Meta’s Llama 3

A startup based in Saudi Arabia is building its platform for assessing AI models for risk and regulatory compliance atop Meta’s open Llama 3 large language model, drawn by the LLM’s capabilities in areas such as conversational AI, ethics, and security, according to Meta.

SAIF CHECK offers a range of services to organizations in the Middle East and North Africa (MENA) region, covering assessment, auditing, and certification of their AI models by checking them for legal, regulatory, privacy, and data security risks, Meta wrote in a blog post. That includes examining regulations around the world and then creating or sourcing documentation that outlines those regulatory environments.

Those findings are then integrated into SAIF CHECK’s knowledge base, which must be updated quickly and frequently to ensure an enterprise’s AI model meets the latest regulatory requirements. The system uses a retrieval-augmented generation (RAG) framework that draws on a large corpus of AI regulations.
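
As a rough illustration of that retrieval step, a minimal RAG loop might look like the following sketch, which scores a toy corpus with TF-IDF similarity and splices the best matches into a prompt. The snippet texts, the `top_k` value, and the prompt template are assumptions for illustration, not SAIF CHECK’s actual pipeline.

```python
# Minimal RAG sketch: find the regulation passages most relevant to a
# question, then splice them into an LLM prompt. The corpus snippets,
# top_k value, and prompt template are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

regulation_chunks = [
    "Placeholder text of an EU AI Act passage on high-risk systems.",
    "Placeholder text of a MENA data-protection rule on AI training data.",
    "Placeholder text of a GDPR passage on automated decision-making.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    vectorizer = TfidfVectorizer().fit(regulation_chunks)
    doc_matrix = vectorizer.transform(regulation_chunks)
    # Rank every chunk by cosine similarity to the query.
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix).ravel()
    return [regulation_chunks[i] for i in scores.argsort()[::-1][:top_k]]

def build_prompt(query: str) -> str:
    # Ground the LLM's answer in the retrieved regulatory context.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Is my model high-risk under the EU AI Act?"))
```

In a production system the TF-IDF step would typically be replaced by dense embeddings and a vector store, but the shape of the loop (retrieve, then generate) is the same.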

The goal is to make it easy for anyone in an organization to use the generative AI system to get information about regulations anywhere in the world and how their company complies with them, according to Shaista Hussain, co-founder and CEO of SAIF CHECK.

“SAIF CHECK’s goal is to make model evaluation a conversational workflow that a technical or non-technical user could complete,” Hussain said in the blog post. “We’ve integrated Llama 3 into a system designed to retain a customer’s unique business context [country of operation, regulatory agency] while retrieving and synthesizing information from diverse sources.”


AI Laws and Compliance Services

The accelerated innovation and adoption of generative AI caught many governments by surprise and has them scrambling to put regulations and policies in place to address everything from data security and sovereignty to ethics. The European Union made news in March when its Parliament passed the EU AI Act, the world’s first comprehensive set of regulations aimed at governing the rapidly expanding adoption and use of AI.

Other countries are also working on the issue, which will create a patchwork of government regulations that organizations running AI systems will have to comply with.

“Countries worldwide are designing and implementing AI governance legislation and policies commensurate to the velocity and variety of proliferating AI-powered technologies,” wrote the International Association of Privacy Professionals (IAPP), which runs its own global AI law and policy tracker. “Efforts include the development of comprehensive legislation, focused legislation for specific use cases, national AI strategies or policies, and voluntary guidelines and standards.”

Right now there is no standard approach to creating such regulations, though patterns are emerging in the effort to develop them, the IAPP wrote.

“Given the transformative nature of AI technology, the challenge for jurisdictions is to find a balance between innovation and regulation of risks,” the organization wrote. “Therefore, governance of AI often, if not always, begins with a jurisdiction rolling out a national strategy or ethics policy instead of legislating from the get-go.”

Tech companies are offering services to help enterprises comply with the provisions of the EU AI Act. Last month, Microsoft, global professional services and accounting firm KPMG, and Cranium AI – whose platform is designed to give organizations visibility, security, and governance over their AI environments – launched the EU AI Hub to give enterprises a systematic approach to ensuring their AI systems meet the EU’s requirements.

Assessments and Certifications

SAIF CHECK’s system is aimed at helping organizations in the MENA region avoid the fines that can come from failing to comply with government regulations like the EU AI Act. It runs a risk assessment of a company’s AI environment and produces a report analyzing what is working well, what needs improvement, and what isn’t working at all, then outlines steps to address the shortfalls.

Companies that eventually pass the assessments receive a SAIF CHECK certification that they can then bring to customers and governments to show their AI solutions are secure and safe.

Llama 3 had several key selling points for SAIF CHECK, including Meta’s approach to a common problem in conversational AI systems: losing track of context over the course of a conversation. Meta used the example of telling an AI model to respond to prompts only in haiku. That request needs to be repeated multiple times during the conversation or the model will forget it.

Having to repeat the instruction uses up tokens and limits the length of the conversation, the company wrote.
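
A toy sketch makes that cost concrete: when the instruction must be restated on every turn, each repetition consumes context-window budget. Word counts stand in for tokens here, and the messages are invented for illustration.

```python
# Illustrative sketch of the token cost: without instruction retention,
# the "respond only in haiku" request must ride along with every user
# turn. Word counts approximate tokens; this is not Meta's code.
INSTRUCTION = "Respond to every prompt only in haiku."

def repeated_instruction(turns: list[str]) -> list[str]:
    # Restate the instruction on every turn so the model does not drift.
    return [f"{INSTRUCTION} {t}" for t in turns]

def single_instruction(turns: list[str]) -> list[str]:
    # State the instruction once, up front, and trust the model to keep it.
    return [f"{INSTRUCTION} {turns[0]}"] + turns[1:]

turns = ["Describe autumn.", "Now the sea.", "Now a city at night."]
tokens = lambda msgs: sum(len(m.split()) for m in msgs)
print(tokens(repeated_instruction(turns)), "vs", tokens(single_instruction(turns)))
```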

Using Reinforcement Learning

Meta’s Llama team created a training technique called Ghost Attention (GAtt), a way to use reinforcement learning from human feedback to fine-tune responses so the model keeps the initial instruction in mind.
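
A hedged sketch of the GAtt data-construction idea, as Meta has described it: attach the instruction to every user turn while sampling replies, then keep it only on the first turn in the final fine-tuning sample, teaching the model to honor the instruction across the whole dialogue. The `sample_reply` helper is a hypothetical stand-in for a call to the model being fine-tuned, not a real API.

```python
# Hedged sketch of the GAtt data trick: the instruction rides on every
# user turn at generation time, but appears only on the first turn in
# the training sample. `sample_reply` is a hypothetical placeholder.
def sample_reply(prompt: str) -> str:
    return f"<haiku reply to {prompt!r}>"  # placeholder model call

def build_gatt_sample(instruction: str, user_turns: list[str]) -> list[dict]:
    sample = []
    for i, turn in enumerate(user_turns):
        reply = sample_reply(f"{instruction} {turn}")          # generation time
        shown = f"{instruction} {turn}" if i == 0 else turn    # training time
        sample.append({"role": "user", "content": shown})
        sample.append({"role": "assistant", "content": reply})
    return sample

for msg in build_gatt_sample("Respond only in haiku.",
                             ["Describe autumn.", "Now the sea."]):
    print(msg["role"], "->", msg["content"])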

“To customize Llama for their use case, SAIF CHECK has configured multiple layers through an additive fine-tuning process,” Meta wrote. “Using Llama 3 Instruct, the generation layer receives a person’s prompt and context. Its outputs are fed into a regulatory classifier trained on various regulatory bodies and country-specific regulatory documents from SAIF CHECK’s comprehensive knowledge base. This enables the model to categorize the prompt and context within a distinct country and regulatory body.”
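
In code, that layered flow might be wired up roughly as follows; `llama3_generate` and `regulatory_classifier` are hypothetical placeholders standing in for the real generation layer and SAIF CHECK’s fine-tuned classifier, which are not publicly documented.

```python
# Hedged sketch of the layered flow Meta describes: a generation layer
# takes the user's prompt plus context, and its output feeds a classifier
# that tags the country and regulatory body. Both callables below are
# hypothetical placeholders, not real APIs.
from dataclasses import dataclass

@dataclass
class RegulatoryTag:
    country: str
    regulator: str

def llama3_generate(prompt: str, context: str) -> str:
    # Placeholder for a call to a Llama 3 Instruct endpoint.
    return f"Draft answer grounded in: {context[:40]}..."

def regulatory_classifier(text: str) -> RegulatoryTag:
    # Placeholder for a classifier trained on country-specific
    # regulatory documents from the knowledge base.
    return RegulatoryTag(country="EU", regulator="European Commission")

def assess(prompt: str, context: str) -> tuple[str, RegulatoryTag]:
    draft = llama3_generate(prompt, context)
    tag = regulatory_classifier(f"{prompt}\n{draft}")
    return draft, tag

answer, tag = assess("Is my scoring model high-risk?", "Operates in the EU.")
print(tag.country, tag.regulator)
```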

SAIF CHECK’s Hussain said Meta’s work on the ethics of Llama 3 – including red teaming and blue teaming the LLM – will ensure that the model’s ethics align with her company’s. In addition, she said SAIF CHECK’s practice of “chunking” documents into smaller, more manageable pieces will work well with Llama 3.
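
Chunking itself is straightforward; a minimal sketch might split a long regulation into overlapping, fixed-size pieces that fit comfortably in the model’s context window. The chunk size and overlap below are illustrative choices, not SAIF CHECK’s settings.

```python
# Minimal chunking sketch: split a long regulatory document into
# overlapping word-count windows. Sizes are illustrative only.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    words = text.split()
    step = chunk_size - overlap  # overlap preserves context across boundaries
    return [
        " ".join(words[i : i + chunk_size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

doc = "Article 1. " * 500  # stand-in for a long regulation text
chunks = chunk_text(doc)
print(len(chunks), "chunks;", len(chunks[0].split()), "words in the first")
```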

Hussain also liked that Llama 3 was an open LLM, which dovetailed with her company’s focus on transparency.

“Since Llama is open source, we can literally see its development, trust its documentation, and have confidence that we’re not alone in understanding and implementing this model into real-world services,” she said.
