This year has seen FHE and AI become tightly linked, as the world attempts to balance data accessibility and privacy.
Looking back on 2024, one of the most dominant themes to emerge in AI has been privacy – or rather, growing concerns over its absence.
With the rapid adoption of generative AI tools, there’s been a huge amount of scrutiny over data usage, model training practices, and the potential misuse of sensitive information.
Alongside regulatory action and a growing focus on ethical AI, a number of high-profile incidents have prompted these privacy discussions.
Take the highly publicized Samsung case, for example, which involved employees unintentionally leaking proprietary code and sensitive company information by inputting it into ChatGPT to solve coding problems.
Following this incident – and given the very real potential for further data leakage – many companies, Samsung included, have since tightened their policies and even implemented restrictions on AI tool usage. But avoidance measures can only go so far.
Data Accessibility Is Crucial for AI’s Effectiveness
As we’re discovering more each day, AI tools offer real value to businesses, researchers, and individuals alike – unlocking levels of efficiency, innovation, and personalization never seen before. But for AI to work effectively, it needs access to substantial, diverse datasets to train on and learn from.
For those keen to embrace AI rather than reject it – while simultaneously safeguarding privacy – we’re starting to see rising interest in advanced privacy technologies. Take sectors like health care, finance, and government, where data privacy is critical but so is the need to share data. In health care, for example, enabling secure, privacy-preserving data analysis across institutions would mean that hospitals could collaborate on patient data for research or AI training without ever exposing sensitive information.
This is just one of the use cases where Fully Homomorphic Encryption (FHE) comes in – a technology that has become tightly linked with AI, particularly over this past year.
Surge in FHE-Related Announcements and Advancements
For those not in the know, FHE is an advanced cryptographic method that allows computations to be performed on encrypted data without ever decrypting it.
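To make that concrete, here is a minimal sketch using the open-source TenSEAL library (discussed further below): two vectors are encrypted under the CKKS scheme, summed while still encrypted, and only then decrypted by the key holder. The parameter choices and values are illustrative assumptions, not recommendations.

```python
# pip install tenseal
import tenseal as ts

# Set up a CKKS context (illustrative parameters, not a security recommendation).
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

# Encrypt two vectors of sensitive values.
enc_a = ts.ckks_vector(context, [1.0, 2.0, 3.0])
enc_b = ts.ckks_vector(context, [10.0, 20.0, 30.0])

# Add them homomorphically: whoever performs this step never sees the plaintexts.
enc_sum = enc_a + enc_b

# Only the secret-key holder can decrypt the result.
print(enc_sum.decrypt())  # ~[11.0, 22.0, 33.0] (CKKS arithmetic is approximate)
```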
Because it can enhance privacy without compromising analytical performance, we’ve seen a number of tech giants showcase FHE’s potential lately.
Apple’s privacy-preserving technologies team, for example, made its much-discussed swift-homomorphic-encryption announcement just this year. The open-source library implements the Brakerski-Fan-Vercauteren (BFV) FHE scheme and allows developers and researchers to collaborate and build secure applications with it – a move that shows big actors in technology are recognizing the value of handling sensitive data responsibly.
While Apple has followed similar initiatives by Microsoft and IBM, which released the HE libraries SEAL and HElib much earlier, this particular update marks an important step. Because Apple has a track record of integrating complex technologies into mainstream tools and workflows – and because its library is written in Swift, a widely used programming language – FHE is now far more accessible to the developer community.
But as well as the tech world adapting to support FHE more effectively, public perception of data privacy will also play a role in FHE demand. We expect that as people become more aware of how their data is used – and of the potential for misuse – organizations will in turn have to adopt advanced privacy measures to meet this shift in opinion.
While this all spells good news, challenges remain. One of the main issues is the high computational cost of encryption, decryption, and homomorphic operations, which can still be orders of magnitude slower than their plaintext equivalents. Large datasets also raise scalability issues, since current FHE schemes can struggle with memory and processing requirements, while real-time processing latency for applications like video streaming or live analytics remains equally challenging.
That being said, FHE is developing fast, with improvements in computational efficiency and reductions in cost arriving by the day, pushing FHE closer to large-scale deployment. These include optimizations to cryptographic schemes like CKKS, which supports approximate arithmetic and is more efficient for AI tasks, and improved bootstrapping algorithms that have drastically reduced the time needed to refresh ciphertexts.
Libraries such as TenSEAL and Concrete have also been further optimized, making it easier for developers to deploy FHE at scale, while more developer-friendly APIs have simplified integration into existing workflows – see the sketch below for a sense of what this looks like in practice. And finally, hardware acceleration through GPUs and FPGAs has reduced computational demands.
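As a rough illustration of why CKKS suits AI workloads, here is a sketch of encrypted linear-model inference using TenSEAL. The weights, bias, and feature values are invented for illustration, and the parameters are assumptions; in a real deployment the client would hold the secret key while the server computes on ciphertexts only.

```python
# pip install tenseal
import tenseal as ts

# CKKS context (illustrative parameters; real deployments need careful tuning).
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

# Hypothetical trained linear model (plaintext weights held by the server).
weights = [0.25, -0.5, 1.0, 0.75]
bias = 0.1

# The client encrypts its sensitive feature vector before sending it.
enc_features = ts.ckks_vector(context, [4.0, 2.0, 1.0, 3.0])

# The server computes a dot product plus bias directly on the ciphertext.
enc_score = enc_features.dot(weights) + bias

# Only the client (the key holder) can decrypt the prediction.
print(enc_score.decrypt())  # ~[3.35], approximate by design under CKKS
```

The same pattern – encrypt client-side, compute server-side, decrypt client-side – extends to larger models, with the usual caveat that deeper computations consume more of the ciphertext’s noise budget and may require bootstrapping.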
FHE Heading Mainstream?
All of the above signifies that FHE is getting ready to go mainstream and accelerate its integration into AI workflows, set to become a key part of the privacy-enhancing technology toolkit.
In fact, within the next 5–10 years we believe FHE will become the default standard for privacy-preserving AI, making today’s privacy concerns a thing of the past.