Keeping Chatbots From Leaking Your Data: FHE and Private LLM Inference

Originally published on LinkedIn

Have you ever hesitated to ask an AI something truly personal or privileged, worried about where your data goes, who might see it, and what happens if the platform leaks it?

With most AI systems, your information has to be decrypted on the company's servers before the model can process it, creating a major privacy risk. But a remarkable breakthrough in cryptography is changing the game. It allows a Large Language Model (LLM) to work with information it can never actually "see"—keeping your data encrypted from start to finish.

This isn't science fiction anymore. The team at Duality Technologies just made this a reality by developing a framework for private LLM inference using Fully Homomorphic Encryption (FHE).
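To get a feel for what "computing on data you can never see" means, here's a toy sketch using textbook (unpadded) RSA, which happens to be *multiplicatively* homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. To be clear, this is my illustration, not Duality's scheme—it is neither secure nor fully homomorphic (FHE also supports additions, with careful noise management), but it shows the core idea:

```python
# Toy example of a homomorphic property -- NOT secure, NOT FHE.
# Textbook RSA: Enc(a) * Enc(b) mod n decrypts to a * b,
# so a server can multiply values it never sees in the clear.

p, q = 61, 53              # tiny primes, for illustration only
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# The "server" multiplies the ciphertexts without ever decrypting them...
c_product = (ca * cb) % n

# ...and only the key holder can recover the result.
assert decrypt(c_product) == a * b   # 42
print("server computed 7 * 6 on encrypted data:", decrypt(c_product))
```

A fully homomorphic scheme extends this so that *any* circuit of additions and multiplications can be evaluated on ciphertexts—which is what makes running an entire LLM forward pass on encrypted input conceivable.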

They've successfully demonstrated it on established models like Google's BERT. Because FHE can only evaluate additions and multiplications, the nonlinear parts of the transformer had to be replaced with clever approximations so the model runs efficiently under encryption. The result is an AI that can process encrypted queries and generate encrypted responses, a truly remarkable step forward for secure AI.

The implications are huge, potentially preventing data leaks and unlocking AI for the most sensitive applications in healthcare, finance, and beyond.

Read more in IEEE Spectrum: Tech Keeps Chatbots From Leaking Your Data

(Thanks to Dan York from our team at Internet Society for bringing this to my attention.)