Artificial intelligence (AI) and WhatsApp may be a combo devised in the fiery depths of oblivion, but that isn’t stopping Meta from expanding the technology’s reach inside the messenger. What started as simple AI-powered customer support soon transitioned to a full-blown Meta AI assistant living on your home screen. And now, the Facebook owner wants to fill every nook and cranny of your private chats with AI.
Why not try out these totally secure AI features?
Tucked away in a blog post published during the company’s first-ever LlamaCon, Meta briefly mentioned something it calls “Private Processing” that’ll deliver in-chat AI features directly to its messaging app. Tellingly, the post focuses on the security implications of the new AI features rather than the features themselves. Slightly ominous, no?
“We’re sharing the first look into Private Processing, our new technology that will help WhatsApp users leverage AI capabilities for things like summarizing unread messages or refining them, while keeping messages private so that Meta or WhatsApp cannot access them.”
It does, however, reveal that WhatsApp users will soon have to contend with Meta AI in some form inside their private chats, helping them summarise or ‘refine’ long messages (whatever that means). It’s mightily similar to the AI features landing on nearly every smartphone these days, except this time it’s built into WhatsApp itself. Meta didn’t say when these features might be ready for consumption.
Meta is so quick to forestall criticism and assure users that these features will be private, rather than talking about the features themselves, that we can’t help but feel slightly worried about what’s to come. WhatsApp has long championed privacy, promising end-to-end encryption across all chats and consistently delivering security-themed updates.
Taking that privacy commitment a step further, Meta provides a more in-depth look behind the curtain at Private Processing on its Engineering blog, including a look at the threat model “that guides how we identify and defend against potential attack vectors.”
“We’re working with the security community to audit and improve our architecture and will continue to build and strengthen Private Processing in the open, in collaboration with researchers, before we launch it in product.”