
WhatsApp is facing a storm of user backlash over its new Meta AI feature, embedded directly within the messaging app. While WhatsApp insists the AI is “entirely optional,” users are discovering they can’t actually remove it. The ever-present Meta AI logo – a blue circle with pink and green splashes – sits in the bottom right of the Chats screen, constantly inviting interaction with the AI chatbot.
This forced integration is drawing significant frustration, with users taking to X, Bluesky, and Reddit to voice their anger. The inability to disable the feature has sparked privacy concerns and comparisons to Microsoft’s controversial Recall feature, which initially launched as an always-on tool before a public outcry forced a reversal.
“We think giving people these options is a good thing and we’re always listening to feedback from our users,” WhatsApp told the BBC, attempting to quell the rising discontent. The rollout of the AI feature is currently limited to select countries, with Meta advising users that it “might not be available to you yet, even if other users in your country have access”.
Besides the icon, a search bar at the top now prompts users to “Ask Meta AI or Search”. This feature mirrors integrations already present in Facebook Messenger and Instagram, all powered by Meta’s Llama 4 large language model. Before engaging with the AI, users are greeted with a lengthy message from Meta, emphasizing that Meta AI is “optional”.
WhatsApp’s website claims Meta AI “can answer your questions, teach you something, or help come up with new ideas”. However, early user experiences have been mixed. While the AI can quickly provide information like weather updates, it sometimes struggles with accuracy, as demonstrated by one user’s query about Glasgow weather, which resulted in a link to a London railway station.
Dr. Kris Shrishak, an AI and privacy advisor, has strongly criticized Meta’s approach, accusing the company of “exploiting its existing market” and “using people as test subjects for AI.” He argues that “no one should be forced to use AI” and calls Meta’s AI models “a privacy violation by design”, citing the company’s alleged use of web scraping to train its AI on personal data and pirated books.
An investigation by The Atlantic suggests Meta may have accessed millions of pirated books and research papers through LibGen to train its Llama AI, leading to legal challenges from author groups worldwide. Meta declined to comment on the investigation. When using Meta AI, WhatsApp states that the chatbot “can only read messages people share with it” and that “Meta can’t read any other messages in your personal chats, as your personal messages remain end to end encrypted.”
The Information Commissioner’s Office stated it would “continue to monitor the adoption of Meta AI’s technology and use of personal data within WhatsApp,” emphasizing the need for organizations to comply with data protection obligations and take extra precautions when processing children’s data.
Dr. Shrishak offers users a final caution: “Every time you use this feature and communicate with Meta AI, you need to remember that one of the ends is Meta, not your friend.” WhatsApp, for its part, advises: “Don’t share information, including sensitive topics, about others or yourself that you don’t want the AI to retain and use.”