Technology

This AI App Is Keeping the Receipts

What’s going on: Meta’s AI app is here to chat, help, and remember, maybe a little too well. Launched earlier this week, the app shot up the iPhone download charts by promising a more “personalized” AI experience that “gets to know your preferences.” But that familiarity starts before you type a single word: the AI bot pulls insights from your digital footprint across its sister apps, Facebook and Instagram. (If you don’t want to hand over years of likes, clicks, and scrolls, you can always create a new account.) Once you start chatting, it builds a “memory” file that logs your interests and personal details, often without clearly asking for permission. Users can manually delete conversations, but anything shared, including chats, voice memos, and images, is saved and used to train future AI models. For US users, there’s no way to opt out. So if you’re prone to oversharing, consider yourself warned.

What it means: Meta’s push for a personalized AI experience raises significant privacy concerns. Critics warn that AI models can sometimes “leak” training data in future conversations; that’s why one data privacy expert told The Washington Post he would only use AI chatbots “for surface-level, fun prompts.” Even Meta’s own terms of service urge caution: don’t share anything you wouldn’t want AIs to use and keep. Beyond privacy, experts worry about the psychological impact of bots that feel like companions, which can be particularly harmful to vulnerable users. Some warn these models might subtly influence behavior based on data users didn’t know they were giving up.

Related: Hard Pass: Meet the AI Holdouts (BBC)