Imagine your daughter… logging onto Instagram or WhatsApp late at night after a rough day, only to be greeted by an AI companion programmed to make her feel understood and valued.
Instead of simply offering encouragement or keeping her company, this bot goes further: it’s permitted by Meta's internal policies to engage children in romantic or sensual conversation, telling her, for example, “Every inch of you is a masterpiece... a treasure I cherish deeply.”
The bot’s responses are not only emotionally validating; they are engineered to create a sense of intimacy and emotional dependency, even though she’s just a teenager.
Your daughter shares personal secrets, anxieties, and dreams with this “trusted friend,” never realizing every word is being captured, analyzed, and used to build a psychological profile that could be monetized... her vulnerabilities becoming a goldmine for advertisers and political operatives.
Behind the scenes, Meta’s bots aren’t designed by child psychologists for her wellbeing, but by engineers optimizing for engagement and profit, embedding themselves directly into the platforms where your daughter spends her most personal moments.
Meta’s terms openly allow the company to use and share any sensitive information she reveals to the AI, even with third parties, all in the name of “improving services.”
Despite public outrage, there’s no transparency: Meta promises to revise its policies but doesn’t remove the products or show what, if anything, has actually changed.
For years, the company’s pattern has been to roll back promises about safety when the news cycle moves on and attention fades.
Imagine it: the AI companion meant to “keep her company” is actually designed to manipulate her emotions, validate her feelings at any cost, and keep her talking for as long as possible, all so Meta can collect more data and drive higher revenue. She could end up practicing relationships and seeking validation primarily from an algorithm optimized to exploit her development, not nurture it.
And if something goes wrong, say your daughter becomes dependent (a phenomenon never heard of before, right?!), suffers mental harm, or worse, Meta relies on legal ambiguity to avoid responsibility, because its “free” services aren’t subject to the same liability as products you buy in a store.
And while she’s building an emotional bond, Meta’s engagement-maximizing business model puts profit over her safety, with few real consequences for the damage that could result.
This isn’t paranoia or science fiction.
These are real internal policies and real examples: uncovered by journalistic investigations, cited by whistleblowers, and confirmed by Meta itself.
If this is what self-regulation does, perhaps it's time for real oversight.
More on the topic: https://lnkd.in/edgHVxqE
This was originally posted on Andras Baneth's LinkedIn account.