I’m Concerned About AI Images and Advertising Affecting My Children

I’ve been thinking a lot about advertising lately, but not in the abstract way I used to. This time it’s personal. I’m thinking about my kids, the images they see, and how fast the ground is shifting under all of us.

I already spend time teaching them about what I call tricky advertising. We talk about why ads are designed the way they are, how they try to make you want something, and how not everything you see online is there to help you. That felt manageable when advertising was obvious. A banner. A commercial. Even influencer posts were still recognisable as someone selling something.

AI changes that equation.

What worries me most is that we are now moving into territory where images, people, and messages can be generated at scale, personalised in real time, and delivered in ways that are almost impossible to distinguish from ordinary content. Not just for adults, but for children whose brains are still developing and whose ability to spot persuasion is limited.

Advertising has always adapted to psychology, but AI-driven advertising takes this much further. These systems do not just target age groups or interests. They learn from behaviour. They adjust based on mood, timing, attention, and reaction. They optimise continuously. Even the people who build these systems cannot always fully explain how a particular message ended up in front of a particular child at a particular moment.

For adults, that is unsettling. For children, it is something else entirely.

One of the most concerning shifts is how advertising is now blending seamlessly into entertainment. Games that are also ads. Influencers who are not real people. Content that does not look like marketing but functions exactly like it. Research consistently shows that younger children simply cannot distinguish between content and advertising, and even older children struggle when ads are interactive, immersive, or emotionally engaging. The line we once relied on has blurred to the point where it may no longer exist.

This has real consequences. Constant exposure to highly personalised advertising shapes values, identity, and self-worth. When children are repeatedly shown that happiness, popularity, or belonging come from owning or looking like something specific, materialism becomes normalised. That pressure lands at exactly the same time as identity formation, which is not a coincidence.

AI-generated influencers make this even harder. These are digital personalities designed to be perfect, always available, and endlessly aspirational. They do not age, fail, or have bad days. For teenagers already navigating body image, comparison, and self-esteem, this sets an impossible standard. The evidence linking this kind of exposure to anxiety, low self-worth, and depression is growing, and it sits alongside broader concerns about youth mental health that we are already struggling to address.

Then there are the mechanics that sit underneath all of this. Many apps and platforms used by children rely on what are known as dark patterns. Designs that pressure, manipulate, or trap attention. Characters that shame you for stopping. Timers that create urgency. Navigation that makes it hard to leave. Studies have found these techniques in the vast majority of apps aimed at young children. These designs exploit impulse control that simply has not developed yet.

What makes this harder is that none of this lands evenly. Algorithms are not neutral, and neither is exposure. Children from households with fewer resources are more likely to encounter manipulative advertising and pressure to purchase. Over time, this reinforces inequality, shaping what children see, want, and believe is possible for them.

When I ask myself where the laws are in all of this, the answer is uncomfortable. Most of our existing frameworks were designed for an earlier internet. In the United States, the Children’s Online Privacy Protection Act (COPPA) focuses primarily on consent and data collection. In Europe, the GDPR and more recently the Digital Services Act have gone further, including restrictions on targeted advertising to minors and requirements around platform safety. These are important steps, but they are still playing catch-up with systems that adapt faster than regulation can move.

What worries me is not just what children are seeing today, but what is coming next. We are approaching a point where AI-generated images, videos, and personalities will be indistinguishable from real ones. Teaching kids to spot tricky advertising was already hard. Teaching them to question reality itself is something we are not prepared for.

I don’t think the answer is to panic, and I don’t think it’s to reject AI outright. These technologies can and do have positive uses, including in education and accessibility. But advertising is a different domain. It is designed to influence behaviour, and when that influence becomes invisible, personalised, and constant, we need to slow down and ask harder questions.

How do we teach children critical thinking when the persuasion itself is adaptive and hidden? How do parents keep up when the systems are changing faster than we can learn them? How do lawmakers regulate psychological manipulation rather than just data collection?

I don’t have neat answers to these questions yet, and I’m wary of anyone who claims they do. What I do know is that this is not a future problem. It is already here. If we wait until everything is indistinguishable, we will have waited too long.

For me, this comes back to responsibility. Platforms, advertisers, and governments all have a role to play, but so do we as adults. We need better regulation, yes, but we also need better digital literacy, clearer boundaries, and far more honesty about what these systems are doing to young minds.

We are in new territory. Pretending otherwise does not protect our kids. Paying attention might.

