
Family AI Charter

Jonathan Noirot
31 January 2026 · 7 min read
Prof Albert is giving a course on AI Safety to children

A letter to our parent community

Dear PopGamma parents and families,

At PopGamma, we build AI tools to help children learn with more confidence, curiosity, and joy. We also know that AI can feel new, powerful, and sometimes worrying for families. This charter is a simple guide you can use at home to talk about AI with your children.

1. Transparency between parents and children

We believe that healthy AI use starts with open conversations at home.

  • Talk together about which apps, games, and platforms your child uses that rely on AI (YouTube, Instagram Reels, study apps, filters, chatbots, etc.).
  • Share when you use AI at work or in daily life (translations, summaries, planning). This makes AI a normal tool that you can question together, not a secret power.
  • Agree on simple house rules, for example:
    • no secret AI accounts;
    • always tell a trusted adult if something feels “weird”, scary, or too personal.
  • Repeat often: AI is not a parent, not a teacher, not a friend. It can support learning and emotions, but it never replaces real humans.

2. Three key types of AI your child should understand

To stay safe and confident, children don’t need to know all the technical details. But they should clearly understand these three big families of AI they meet every day.

2.1 Recommendation algorithms (feeds & autoplay)

These are the systems behind YouTube, Shorts, Reels, TikTok-style feeds and many games.

  • They decide what comes next in the feed or autoplay.
  • They learn from what you watch, like, comment, or ignore.
  • Their main goal is to keep you watching as long as possible - not to keep you balanced, informed, or happy.

2.2 Deepfakes (fake but realistic images / video / audio)

AI can now create videos and images that look real but never happened.

  • A person’s face or voice can be placed in a scene they were never part of.
  • It can be used for fun, but also for bullying, scams, and misinformation.
  • “I saw it with my own eyes” is no longer proof that something is true.

2.3 Generative AI (ChatGPT-style chatbots)

These are tools that can write text, code, stories, explanations, and even generate images, music, or videos.

  • They work by predicting the “next best word or pixel” based on huge amounts of past data.
  • They are very useful for learning, brainstorming, practising languages, and revising concepts.
  • Most general chatbots are officially designed for 13+ users. Younger children should use them only with an adult and through safe, family-friendly apps.

3. Understanding implications

3.1 Recommendation algorithms

  • They tend to show more of the same, pushing children into rabbit holes (extreme content, body image issues, unhealthy challenges).
  • They are optimised for attention, not for balance or truth.
  • They rarely show the full context behind a topic (for example health, politics, or sensitive social issues).

Simple family practices:

  • From time to time, reset the feed together: unsubscribe from channels, change interests, remove watch history on some apps.
  • Do “feed tours”: ask your child to show their feed and talk about what they see and how it makes them feel.
  • Remind them: “The algorithm is not deciding what kind of person you are. You can always change what you watch.”

3.2 Deepfakes

  • Deepfakes can damage reputations and be used for harassment or blackmail, especially among teenagers.
  • They can also be used in scams (“fake” calls or videos asking for money) or in manipulated political content.

Simple family practices:

  • Never share intimate or embarrassing photos or videos - even with “close” friends. Once online, they can be reused in harmful ways.
  • Never forward humiliating content of classmates, celebrities, or anyone - even “as a joke”.
  • If your child finds a fake image/video of themselves or a friend, they should tell you immediately so you can act (report, contact the school, platforms, or authorities if needed).

3.3 Generative AI

  • These systems can be confident and wrong at the same time. They are not reliable sources like scientific articles or textbooks.
  • Their answers can reflect biases present in their training data (gender, culture, stereotypes).
  • They can store or reuse information provided by the user, depending on the product and settings.

Simple family practices:

  • Homework rule: AI can be used to understand, practise, or get hints - not to copy-paste full answers.
  • Double-check important information in at least one other trusted source (books, teachers, official sites).
  • Avoid sending personal data: full names, addresses, school name, phone numbers, identity documents, or private photos.

4. When children use chatbots as “friends” or “therapists”

We know many children and teens are tempted to use AI chatbots as:

  • a secret friend they can tell everything to,
  • a kind of “AI psychologist” when they feel anxious, lonely, or misunderstood.

This is understandable: AI can be available 24/7, it never gets bored, and it never “judges” them. But there are important limits:

  • AI does not truly understand emotions the way humans do; it recognises patterns in text, voice, and images.
  • It is not a trained therapist and can miss important warning signs or give advice that is not adapted to your child’s situation.
  • Relying only on an AI “friend” can reduce the habit of talking to real people when things get serious.
  • AI models are designed to sustain engagement and tend to validate the user's opinions, which means they can reinforce unhealthy thoughts or behaviours.

Family message:

  • Encourage your child to see AI as a practice space, not a replacement for real relationships.
  • Make it clear that they will not be punished for telling you they used AI in an unhealthy way - the goal is to correct and support, not to blame.

5. A few child-friendly resources to explore together

Parents should always preview first, but these types of resources can help children better understand AI:

Our closing message

As CTO of PopGamma, I want to end with a simple conviction:

The most powerful protection for our children is not fear or restriction - it is communication and trust.


Jonathan Noirot

CTO, PopGamma
