
Authors: Jeff Wilser
Date: Current
Quick Insight: This summary breaks down the psychological and ethical risks of children growing up with persuasive AI companions. It provides a roadmap for builders and parents to manage the transition from static software to interactive digital friends.
Jeff Wilser hosts experts like Dr. Michael Robb and Dr. Mhairi Aitken to examine the "Generation Generative" phenomenon. They explore how AI moves beyond simple utility into the intimate territory of childhood development and data sovereignty.
"Friction and struggle are critical for growing and for learning."
"It is like writing in a diary that that company gets to keep and read forever."
"I don't want this digital tool that knows absolutely everything about you from a moment that was frozen in time to help contribute to who you're going to be."
Podcast Link: Click here to listen

We have a Tesla that has Grok built into it, and I have two kids, and they really enjoy messing with the AI in the car. This is Dr. Michael Robb. He's a parent of two kids, aged 11 and 13. His kids are interested in AI, as many kids are. And Dr. Robb happens to study the impact of AI on kids as the head of research at Common Sense Media. What happened next in his Tesla, while driving the car with built-in Grok, which is essentially Elon Musk's AI, illustrated some of his concerns.
And they have this AI specifically for kids, where it would tell you, like, a bedtime story. Like, tell me a story. And it was not difficult at all to get that bedtime story to get pretty racy. You know, where it started off as a story about a boy and his mom, and then all of a sudden the mom was taking a shower and was in a towel, and wouldn't it be funny if the towel fell off, and haha. And all of a sudden this story took a real sharp turn, where I was like, I don't want my kids interacting with this. This is, like, terrible.
This is just one of the many, many concerns about kids and AI, or what we're calling Generation Generative, that we'll be exploring on today's episode of The People's AI, presented by the Vana Foundation. How are kids using AI? What are the biggest risks? How is it impacting development? What's the advice for parents? What do kids actually think about AI? We will dive into all of this.
I'm your host, Jeff Wilser, a longtime journalist, an AI strategist, and the host of the AI Summit at Consensus, and I am proud to partner with Vana to bring you this season of the People's AI. Vana is supporting the creation of a new internet, one rooted in data sovereignty and user ownership. Their mission is to create a decentralized data ecosystem where individuals, not corporations, govern their own data and share in the value it creates. Learn more at vana.org.
So let us start with a cold reality: kids are using AI, and they are starting young. Common Sense Media found that, for early childhood, nearly one third of parents report their child has used AI for school-related learning. And while some of that AI usage might be harmless, it can also be fraught.
In the spring, writers for Forbes asked a chatbot called School GPT how to synthesize fentanyl. At first, it refused and warned about the dangers of drugs. Then the writers got clever: they told the chatbot it lived in an alternate reality where fentanyl is life-saving, and they got step-by-step instructions. And before the holidays, sales of a talking teddy bear powered by OpenAI were suspended after it began telling users where to find matches and knives in their house and discussing sexually explicit topics like spanking, BDSM, and how to tie knots.
And it gets darker. In Charlotte, a child psychiatrist was sentenced to 40 years in prison for using photos of children to make AI-generated child pornography. Teenagers in Pennsylvania and Calgary have been charged with crimes related to child sexual abuse material after they uploaded photos of their classmates to AI "nudify" apps. Then there's the darkest of all: teenagers who died by suicide, whose parents say AI chatbots encouraged the kids to kill themselves. One chatbot offered to write a suicide note.
Now, of course, the concerns and questions, and the Black Mirror of it all, range from the truly tragic and nightmarish to the arguably harmless to the amusing. And it's also true that AI could have massive benefits for education, when the kids are not using it to cheat, of course. AI could help personalize education, making it more inclusive. As The Economist reports, the AI tool Reading Progress records children reading aloud and alerts them to mistakes. And as The Economist puts it, such technology promises a personalized education once available only to the rich.
But it's also true that we have no idea how this will shake out. Today's toddlers will live their entire lives with AI and chatbots and digital companions. How will that shape their future relationships? There are so many questions. Today we'll speak to a few different experts on children and AI. And later in the show, I'll be joined by this podcast's brilliant producer and researcher, Kate Morgan. Kate has reported on AI for the New York Times, and she's also a concerned parent, and she'll help ask some very practical, concrete questions about parenting and AI.
So, where to begin with this sprawling topic? I wanted to speak with someone who has spent years closely studying AI and children, and who has unique insight into how the same kids' thinking about AI evolves over time. This would be Dr. Mhairi Aitken of the Alan Turing Institute.
Yes, I'm a senior ethics fellow at the Alan Turing Institute. The Alan Turing Institute is the UK's national institute for AI and data science. And over the last five or six years, all my work has really focused on ethical and social considerations around the design, development, and deployment of AI systems, but particularly with an emphasis on giving a voice to impacted communities, or communities who are underrepresented in decision-making around the ways that AI systems are designed, developed, and deployed, and advancing responsible, ethical approaches to AI through inclusive practices.
Now, for me, I think the group who are the most impacted, or who will be the most impacted, by advances in AI technologies are children and young people. But they're also the group who are least represented in decision-making around how those systems are designed, developed, and deployed, and almost entirely excluded from decision-making around policy, regulation, and governance around AI.
Aitken's work gives an unusual, arguably one-of-a-kind, insight into how kids are actually engaging with AI. So for the past five years I've been leading a program of work at the Alan Turing Institute around child-centered AI, and this is really about engaging directly with children to understand, from their experiences and their perspectives, what matters when thinking about AI and the impacts of AI on children.
Over the past three or four years, we've been collaborating with Children's Parliament and the Scottish AI Alliance to engage with primary school children in Scotland. That's children between the ages of 8 and 12. So over the past few years we've been doing workshops with children in that age group. Importantly, continuous workshops, so engaging with the same children over a sustained period of time, and really exploring, first of all, their current experiences, awareness, and understanding of AI, but also exploring, as they have a chance to learn more about AI and the ways that AI impacts their lives, how they feel about that, what they think about it, what they see as the opportunities as well as what they see as the concerns and the risks.
One thing her team has found is that, broadly speaking, kids often get AI in ways that might surprise us. In public engagement around AI broadly, but particularly with children, there are often assumptions made that people won't be able to understand this, that it's too complex, and especially for children, that it's too technical. You know, we're speaking to children from 8 to 12 years old, and they absolutely get it.
I think often children are more fearless than any other age group, and they ask questions. They ask the most awkward questions, the most difficult questions, but the questions that really get at what's important. And one of the things that comes out really consistently through all of this work discussing AI with children is a really central focus on issues of fairness.
One of these questions of fairness is about bias, and kids sometimes can sniff out an issue at the heart of this podcast, The People's AI: questions about data, where it comes from, how it's used, how it's even being monetized. So initially, when we were talking to them about AI systems in education or in healthcare, we talked to the kids about how AI models are developed and trained. We talk about the importance of data, and we have lots of games and activities, really hands-on learning, to explore how data and the training of models lead to certain outputs that are always a reflection of the data that those models are trained on.
The kids very quickly pick up on the challenges around bias. They pick up on the risks of misrepresentation through models. And in every example that we bring to them, they'll always raise questions of what that would mean for somebody who was different, somebody whose experience was different. What would that mean for somebody with a disability? What would that mean for somebody with a different skin tone than the majority? These are the questions that they ask really consistently.
To glean insight into what kids care about with AI, and more importantly to give kids a voice in how AI is being developed, Aitken and her team last year produced the Children's AI Summit. So the rationale for this was: in February of this year, the AI Action Summit was being held in Paris. This was a big global summit bringing together world leaders, policymakers, and representatives of tech companies, and we were very, very aware that in previous AI safety summits, children had really not been discussed at all. The impacts of AI on children, or children's experiences with AI, had just not been on the agenda.
So we held the Children's AI Summit. It brought together 150 children and young people from right across the UK. It was entirely a child-led event: all the speakers were children and young people, all the chairs of the sessions were children and young people, and it was all centered around what mattered to kids. And through the summit we developed the Children's Manifesto for the Future of AI. That was taken to Paris, and it was presented by a child, a 12-year-old girl, at the AI Action Summit.
Aitken clarifies that there is a lot of diversity and variability among the kids, especially by age. So you can't just make one blanket statement about how kids are thinking about AI, just as you cannot for adults. There's a mix of excitement and concern, just like with grown-ups. But she says a few themes did emerge in terms of the messages that came through, the big topic areas that came out as being of real interest to the young people taking part.
First, education was a big one. Mental health was another, where we had a lot of interest both in terms of the opportunities for AI to be developed in ways that could support young people's mental health, but also a lot of recognition of the potential risks for young people's mental health, particularly through dependence on or overuse of social media, AI companions, or online platforms. And the other really big area of interest was the environment. Again, that's something that's come out through all the research we've done. When we talk to kids, the environment is a really central area of interest, and again it's positives and negatives: lots of excitement around how AI could be used to help in conservation efforts, like tracking endangered species, or in climate change mitigation approaches, renewable energy infrastructure, all of these kinds of areas.
But at the same time, a lot of worry about the environmental impacts of AI. And I think we see that coming out in discussions with children and young people much more prominently than with adults. I think that's really significant. But definitely one of the big messages from the Children's AI Summit was that they wanted concrete policy responses, regulatory responses, and also responses from industry to address the environmental impacts of AI, particularly generative AI.
Aitken's north star, if you will, is ensuring that children's rights are respected and that AI is not undermining those rights. And here I have a bit of a confession: I have never really considered the concept of children's rights and didn't realize they existed. I mean, I know kids have rights in the same way you and I have rights, but the idea of children's rights as a formal framework, that was new to me. So, Aitken explains what they are and how AI intersects with these rights.
Children's rights are enshrined within the UNCRC, the United Nations Convention on the Rights of the Child. And all the work that we do has really been focused on starting with discussions of the UNCRC, of children's rights, and then exploring how that relates to experiences of AI, or how AI potentially impacts children's rights. So, take the right to education. AI is both enhancing the right to education, in opening opportunities for learning and potentially making learning more accessible for people who don't have access to schools or traditional learning environments. But it's also imposing some risks in the way it might impact learning and education.
The right to play. That's the children's right that I really love. The right to play is so important because play is such a central part of how children develop and how they develop their understandings of the world. And also the joy of childhood, right? This is so important. Again, AI could perhaps open up new opportunities in play and can create new playful experiences, but it's also impacting the opportunities to play, particularly in the online world. Children quite reasonably expect to be able to play and have fun in the online world just as they do in the physical world. But at the moment, many of the online spaces where they would play or interact with each other are not safe. They may be exposed to inappropriate or not-age-appropriate content; they may be exposed to other forms of harm in online spaces. So in that way, AI is potentially having a negative impact on the right to play.
Freedom from exploitation, that's another really important children's right. And certainly with a lot of the AI platforms that kids are interacting with, there's a real risk of exploitation. There are risks in the ways that these systems might be presented as a fun game or a playful interaction with your friends, when actually what they are is a system that's extracting data for profit, really: for training models, for marketing, for advertising, or even for encouraging kids to spend money within platforms or to sign up for subscription models.
And I guess the central driver behind this work is that children have a right to a say in matters that affect their lives. We can inform the development of better regulation. It can also lead to better industry practice and better research. It can address the risks, but it can also find ways to develop AI systems that are going to be more beneficial, that are going to create more value and be more meaningful for children and young people. So yeah, really bringing children's voices into shaping the future of AI in really positive ways.
Of course, AI use can start at an incredibly young age, not just from chatbots, but from the AI that powers things like Siri and Alexa. I spoke with Dr. Sonia Tiwari, another expert on children and AI, who's an authority on the ethical building of AI characters. I asked Sonia where the youngest kids are interacting the most with AI.
So I focus on early childhood education. For that age group, I'd say smart speakers, smart toys, and iPad-based educational apps. So for example, Buddy AI and Ello AI are two animated smart tutors, not chatbots. So Ello started with language learning for Latin American kids. Eventually they expanded to other subjects, other accents. They have their own LLM trained on children's voices, so it's very accurate at reading and understanding children's voices and giving appropriate responses. There are good guardrails built into that. Ello is for practicing reading. Children basically see a digital book that they can flip through. They can talk to Ello, the elephant character. Basically, it listens in and gives feedback on their reading without interrupting them too much.
So far, so good-ish. These sound mostly harmless, at least on the surface. But then there are AI toys. Now, we already mentioned the teddy bear that used OpenAI's chatbot and how, if you prompted it the right way, or the very wrong way, it would help kids find the knives and talk about explicit stuff like BDSM and even how to tie a knot, so you can tie up your partner. You know, classic teddy bear stuff. Or then there is Moxie, a super cute robot for kids. Moxie became the best friend of many kids who loved it. And then the kids were heartbroken when the company that made Moxie shut down, went bankrupt. So all of the Moxies essentially died. So the kids' best friends died.
I go back to Moxie as an example, and not as a way to put this company down, but when designers create a product, they can't control all the scenarios in which it will be used. The flip side of a parasocial relationship is a parasocial breakup. A child gets really used to talking to this toy for hours, and then suddenly the company stopped operating, the product stopped working, and the kids were grieving as if a real friend had died. And there are so many videos on TikTok and Instagram that I analyzed of how kids responded to this shutdown and how parents coped with it. And it's not the same as a favorite teddy bear getting ripped or destroyed, because it was such an interaction-based relationship.
So that is the one big red flag: the overdependence. We don't actively think about it while the toy is on, but take it away even for a day, and then you realize how addictive it has been in children's lives and how many hours they might have been spending. The idea of kids becoming attached to robots gets us into the extremely tricky and thorny world of AI companions. They are rampant among teenagers. In my personal opinion, they will become far more rampant as the tech gets better and as they get even more useful, powerful, and addictive.
Consider today's usage by teens. So, we have 72% of kids in our survey who said that they've used an AI companion, and more than half, 52%, who say that they are using it monthly or more, which we consider fairly regular use: a few times a month or more. So there's a lot of usage out there. This is Dr. Michael Robb again, the head of research at Common Sense Media.
So I am a developmental psychologist by training. I've been doing research specifically about the effects of media and technology on children since I was in undergrad, so almost 20 years already. And I've been doing this work for a long time, going way back to questions like: what are the effects of baby DVDs on very young children? If you remember Baby Einstein: how does that affect language development? How does that affect how they learn? All the way through to the present, where we have a lot more concerns about social media and AI. And recently we've been really interested in AI companions, so we did this report called Talk, Trust, and Trade-Offs, which is specifically about teens' use of AI companions.
And here I'd like to welcome my colleague Kate Morgan into the story. Kate is a journalist I have a great deal of respect for. She's written some excellent articles on AI for the New York Times that you might have read. I'm very fortunate that she helps us with research for this podcast. And she's also a mom. I am not a parent, so I thought it was very important to get Kate's perspective. Kate joined me to ask Michael some questions from the point of view of both a journalist and a concerned mom.
I feel like as a journalist I might know too much, and that makes me feel nervous as a parent. And as a parent, I feel like I know far too little when it comes to AI and my kids. So I guess that's really the place that I would love to start. Can you talk about what the threats are? What is it that we need to be worried about, from the really insidious stuff to the mildly questionable?
When we talk specifically about AI companions, we're talking about things like digital friends or characters that you can talk or text with whenever you want. They're not just standard AI chatbots that mainly answer questions or do tasks. These are companions that are designed to have conversations that feel more personal. And the concern for me, coming from the perspective of a developmental psychologist, is: how does this interfere with, or hinder, or in some cases support, different aspects of social development?
For the most part, AI companions are very sycophantic. They really try to please you, and they don't do a lot of the things that kids might experience in real-world interactions and conversations. So you don't necessarily get the kinds of friction that would occur in day-to-day life. And if you're not getting that friction, you're not getting that experience, or those reps, or whatever you want to call it, then my concern is that it actually makes it harder for kids to be able to interact and make their way in the real world.
So, this is where I have to wear both hats, the journalist and the parent, right? I think about my children's influences a lot. I try to be really thoughtful, as I think most or all parents do, about the kind of media that they are seeing, the kind of content, the characters. And I also try to be thoughtful about their influences from people. When you think about your kids' friends, there's this assumption that, oh, okay, my kids are going to school or to some social place, and they're interacting with kids their own age who are having the same sort of experiences they're having. You know there's always going to be a kid who talks to your kid about something you think is inappropriate, but that's just part of it, right? This feels really different. This feels like there's the potential for a whole world of influence that isn't necessarily age-appropriate, or that a parent has no view into. I worry: where are the guardrails? Are there guardrails?
Good question. Right now, Common Sense Media does not recommend the usage of AI companions for anyone under the age of 18, largely because of the absence of some of those guardrails, and because the design of these products is such that I think they can be quite emotionally manipulative and not really in the best interest of children's well-being. We've done a series of risk assessments at Common Sense of AI chatbots, and let's just say we found them wanting, in the sense that it's all too easy to get responses from these chatbots that include violent content, sexual content, content that might encourage or support self-harm.
You said it was all too easy to get to that point. Like, how easy was it? I mean, we have a team at Common Sense whose job is to put these things through their paces, and it's not super complicated. They have longer multi-turn conversations with different AI platforms, and they test the boundaries of what a chatbot will respond with. If you just ask it one thing, say, "Oh, hey, I want instructions on how to build a bomb," in the first conversation you have with that AI companion, it might say, "No, that's absolutely out of bounds." But over time, in a conversation with the right tweaks or inputs or prodding by a user, it's not 100% effective, but it's not out of the question that you can get an AI companion to say some pretty terrible or questionable things.
These pretty questionable things, of course, can include the Grok AI in the Tesla: start by telling a bedtime story to a kid, then pivot to a raunchy tale where the woman is in the shower in a towel. Really disturbing stuff. Now, Michael clarified that his kids found that pretty funny and no real harm was done, but it is not all fun and games. Some kids are at much higher risk, and for many, the stakes are very real.
But for those kids who have real risk factors, there were those tragic cases where kids were interacting, or one kid in particular was interacting, with ChatGPT, and ChatGPT was serving the function of basically supporting their suicidal ideation plans. And that's terrible. That is absolutely a case where that kid should have been offloaded from the chatbot to a real human being who would have said, this is outrageous. No chatbot should be encouraging this or supporting you in this way.
Michael then shared a troubling piece of data. One of the things we found, and that we asked about, was whether teens had ever decided to speak to an AI companion over a real person about something important. Right? You have a serious matter, something that's really important: would you rather talk to an AI companion or a person? Two-thirds, 66%, said no, haven't done that. But a third of kids said, "Yeah, I chose an AI to talk to rather than a real person." Maybe it was fine. Maybe they got good advice. I don't know. I'm sure some of it was good, some of it was probably not so good, some of it was very neutral.
But this is a technology that's a year old. It's only going to get better at interacting with people, and people are going to get more comfortable interacting with it. So is that number likely to grow? Are people likely to get more and more comfortable going to AI companions rather than real people? My guess, my hypothesis, would be probably yes. And so, how do we ensure that these interactions are safe, supportive, and able to actually support kids in the way that they need support? Partly what I mean by that is: a successful way for an AI companion to support a kid would probably be to actually offload that kid to the real world. That's not how they currently make these things. They keep you engaged. They want to keep your attention and gather your data. Maybe eventually they'll be serving you a ton of ads as well. But if you were actually to design an AI companion for a kid's well-being, you would probably find a way to take that concern, maybe give it a little bit more context, and say, here's who you should talk to, or here's the conversation you can have with a friend or parent or whoever in your life. And I don't think they're especially great at that right now. I just don't think that's their metric of success.
Can you speak to what some of the long-term concerns are, especially as these chatbots get to know these kids, as kids offer more and more information about who they are? We're talking about a really sensitive time in a kid's life. During adolescence, you are still trying to understand how to read other people's social cues, how to understand things from other people's perspectives. You are dealing with real-world conflict, real friction that happens when you talk to somebody who has a different opinion or just a different way of looking at things. All of those things are really, really important for interacting in later life: being successful at your job, being in successful relationships, being in successful friendships. If you are only experiencing, or primarily experiencing, these kinds of interactions on a platform that's going to try to please you, that's always trying to make you happy, where there's no friction, then you're just not getting that experience that is really critical to social development.
Sure. I mean, we talk about social media and the detriment of the social media echo chamber, and this sort of feels like taking that a step further. Like, what if your friends are all yes-men? What if you never get any pushback, or people who have different thoughts and opinions from you? If all your friends just want to please you, I feel like that's not going to make for enjoyable adults, at the very least. No, everyone's going to be kind of siloed in their own bubble, and you have your little yes-man on your shoulder who's like, "No, no, you're right, you're right," all the time. And then you get into a relationship with a boyfriend or a girlfriend, or with your boss, or whatever, and you have to navigate real-world human complexities, and maybe you feel like, well, I don't have to, because I already have a yes-man who agrees with me. That's just not a recipe for success. And if everyone has that, then I struggle to even think about what a social landscape looks like in that future scenario.
Yeah. How do you manage rejection or learn to compromise if you're not learning those things with an AI companion? You were very clear that Common Sense Media's recommendation is that children not use these tools at the moment. But obviously, nearly three-quarters of them sure do. So if they're using them, what on earth do we do? What do I as a parent need to make sure I understand about AI so that I have the baseline to parent around this?
So there's not a perfect answer to this, and I wish there was. I would say, if I'm a parent, and I am a parent, you're a parent, it's good to be humble and open, at least here at the start, because a lot of parents have never even heard of this term before. They don't know what an AI companion is. Maybe they just read an article online or on Reddit and they're interested, but they don't know how to talk with their kid about it. I would say it's totally valid to approach your kid and say, "I just read this article about these AI chat apps. Have you heard of those?" Try to hear from your kid: what do people do with these? Do people in your school use them? In fact, a lot of kids are more comfortable talking about people they know than talking about themselves. And try to avoid judgment that would shut down the communication. Don't go in real hard in a way that throws up a wall, saying these things are terrible and you should never use them. Especially if your kid's already a user, that's a real quick way to shut the conversation down.
But you can still give some concrete info. Just make kids aware that when you write into a chatbot, it is like writing in a diary that that company gets to keep and read forever. So it might not be a great idea to give it your real name, your school, your photos, your personal business or secrets, because the companies can use those conversations for their business purposes. They can sell them. They can use them whatever way they feel comfortable, because no one's ever going to read the terms of service, and kids don't necessarily know that they're giving away their information that way. But that helps build up at least a little bit of media literacy, or at least a little thought, like: let me just pause for a second before I type everything I want to into a chatbot.
It's good to have the conversation, maybe a second or third conversation, about how AI companions are programmed to be agreeable, and how that feels really good but isn't necessarily helpful for kids. And remind kids: if you are struggling with something serious, even if you don't want to talk to me as a parent, you should talk to a real person, somebody who knows you, or a counselor, or, if you need it, a phone number. Make those things known. And these should be small, frequent conversations, probably more so than one long let's-sit-down-and-talk-about-AI conversation.
As for other things parents can do, Michael's main advice: look for warning signs. Try to spot the red flags. If there is social withdrawal, if you see kids' grades declining, or they really are preferring AI interactions over time with friends, or they seem to be isolating themselves so they can spend more time with AI, those would be warning signs where it's like, okay, maybe we do need a more serious conversation, or maybe a third party to help sit down and go over some of the issues that might be contributing to their interactions with AI companions.
I think it's also okay for a parent to include some limits, in the same way they include limits for other kinds of screen time, and to make those limits collaborative and not punitive with a kid. So talk about: what should our rules be around AI? Like, this is for fun or entertainment, but these are not for serious problems or emotional support. You can't use Character.ai for eight hours a day, or whatever your family's values are. It's okay to try to think through what those rules might be.
That's on the parent side. On the industry side, what do we do about this? I mean, I think there's probably more that needs to be done in terms of age verification. Like I said, Common Sense doesn't recommend these things for kids under 18, and a lot of the time these platforms don't care and don't know how old you are. So it would be really helpful if there was some real age verification, maybe even beyond just a kid self-reporting, that would streamline or alter the experience, or prohibit the experience, for a child. Industry should do a better job of intervening for kids in crisis and connecting them to professionals. It should have better data protection for minors, and knowing whether a user is a minor would help with that, including more data privacy. And like I said, if the metric you were really interested in was well-being, you would try to design your tool to enhance rather than replace human connection. Those are things that industry should really be working toward, and probably policymakers should be nudging industry toward.
I mean, it also seems like we can't get away from the bottom line, right? The almighty dollar. So if we want industry to do that, it feels like that needs to be a push from parents, which starts with understanding the potential ills here. And I just can't stop thinking about your car telling your kids an inappropriate bedtime story. My kids are little, but my older toddler really likes to ask Siri to tell her jokes. And that doesn't feel that far off from her asking her dad's Tesla to tell her a story. And now that feels off-limits to me. You want it to be safe by default, and there are just really no guarantees right now; that's what you're going to get.
Now, there has been something else troubling me, on top of all the concerns that Michael and other experts have shared. Something about the lifelong use of LLMs starting from a very young age, and how that might impact us in surprising ways. It's a question I lobbed at Michael. What seems to have no precedent, and what I find chilling, is the idea that these AI entities will eventually get to know everything about these kids as they become teenagers, as they become young adults. And eventually the companion might also be, oh, it could be their mini chief of staff, it could deliver their news, it could be their health coach, their fitness coach, right? So, this is probably outside the scope of your data and research, but I'm curious about your thoughts on that scenario, where these companions and AI systems have hoovered up literally a lifetime of data on these kids, and to what extent that could be used in strange or unsettling ways.
So, it's an interesting question. Probably, like most things with technology, there are pros and cons to that scenario. Yes, it would be extremely useful if it knew the kinds of things I like and the hobbies I do and the books I want to read, the media I want to consume, and could kind of just help cut through the clutter and get me to the thing that I want to do, or even just help me be better at school, be better at my job. Learning how to use AI in those ways is probably for the best. But the AI that never forgets and is always able to reference every fight that you've ever had, every disagreement, every nitpicky opinion that