Growing up with AI: What are the privacy risks for children?
AI is playing an increasing role in children's lives, fundamentally reshaping their everyday experiences and the places where they spend their time – from home and school settings to many public spaces. But what impact is it having on their right to privacy?
Artificial intelligence – or “AI” as it’s commonly known – is an emerging technology that can spark intense reactions about its potential applications, from enthusiasm to fear. But one topic is rarely mentioned: The impact of AI on children.
AI technologies already surround children, from interactive toys and virtual assistants to video games and educational software. It’s both common and accepted for algorithms to provide recommendations to children on what videos they should watch next, what online content they should read, what music they should listen to, and even who they should be friends with.
Some AI-powered applications more visible than others
According to Noah Zon, one of the lead researchers of a project by CSA Group that was funded by the Office of the Privacy Commissioner of Canada (OPC) – “Children’s privacy in the age of artificial intelligence” – AI shows up in the operations of systems and applications with which children of all ages interact, some less visibly than others.
“Less visible examples include the algorithms behind YouTube, YouTube Kids, and other services that serve up content, make decisions about it, and track information,” notes Zon. “Facial recognition, which is AI-powered, exists all around children, but we are often unaware of it.”
More explicitly, says Zon, AI is already being deployed in certain interactive toys, like Mattel’s Hello Barbie, which relies on algorithmic data collection and voice recognition. It’s also showing up in educational settings, with secondary schools now commonly relying on AI-based anti-plagiarism software, and the International Baccalaureate Program using AI-based services to evaluate the schoolwork of young people during the pandemic.
Privacy risks sparked by the scope and scale of AI data collection
Of course, it’s not as though AI-powered applications are the first to collect data about children. But according to Zon, it’s the sheer scope and scale of AI’s data collection capabilities that spark new privacy risks.
“AI relies on and therefore incentivizes the collection of large amounts of data to feed algorithmic decision-making and ‘train’ artificial intelligence. So, it ‘hoovers up’ much more data and much more sensitive data, and also lacks transparency and oversight into how it makes decisions.”
Used responsibly, says Zon, AI technology has the potential to improve the wellbeing of children. Examples he cites include AI-powered learning applications, health and environmental research, and content moderation that can make Internet spaces safer. But in the absence of effective interventions, this extensive data collection and the resulting AI-based “decisions” could have significant and negative impacts on children's current and future lives.
“The idea of handing over consequential decisions to AI can play into children’s lives in different ways,” says Zon. “There are cases in the justice system and public safety systems, like child protective services, where data is being used algorithmically to assign risk scores that influence decisions about child welfare or other justice- and probation-related matters, which can have very consequential impacts on children’s lives.”
Policy responses mostly adult-centric
Despite the potential risks to children, Canadian policy responses to AI and digital privacy have remained mostly adult-centric, neglecting the privacy rights, distinct needs, and unique circumstances of children. The research project showed that targeted action is needed to close this critical strategic gap.
“Canada, like many countries, has tended to overlook children’s rights and has been slower in responding to some of the needs related to AI than, say, Europe or Australia. But we are not unique in that children’s rights related to digital privacy are not heavily embedded in policy or practice,” explains Zon.
A recent UNICEF report referenced in the project reviewed 20 national AI strategies and found only minimal discussion of children. Even Canada’s own private sector privacy law, PIPEDA, contains only one reference to minors.
Building on a previous CSA Group research project on Children’s safety and privacy in the digital age, the research team aimed to respond to policy and solution gaps around the rising adoption of artificial intelligence on two fronts: The need to better understand evolving AI technology and its applications, and the need to fill a longstanding gap in approaches to digital privacy specific to the unique needs, interests, and rights of children.
“This is new territory and children’s privacy needs are by nature under-explored, so we wanted to get insights from a variety of experts,” says Zon. “CSA Group’s goal was to conduct policy research and develop complementary documents that would engage policymakers around this topic. We wanted to come up with constructive solutions, which often requires different and more creative approaches.”
To bring a variety of perspectives together, CSA Group created an advisory panel comprising children's advocates, researchers, and industry experts. The panel provided input into both the direction of the research and the project’s analysis and recommendations.
The research team also conducted direct interviews with experts to understand how AI systems work and their potential impacts, and held collaborative workshops to explore the specific impacts of AI on children. Rounding out the project was a close examination of how other jurisdictions have responded to AI issues, and a comprehensive review of the research literature, from both academic and other sources.
Consent is not the solution
One key finding was that the mechanism of consent does not provide a solution to AI-related privacy issues that can affect young people. Typically, children under the age of 18, or in some cases under the age of 13, are not able to provide consent under current legislation. Parents must consent on their children’s behalf, but consent is complex, and such decisions could have long-lasting and unforeseen implications. And in the absence of “right to be forgotten” laws, any resulting AI data collection would likely be hard to undo.
“Parents might consent to something that can’t be undone by their child, and there are currently no easy technical ways to remove personal data from AI systems.”
“There's a need to be more fine-grained in our understanding that children have privacy rights separate from their parents,” comments Zon.
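To see why removal is so technically difficult, consider the minimal sketch below, written in Python with scikit-learn and entirely made-up numbers: training blends every record into shared model parameters, so there is no individual row inside the model to delete, and honouring a removal request generally means retraining from scratch.

```python
# A minimal sketch (hypothetical data) of why removing one child's record
# from a trained AI model is hard: the model keeps only aggregated
# parameters, not the original records.

from sklearn.linear_model import LinearRegression

records = [[1.0], [2.0], [3.0], [4.0]]  # one made-up input per child
labels = [2.1, 3.9, 6.2, 8.1]           # made-up outcomes

model = LinearRegression().fit(records, labels)
print(model.coef_, model.intercept_)    # a blend of ALL four records

# There is no "record #3" inside the model to delete. Honouring a deletion
# request means retraining on the remaining data, which yields different
# parameters – something few deployed systems are built to do routinely.
retrained = LinearRegression().fit(records[:2] + records[3:],
                                   labels[:2] + labels[3:])
print(retrained.coef_, retrained.intercept_)
```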
For now, the report recommends that specific policy interventions should occur at three phases in the lifecycle of an AI-based technology: Before it gets into the marketplace, before it's deployed, and while it's being used. On top of that, there is a need for more general oversight and accountability.
Privacy by design
Intervening before something hits the market – during the design phase – is also known as “privacy by design.” This could include restricting the collection of certain information, along with other governance measures built right into a product.
“The more we can engage in privacy by design, and the more we can have productive and constructive engagement between companies and policymakers and regulators, the fewer challenges we will have to mitigate down the road. If you design a product with children's needs in mind, then you're not collecting more data than you need to, for example, which can lessen the need to manage and govern the personal data collected in the future.”
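What might that look like in practice? One common privacy-by-design pattern is data minimization: the product defines an explicit allow-list of the fields it genuinely needs and discards everything else before storage. The Python sketch below is a hypothetical illustration; the field names are invented, not drawn from any real product.

```python
# A hypothetical sketch of data minimization, one building block of
# "privacy by design": keep only an explicit allow-list of fields and
# drop everything else before it is ever stored.

ALLOWED_FIELDS = {"session_length_sec", "feature_used", "app_version"}

def minimize(raw_event: dict) -> dict:
    """Return only the fields the product genuinely needs."""
    return {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}

raw = {
    "session_length_sec": 312,
    "feature_used": "story_mode",
    "app_version": "2.4.1",
    "voice_recording": b"<audio bytes>",  # never stored: not on the allow-list
    "precise_location": (45.42, -75.69),  # never stored
}

print(minimize(raw))
# {'session_length_sec': 312, 'feature_used': 'story_mode', 'app_version': '2.4.1'}
```

Because the restriction lives in the product’s code rather than only in a policy document, there is simply less personal data to manage, govern, or delete later – the point Zon makes above.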
Most manufacturers of children's products are not software designers or cybersecurity experts, yet they're introducing new AI functions into their products. Having some best practices and guidance to follow would likely contribute to better outcomes.
“Just as parents and children need to be supported with greater capacity and information, and policymakers and regulators need to be supported to respond to rapid changes in technology and make appropriate decisions, manufacturers also need to be supported to design products appropriately,” comments Zon.
In the end, says Zon, designing products with children in mind could have a much broader positive impact.
“Interestingly, a product that is designed well for children's needs is in fact often a product that is designed well to consider the informed privacy choices and consent for people of all ages.”
What Does Artificial Intelligence (AI) Mean?
Artificial intelligence, also known as machine intelligence, is a branch of computer science that focuses on building and managing technology that can learn to autonomously make decisions and carry out actions to support human beings.
AI is not a single technology. It is an umbrella term that includes any type of software or hardware component that supports machine learning, computer vision, natural language understanding (NLU), and natural language processing (NLP).
Today’s AI uses conventional hardware and the same basic algorithmic functions that drive traditional software. Future generations of AI are expected to produce new types of brain-inspired circuits and architectures that can make data-driven decisions faster and more accurately than a human being can.
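To make the “learning” part concrete, the hypothetical sketch below uses Python and scikit-learn to show the core idea of machine learning: instead of a programmer writing explicit rules, the software infers a decision rule from labelled examples. The toy data is invented purely for illustration.

```python
# A toy illustration of machine learning: the program is given labelled
# examples and infers a decision rule, rather than having the rule
# hand-coded. All numbers here are invented.

from sklearn.tree import DecisionTreeClassifier

# Each row: [daily_screen_hours, age]; label 1 = flag for a break reminder.
X = [[1.0, 8], [2.0, 10], [5.0, 9], [6.0, 12], [7.0, 8], [0.5, 11]]
y = [0, 0, 1, 1, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rule now generalizes to a case it has never seen.
print(model.predict([[4.0, 9]]))  # [1] – closer to the long-session examples
```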
AI-Related Risks to Children’s Privacy: Three Main Areas
The research report that resulted from this project funded by the OPC identified three main areas of AI-related children's privacy risks: Data risks, function risks, and surveillance risks.
AI needs a lot of information, and it uses that information to make decisions in ways that can be difficult for humans to understand or reverse. Those decisions can have a profound impact on children’s lives and prospects.
The report’s authors recommend strategic interventions to address these risks throughout the AI “lifecycle”, from design, through technology adoption and use, to monitoring and accountability systems. For all recommended actions, policymakers need to consider the unique needs of children and engage children themselves in policy development.
Disclaimer: The OPC’s Contributions Program funds independent privacy research and knowledge translation projects. The opinions expressed by the experts featured in this publication, as well as the projects they discuss, do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.