Cinthya Laky, founder of Style and Byte, talks with European jurist Blanka Halasi about the Children’s Online Privacy Code and the challenges of protecting sensitive data in the age of AI — from biometric identifiers to digital autonomy.
The advent of artificial intelligence has ushered in a new era in data management, bringing to light ethical and legal dilemmas that demand prompt responses. Automated decision-making, facial recognition systems, and biometric identification now pose ever-present data protection challenges. And if we adults are not prepared for them, how could our children be?
Australia’s 2025 Children’s Online Privacy Code addresses precisely these questions: its purpose is to protect the most vulnerable users – children. The Children’s Online Privacy Code does not merely set technical conditions for data collection; developed in collaboration with child protection professionals, it also represents a shift in mindset: the processing of children’s data must always prioritize the “best interests of the child.”
In this latest installment of our series on Australian privacy reform, we explore the topic with privacy expert Blanka Halasi. She helps us understand why biometric and behavioral data are considered particularly sensitive, how the Australian regulations align with European data protection models, and what impact they may have on AI-based services like ChatGPT.
AI and Data Security – Handling Particularly Sensitive Data
Australia will introduce the Children’s Online Privacy Code in 2025, alongside stricter anti-doxxing regulations, with a special focus on the protection of biometric data. You have also worked extensively on this topic. Could you specify what this area entails?
Question: Why are data types such as facial recognition or biometric information considered particularly sensitive? What exactly are biometric data?
Answer: Biometric data are unique physiological or behavioral characteristics that enable the unequivocal identification of an individual, such as:
- facial patterns (e.g., Face ID on our phones),
- fingerprints (smartphones, workplace access systems),
- voice profiles (Siri, Alexa, or other voice assistants),
- or even DNA sequences.
These data are extremely sensitive because they are closely linked to our physical integrity and, unlike a password, cannot be changed over the course of our lives. Consequently, in the event of a data breach, affected individuals may face long-term, potentially irreversible vulnerabilities.
The European Union’s data protection framework, the GDPR, provides particularly strict protection for this information. Under Article 9 of the GDPR, biometric data fall into a specially protected category because they directly affect human dignity and the right to privacy. Accordingly, their processing is only permitted under very strict conditions, such as explicit and informed consent from the individual.
In parallel, the EU’s new AI regulation, the AI Act, classifies applications capable of real-time, remote identification of individuals as particularly high-risk. These tools have the potential for mass surveillance, fundamentally endangering anonymity and individual freedoms.
For this reason, the AI Act only permits such technologies in strictly defined, legally specified cases, such as counter-terrorism investigations.
Both regulations aim primarily to prevent the biometric markers of our bodies from becoming “data sources,” thereby undermining
- personal autonomy,
- privacy,
- and fundamental rights.

Question: What can you tell us about the proposed rules of the Australian Children’s Online Privacy Code? Why is it particularly important for the protection of children?
Answer: A key element of Australia’s data protection reform is the strengthening of children’s digital rights, with the central planned instrument being the Children’s Online Privacy Code (COPC).
The proposed code aims to provide special regulation for the protection of users under 18, particularly regarding data collection and processing online.
Concrete examples of data collection involving children in daily life include:
- Educational apps that track how long a child spends on tasks or what level they have reached in a game.
- YouTube Kids or TikTok, where personalized content is shown based on the child’s interactions.
- Interactive toy robots that learn a child’s behavior and preferences.
The code draft was developed by the Office of the Australian Information Commissioner (OAIC), in close cooperation with child protection and data rights organizations, and forms a cornerstone of the comprehensive Privacy Act reform coming into effect in 2025.
One of COPC’s most important innovations is that data processing operations concerning children are regulated according to the child’s best interests.
Data controllers will need to design their services to collect the minimum amount of data by default, similar to Article 25 of the EU GDPR. Data from children under 16 may only be collected with parental consent, and the code strictly prohibits behavioral advertising and targeted marketing to children, given the risk of psychological manipulation these practices pose.
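To make the data-minimization principle more tangible, here is a minimal sketch in Python of how a service might encode such defaults for a child account. The class names, fields, and the under-16 consent threshold mirror the draft code’s ideas but are illustrative assumptions, not an official implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "data minimization by default" for a child account,
# in the spirit of GDPR Article 25. All names and fields are assumptions.

PARENTAL_CONSENT_AGE = 16  # COPC draft: under-16 collection needs parental consent

@dataclass
class PrivacySettings:
    behavioral_advertising: bool = False  # prohibited for children
    profiling: bool = False               # off unless a lawful basis exists
    location_tracking: bool = False       # collect the minimum by default
    analytics: bool = False

@dataclass
class Account:
    age: int
    parental_consent: bool = False
    settings: PrivacySettings = field(default_factory=PrivacySettings)

def may_collect_personal_data(account: Account) -> bool:
    """Children under the consent age require verified parental consent."""
    if account.age < PARENTAL_CONSENT_AGE:
        return account.parental_consent
    return True
```

The point of the sketch is that privacy-protective values are the defaults: data flows only begin once a lawful basis, here parental consent, has been explicitly established.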
The Australian code is strongly inspired by the United Kingdom’s 2020 Age Appropriate Design Code, which has already compelled major tech companies, such as TikTok and Google, to adopt more child-friendly data practices.
Following this model, Australia aims to create a regulatory environment where online platforms are required to implement a “child-centered design principle”:
- For instance, it will be forbidden to track children or to use manipulative UX tools (e.g., “dark patterns”) to coerce them into sharing data.

COPC’s significance lies in recognizing children’s limited autonomy not only in legal terms but also from a developmental psychology perspective.
Children often do not understand the consequences of having a profile created about them via biometric data, location tracking, or facial recognition software. The Children’s Online Privacy Code clearly aligns with European data protection thinking, where the child is not only a “user” but also a bearer of rights deserving protection.
Question: What impact could the new Australian rules have on language models like ChatGPT, which also process personal data?
Answer: Australia’s data protection reform can clearly affect these systems. The reform’s fundamental goal is to
- strengthen individual rights,
- increase accountability,
- and establish clear standards for data controllers,
- especially in areas where new technologies, such as generative AI, lack clear regulations.
The changes also bring closer convergence with the European data protection model, aligning with GDPR and AI Act principles.
Why is this important? Which systems might be affected?
- If a child chats with ChatGPT, the system could infer their interests or behavior from their messages.
- Educational platforms that tailor content through profiling.
- Interactive AI games that “learn” from the child’s activity.
One of the key elements of the amendment is that it strengthens the rights to access and deletion. These rights already existed in Australia, but in practice were limited and often theoretical.

Under the reform, all organizations processing personal data – including services like ChatGPT – must clearly and comprehensibly inform users about what data is collected, how it is used, with whom it is shared, and what remedies are available. The law emphasizes that this information must be presented in an “understandable, transparent, and non-misleading” manner, in line with Article 12 of the EU GDPR.
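For a development team, this transparency duty can be modeled as a structured, user-facing notice. The sketch below is a minimal Python illustration of the four disclosures named above; the field names and example values are assumptions for this example, not statutory text.

```python
from dataclasses import dataclass

# Hypothetical model of the disclosures a privacy notice must cover under
# the reform: what is collected, how it is used, with whom it is shared,
# and what remedies are available. Field names are illustrative.

@dataclass
class PrivacyNotice:
    data_collected: list[str]   # e.g., ["chat messages", "account email"]
    purposes: list[str]         # e.g., ["service improvement"]
    shared_with: list[str]      # e.g., ["cloud hosting provider"]
    remedies: list[str]         # e.g., ["access request", "deletion request"]

    def as_plain_language(self) -> str:
        """Render the notice as short, readable text (in the spirit of GDPR Art. 12)."""
        return (
            f"We collect: {', '.join(self.data_collected)}. "
            f"We use it to: {', '.join(self.purposes)}. "
            f"We share it with: {', '.join(self.shared_with)}. "
            f"Your options: {', '.join(self.remedies)}."
        )
```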
For generative AI services, this also means that if the system generates, learns from, or stores personal data, users have the right to request its deletion and to access information about its nature and use.
The law stipulates that if the legal basis for data processing ceases (e.g., if the user withdraws consent), the data must be deleted or, where deletion is not feasible, anonymized so that it can no longer be linked to the individual.
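As an illustration, the sketch below shows one way a service might act on a consent withdrawal under these rules: records that can be deleted are dropped, while records that must be retained are stripped of identifiers. The record layout and field names are assumptions for this sketch.

```python
# Hypothetical sketch: when a user withdraws consent, delete their records
# where possible and anonymize those that must be retained (e.g., for
# aggregate statistics). Record layout and field names are assumptions.

def handle_consent_withdrawal(user_id: str, records: list[dict]) -> list[dict]:
    retained = []
    for record in records:
        if record.get("user_id") != user_id:
            retained.append(record)          # unrelated record, keep as-is
        elif record.get("must_retain"):
            retained.append({                # keep, but strip identifiers
                "user_id": None,
                "event": record.get("event"),
                "must_retain": True,
            })
        # otherwise: drop the record entirely (deletion)
    return retained
```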
“Another important aspect of the reform is enhanced protection for children.”
The Australian Children’s Online Privacy Code draft states that users under 18 receive special protection, and data processing for children under 16 requires parental consent. This means generative AI services (such as ChatGPT) must implement age verification systems and ensure that no targeted data collection, profiling, or advertising occurs for children.
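What might such a gate look like in code? The sketch below is a deliberately simplified illustration; the function and flag names are hypothetical, and in reality reliable age verification is far harder than a single verified-age parameter suggests.

```python
# Hypothetical sketch of an age gate for a generative AI chat service:
# no profiling or targeted advertising for minors, and users under the
# parental-consent age are blocked until consent is verified.

ADULT_AGE = 18
PARENTAL_CONSENT_AGE = 16

def session_policy(verified_age: int, parental_consent: bool) -> dict:
    if verified_age < PARENTAL_CONSENT_AGE and not parental_consent:
        raise PermissionError("Parental consent required for users under 16")
    return {
        "profiling": verified_age >= ADULT_AGE,     # never profile minors
        "targeted_ads": verified_age >= ADULT_AGE,  # never target ads at minors
        "shorter_retention": verified_age < ADULT_AGE,
    }
```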

Conclusion
I have seen many times how a lack of education can have serious negative consequences. And now, when new apps, software, and language models are released daily, and the average person knows very little about them, education becomes even more critical.
If we do not understand how the technologies that already actively shape our lives function – such as Face ID unlocking our phones or educational AI apps tracking our children’s progress – we cannot make responsible decisions, nor can we teach our children conscious and responsible use.
Therefore, I believe awareness is the first step: we must understand how our data is used and what rules are behind it. What protects us and what protects our children?
I consider both the protection of children and the establishment of global legal frameworks to be urgent. By involving child-focused professionals in shaping these frameworks, Australian legislation is taking a significant step toward a global synthesis of the GDPR, the AI Act, and other data protection laws, forming a foundation for our future.
Indeed, if global data protection and technological development can combine innovation with human values and ethical standards, we can create a digital environment where we – women, mothers, professionals, and users – can live and develop freely, as can our children.