Cinthya Laky, founder of Style and Byte, continues her conversation with European jurist Blanka Halasi — this time exploring how mandatory Data Protection Impact Assessments (DPIAs) shape the future of high-risk AI systems, balancing innovation with responsibility in Australia’s latest privacy reform.
The development of artificial intelligence is not only reshaping technological boundaries but also legal and ethical ones. The issue of data management and its responsibilities is now far more complex than it was just a few years ago. After all, decisions made by AI systems can impact people’s lives, safety, and rights.
But there’s another side to this matter!
What about the innovative solutions developed by companies that lack the capacity to comply with stringent regulations?
Innovation or protection? Can a balance be achieved?
Australia’s latest data protection reform mandates the use of Data Protection Impact Assessments (DPIAs) for high-risk AI systems.
The purpose of a DPIA is to identify potential data processing risks at the outset of development, preventing legal violations and making technological innovation safer and more transparent.
We analyze the details and impacts of the Australian reform with the help of data protection expert Blanka Halasi, who explains how DPIAs are becoming a key tool in the ethical development of artificial intelligence and whether a balance can be struck between data protection and innovation.
Data Protection Impact Assessment
The Australian reform mandates the use of Data Protection Impact Assessments (DPIAs) for high-risk AI systems, in line with a risk-based approach.
Question: What is a Data Protection Impact Assessment (DPIA), and why has it become particularly important in the Australian reform?
Answer: A Data Protection Impact Assessment (DPIA) is a preliminary risk analysis process aimed at identifying and addressing potential data protection and security risks related to personal data processing.
During a DPIA, organizations must assess how data processing may affect individuals’ rights and freedoms and take necessary measures to mitigate or eliminate these risks. The recent amendment to Australia’s Privacy Act 1988, the Privacy and Other Legislation Amendment Act 2024 (Cth), introduced significant changes to data protection regulations.
At the core of these amendments are high-risk data processing activities, particularly in the context of artificial intelligence (AI) applications and the protection of children’s online data.
The law mandates that a DPIA must be conducted before such activities to ensure the protection of individuals’ rights and the secure handling of personal data. This stricter regulation plays a pivotal role in the development of AI systems, as these applications often process vast amounts of personal data and can significantly impact individuals’ rights and freedoms.
Making DPIAs mandatory ensures that data protection and security are prioritized during AI development, promoting the responsible and ethical use of technology. This, in turn, fosters greater trust in AI systems, enabling innovation to flourish.

Question: What does it mean for an AI system to be considered ‘high-risk’? How does European regulation define this? Can you provide examples?
Answer: High-risk artificial intelligence refers to systems whose use can have a significant impact on people’s lives, safety, or fundamental rights. As such, regulations impose strict requirements on them.
Typical examples include:
- Autonomous vehicles, where a flawed decision could result in loss of life,
- AI systems supporting diagnostics or treatment, where an incorrect recommendation could lead to serious health harm.
For such systems, developers must maintain detailed documentation, ensure transparency in how the algorithm operates, and guarantee data quality and protection.
Question: Do you think such stringent regulations encourage or hinder the development of AI systems?
Answer: This is a highly complex question, because strict regulations, such as DPIAs and requirements for high-risk AI systems, can simultaneously encourage and hinder innovation.
On one hand, strict regulation is essential because it mandates transparency, responsible development, and safety. In the long term, this builds user trust in AI technologies and improves the reliability of AI systems, which is a significant responsibility in today’s world.
If individuals and businesses can be confident that AI systems meet high data protection and security standards, they are more likely to adopt and use them.
This promotes sustainable and ethical AI development and, over time, the broader adoption of the technology.
On the other hand, excessive regulation—particularly if it leads to complex and costly compliance processes—can hinder innovation, especially for smaller companies or startups that lack the resources to meet intricate legal and technical requirements.
This can slow the market entry of new AI solutions and reduce competitiveness. Software developers and medical professionals, for example, have been working for years to advance image diagnostics, and such compliance burdens currently slow progress in medical science as well.

Question: How does a Data Protection Impact Assessment work in practice, and who is responsible for it?
Answer: The Data Protection Impact Assessment (DPIA) process is well-structured and guides organizations through a thorough risk analysis of data processing step by step.
1. First, the data processing operation and its purpose are detailed:
- The types of data processed are identified,
- The affected individuals are specified,
- And the technological tools and methods involved are outlined.
This overview lays the foundation for subsequent steps, enabling an accurate mapping of the data processing activity.
2. Next, risks are identified, with a particular focus on protecting the rights and freedoms of individuals.
- Potential dangers, such as unauthorized access, misuse, or mishandling of data, are mapped out,
- Along with their potential impact on affected individuals.
3. The following step involves developing solutions to mitigate the identified risks.
- Examples include data encryption or pseudonymization,
- Strict regulation of access permissions,
- And the implementation of technical and organizational measures.
4. Finally, the results of the entire assessment must be documented, detailing how risks are managed and what measures ensure compliance.
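The four steps above can be sketched as a simple data structure. This is a minimal illustrative sketch, not an official DPIA template: every class, field, and example value below is an assumption chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """A single identified data protection risk and its mitigation (step 2 and 3)."""
    description: str
    impact: str        # potential effect on affected individuals
    mitigation: str    # e.g. encryption, pseudonymization, access control

@dataclass
class DPIARecord:
    """Illustrative record mirroring the four DPIA steps described above."""
    # Step 1: describe the processing operation
    purpose: str
    data_types: list[str]
    affected_groups: list[str]
    tools: list[str]
    # Steps 2-3: identified risks together with planned mitigations
    risks: list[Risk] = field(default_factory=list)

    def document(self) -> str:
        """Step 4: produce the written summary of the assessment."""
        lines = [
            f"Purpose: {self.purpose}",
            f"Data types: {', '.join(self.data_types)}",
            f"Affected: {', '.join(self.affected_groups)}",
        ]
        for r in self.risks:
            lines.append(f"Risk: {r.description} -> mitigation: {r.mitigation}")
        return "\n".join(lines)

# Hypothetical example: a DPIA record for an AI diagnostics service
record = DPIARecord(
    purpose="AI-assisted medical image diagnostics",
    data_types=["health records", "imaging data"],
    affected_groups=["patients"],
    tools=["ML inference service"],
)
record.risks.append(Risk(
    description="unauthorized access to imaging data",
    impact="exposure of sensitive health information",
    mitigation="pseudonymization and strict access permissions",
))
print(record.document())
```

The point of the sketch is that a DPIA is not a one-off form but a structured record that can be kept alongside the system and updated as risks or mitigations change.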
A well-executed DPIA ensures that data protection and transparency remain primary considerations during the development and operation of AI applications, preventing data protection chaos and promoting the responsible use of technology.
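As a concrete illustration of one mitigation named above, pseudonymization can be approximated by replacing a direct identifier with a keyed hash. This is a simplified sketch under assumed names and values, not a compliance-grade implementation: in practice the key would live in a secrets manager, not in source code.

```python
import hashlib
import hmac

# The key is kept separate from the pseudonymized dataset; whoever holds it
# can re-link records, so it must itself be strictly access-controlled.
SECRET_KEY = b"store-this-separately"  # illustrative value only

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Hypothetical record: the raw patient ID never leaves the trusted system
record = {"patient_id": "AU-12345", "diagnosis": "pending"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record["patient_id"])  # a 64-character hex digest, not the raw ID
```

Because the hash is stable, the pseudonymized records remain linkable to each other for analysis, while re-identification requires access to the separately stored key.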
The responsibility clearly lies with the data controller. This means that the organization’s leadership is accountable for ensuring the DPIA is conducted and cannot delegate this responsibility to, for example, system administrators or developers.
However, under the EU’s GDPR, for example, if the organization has appointed a Data Protection Officer (DPO), they must be involved in the DPIA process. The DPO provides professional advice and assesses whether the data processing complies with regulations—but the ultimate responsibility remains with the data controller.
Conclusion
I believe that drawing a clear line between data protection and technological innovation is becoming increasingly difficult. Phenomena such as doxxing or facial recognition demonstrate that personal data protection is no longer just a national issue but a global one.
It is clear that the regular and mandatory use of DPIAs can help identify and manage these risks from the outset of AI development, ensuring accountability and providing a framework for legal recourse.
But is it possible for data protection—such as the regulation of doxxing or facial recognition—to operate under unified principles across different parts of the world, such as the EU and Australia?
Answer: This is a truly interesting question. Regulating doxxing and facial recognition at a global level, for example between the EU and Australia, would be an appealing idea, as the digital space knows no borders, and personal data or facial images can easily cross continents.
A unified regulation would ensure that the protection of individuals’ data is consistent and predictable worldwide, both online and in the real world.
However, in practice, creating a fully unified regulation is nearly impossible, as Australia is not part of a unified organization like the EU. Data protection and technology regulations, as well as the interpretation and enforcement of fundamental rights, vary significantly between countries, making it challenging to establish a single, universally accepted system.
Therefore, a more realistic goal is for countries to enter international agreements and establish common minimum standards for collaboration in data protection and online safety. Alternatively, countries could draw inspiration from each other’s successful and effective regulations and adopt them where applicable.
Cinthya Laky









