Cinthya Laky, founder of Style and Byte, talks with European jurist Blanka Halasi about Australia’s latest privacy legislation.
Blanka’s expertise runs deep in the field of AI regulation, especially where it intersects with privacy—things like automated decision-making, how biometric data is handled, and whether AI systems can be made truly transparent.
AI is moving at a blistering pace. Every leap forward brings fresh opportunities for innovation, but also a growing set of questions—some of them uncomfortable. For women in particular, these challenges are not abstract: bias in AI-driven recruitment tools, questionable data handling in online spaces… the list goes on.
Right now, Australia has decided it’s time to act. The question is, what exactly are they putting in place?
The Australian Privacy Act 2025 represents a major shift. It forces more openness when automated systems make decisions about people, places stricter controls on the use of biometric information, and gives individuals stronger powers, like the right to have their data erased or to know exactly what’s stored about them.
To understand what this means in practice, Blanka guides us through the changes, setting the Australian approach against the backdrop of European rules—think GDPR and the AI Act.
Where does this leave us? Could these reforms push technology toward a future that’s more secure, balanced, and fair?
Automated Decision-making and AI Transparency
Australia’s Privacy Act 2025 brings in a clear rule: if an AI system makes a decision about you, say whether you get the loan you applied for, or whether your CV makes it past the first filter for a job, you have to be told. No fine print, no guessing.
It’s not just about transparency for transparency’s sake. Studies keep showing that AI can, often without anyone realising at first, tip the scales against women. One example that’s stuck in many people’s minds is Amazon’s experimental recruitment software, which quietly learned to downgrade résumés containing the word “women’s”, as in “women’s chess club captain”. That bias wasn’t intentional on anyone’s part, but it was baked into the system all the same.
The Australian reform took effect in 2025, and it has a familiar echo in Europe. The EU AI Act, which came into force on August 1, 2024, also pushes for openness when AI systems are used in decision-making, though the fine details differ.
Question: In plain terms, what does “automated decision-making” mean in the AI world, and how could it show up in everyday life?
Answer: Put simply, it’s when a decision — big or small — is made entirely by an algorithm, or a system powered by AI, without a person stepping in to check, approve, or override it.
Take a chatbot, for example — something like ChatGPT, which is used by millions of people every day. Under the AI Act, this sits near the bottom of the scale, in the limited-risk tier. It still “decides” how to respond without asking a human for help, but the impact of each decision is relatively minor.
Then there are systems at the other extreme — say, diagnostic software that reads MRI scans and offers guidance for medical procedures. The stakes there are far higher. That’s why the AI Act sorts systems by risk level and sets different rules for each tier.
It’s worth remembering that automated decisions often happen in our daily lives without us even noticing.
Take the hiring process, for example. It’s increasingly common for software to scan résumés and filter out applicants based purely on keyword matches, long before a human ever sees the file. In the online world, similar automated systems quietly decide which ads, recommendations, or posts you’ll see in your feed — and which ones you won’t.
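To make the résumé example concrete, here is a minimal sketch of how such a filter can work. Everything in it (the keywords, the threshold, the applicants) is invented purely for illustration; it is not the logic of any real hiring product.

```python
# Hypothetical sketch of keyword-based resume screening: a fully automated
# decision with no human in the loop. Keywords, threshold and applicants
# are all invented for illustration.

REQUIRED_KEYWORDS = {"python", "sql", "machine learning"}  # assumed job criteria
MIN_MATCHES = 2                                            # assumed pass threshold

def screen_resume(resume_text: str) -> bool:
    """Return True if the resume passes the automated filter."""
    text = resume_text.lower()
    hits = sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)
    return hits >= MIN_MATCHES

applicants = {
    "Applicant A": "Five years of Python and SQL, built machine learning pipelines.",
    "Applicant B": "Led data projects end to end and mentored junior analysts.",
}

for name, resume in applicants.items():
    decision = "advance to interview" if screen_resume(resume) else "reject"
    # No person reviews this outcome before it takes effect; that is exactly
    # what makes it an automated decision in the sense described above.
    print(f"{name}: {decision}")
```

The point is simply that the “decision” is nothing more than a string match: an applicant who describes the same skills in different words never reaches a human reviewer.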

The EU AI Act, along with the GDPR, was designed to address exactly these scenarios. The goal is to prevent discrimination and to guarantee human oversight in certain systems. In the case of high-risk AI systems, the AI Act is clear: human review or supervision is essential, and fully automated decision-making — with no human in the loop — is not allowed.
Question: Why is this regulation such a focal point in Australia right now? And how closely aligned are the laws in the EU, the United States, and Australia? Is it fair to talk about a global legal framework in these areas?
Answer: In Australia, this whole topic — automated decision-making and how AI is kept in check — has really moved to the forefront. That’s partly because the digital economy is expanding so quickly here, and new tech solutions are popping up everywhere, often in spaces where they didn’t exist even a few years ago. It means these systems are taking on more influence in people’s everyday lives.
If you look at some of the recent changes — the Children’s Online Privacy Code, stronger laws aimed at stopping doxxing, plus reforms targeting how biometric data is collected and stored — you can see the pattern. The message is clear: Australia wants to protect its citizens’ data and, just as importantly, build trust in the digital environment. That’s especially true for groups who are more vulnerable online, like children.
When it comes to how closely the EU, the US, and Australia line up on this, there’s definitely some shared ground. The broad ideas — fairness, transparency, oversight — are present in all three. But the fine print? That’s where the differences show up. Each region is shaped by its own legal culture and political climate, so it’s not quite accurate to say there’s one single “global” rulebook… at least, not yet.
- The EU’s GDPR and the AI Act, for example, offer an unusually strict and comprehensive framework. They put heavy emphasis on privacy rights, tight oversight of high-risk AI systems, and keeping human judgment in the loop whenever those systems are used.
- In the United States, the approach is almost the opposite. Rules tend to be set on a sector-by-sector basis, they’re more flexible, and there’s less of a unified national framework. The focus leans toward encouraging innovation — though, in recent years, there’s been a growing willingness to address privacy concerns as well.
- Australia, with its own set of reforms, is trying to carve out something in between. It’s strengthening privacy rights and setting clearer rules for automated decision-making, yet it doesn’t have the same single, all-encompassing structure that the EU has built. Instead, it’s blending targeted protections with a degree of flexibility, aiming to keep pace with both public expectations and the speed of technological change.
Right now, there’s no single, global rulebook. What we see instead are regional frameworks, bilateral agreements, and an ongoing exchange of information between regulators trying to keep pace with one another.

What’s happening behind the scenes? Let’s unpack the black box problem!
Question: You’ve mentioned that AI can, in different situations, end up disadvantaging certain groups in society. So, what can actually be done to make sure AI decisions aren’t biased or unfair? And since this is often described using the idea of a black box, could you break down what that problem really means?
Answer: Yes — and sadly, this is one of the trickiest issues we face with AI. Trying to get rid of bias entirely is hard work.
If you’ve spent any time looking under the hood of these systems, you know there’s nothing mystical about them. But to talk seriously about bias, you do need at least a rough idea of how the whole thing works.
Some people imagine AI as this strange, almost alien creation that suddenly landed in the 2020s. It’s not like that at all.
At its core, artificial intelligence is a model trained on an enormous dataset — one so vast it can outstrip the human imagination. The model has to be fed with data, and the way it operates (and improves over time) depends entirely on what you put into it. That’s why bias or discrimination can creep in: it’s baked into the data from the start. (The UC Berkeley School article gives plenty of examples of this in action.)
If the underlying data is biased — say, it contains stereotypes or skewed patterns linked to ethnicity, gender, or social background — then the AI will carry those biases forward. In some cases, it can even amplify them.
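A toy example can show how that happens. The numbers below are invented, and the “model” is just a frequency count rather than a real learning algorithm, but the mechanism (skewed history in, skewed scores out) is the same one that affects far larger systems.

```python
# Invented historical hiring records: (group, hired). The skew is deliberate:
# group "B" was hired far less often in the past.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

def fit_scores(records):
    """A stand-in for 'training': score each group by its past hire rate."""
    scores = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        scores[group] = sum(outcomes) / len(outcomes)
    return scores

model = fit_scores(history)
print(model)  # roughly {'A': 0.8, 'B': 0.3} (ordering may vary)
# A new applicant from group "B" now starts with a lower score, not because of
# anything about them, but because the bias was already baked into the data.
```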
But there’s no need to panic — experts are well aware of this challenge. And while there’s no perfect fix, there are already ways to reduce the risk.
- The first, and probably most important, is data cleaning. From the very start of AI development, you have to look closely at where the data comes from and whether it reflects reality. Doing that means the model works with more accurate information — which, in turn, produces better results. Of course, where personal data is involved, using it isn’t automatic; whether it can be included at all depends entirely on the individual’s rights and consent.
- Another approach is to apply fairness metrics. These are tools that measure whether the model is treating different groups fairly or showing bias against them.
- A third solution is regular auditing — having independent experts review systems at set intervals to see how they’re performing. This step is crucial because it’s where the black box problem really becomes visible.
In simple terms, a black box model has an input stage, where data goes in, and an output stage, where the decisions or results come out. But in between, with AI systems built on deep learning, there’s an incredibly complex web of internal processes. They’re so intricate that neither the users nor, in many cases, even the developers can say for certain how the system reached a particular decision.
A model like this holds such vast amounts of data and runs so many processes in parallel that it’s almost impossible for a human to trace exactly what it learned from which data, or how each result came to be. That opaque, hidden middle part — the bit between input and output — is what we call the black box.
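That opacity is also why the auditing mentioned earlier usually works from the outside in: even if you cannot see inside the box, you can still compare what comes out of it for different groups. The sketch below runs one common check, the gap in selection rates between two groups (a simple form of demographic parity). The model, the groups and the numbers are all invented; a real audit would use richer metrics and real decision logs.

```python
import random

# Stand-in for an opaque model we cannot inspect: we only see inputs and outputs.
def black_box_model(applicant: dict) -> bool:
    # Invented behaviour for illustration: the box quietly favours group "A".
    acceptance_probability = 0.7 if applicant["group"] == "A" else 0.4
    return random.random() < acceptance_probability

random.seed(0)
applicants = [{"group": random.choice(["A", "B"])} for _ in range(10_000)]
decisions = [(a["group"], black_box_model(a)) for a in applicants]

def selection_rate(group: str) -> float:
    """Share of applicants from one group that the system accepted."""
    outcomes = [accepted for g, accepted in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {rate_a - rate_b:.2f}")
# A large, persistent gap is a signal to investigate the data and the model,
# even though the audit never looked inside the black box itself.
```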
Conclusion
To sum it up, I believe that if we want to truly understand why AI does what it does, we first have to understand how it works. And more than that, we need to recognise that the data it runs on is drawn from the vast sea of human knowledge, which inevitably carries its own biases.
Right now, one of the main jobs for developers is to strip AI systems of discriminatory patterns and gender-based bias. But these are, at their core, human problems, not machine-made ones. That’s exactly why it’s so important to bring women — along with other marginalised groups — into the development process itself.
In our next article, Blanka and I will take a closer look at the right to access personal data and the right to have it erased.
Cinthya Laky