Insight: AI Blog Series: #2 Assessing if your AI tool is “high-risk”, and getting ready for Australia’s mandatory AI guardrails

The Australian government has introduced voluntary AI guardrails to help organisations develop safe, ethical, and secure AI tools, signalling a shift towards stricter oversight of AI practices. While these guardrails are currently voluntary for all AI applications, they foreshadow compulsory regulation of high-risk AI applications and deployments, aimed at curbing the era of unregulated AI experimentation.


Let’s start at the beginning: what are the Australian voluntary AI guardrails?

The ten guardrails are part of the Voluntary AI Safety Standard developed by the Department of Industry, Science and Resources, which aims to guide organisations in creating a foundation of safe and responsible AI use. By adopting these guardrails, an organisation can put in place a risk-based governance program that guides it towards responsible AI use, fostering a safer and more ethical AI ecosystem.

Beyond risk management, enhancing innovation, and protecting the public, adopting the AI guardrails also helps an organisation build trust and an enhanced reputation with customers and users of its AI, and provides a market advantage through competitive differentiation, positioning the organisation as a leader in ethical AI practices. Importantly, if an AI application is classified as ‘high-risk’, adopting the guardrails will prepare the organisation for the proposed mandatory guardrail obligations that are expected to become law in Australia in late 2025.

 

The ten voluntary guardrails are as follows:

  1. Establish, implement, and publish an accountability process including governance, internal capability, and a strategy for regulatory compliance.
  2. Establish and implement a risk management process to identify and mitigate risks.
  3. Protect AI systems and implement data governance measures to manage data quality and provenance.
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed.
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle. 
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. 
  7. Establish processes for people impacted by AI systems to challenge use or outcomes.
  8. Be transparent with other organisations across the AI supply chain about data, models, and systems to help them effectively address risks.
  9. Keep and maintain records to allow third parties to assess compliance with guardrails.
  10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion, and fairness. 

For full descriptions, see: https://www.industry.gov.au/publications/voluntary-ai-safety-standard/10-guardrails

 

How do you know if your AI tool is high-risk? 

Determining whether your AI tool is high-risk involves evaluating the potential harm it could cause to individuals, organisations, or society. The Australian Voluntary AI Safety Standard provides clear guidance to help organisations assess their tools. High-risk AI tools typically exhibit one or more of the following characteristics: 

  • Impact on Human Rights: Tools that affect fundamental rights, such as privacy, freedom of expression, or non-discrimination. 
  • Critical Decision-Making: Systems used in high-stakes areas like healthcare, law enforcement, human resources, or finance, where errors or biases could lead to severe consequences. 
  • Vulnerable Groups: Applications that disproportionately affect vulnerable populations, such as children, the elderly, or marginalised communities. 
  • Autonomy and Oversight: AI systems operating with minimal human intervention, increasing the potential for unintended actions. 
  • Transparency and Accountability: Tools with opaque algorithms or complex supply chains, making it harder to trace decisions or manage risks. 

 

To identify if your AI system is high-risk, conduct a thorough risk assessment (aligned with Guardrail 2) that considers the system’s use case, stakeholders, and potential harms.  
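To make this screening step concrete, here is a minimal sketch of how those characteristics could be codified as a checklist that is versioned and re-run with each release. The criteria names and the example system are illustrative assumptions that paraphrase the characteristics above; this is not an official classification tool, and any single flagged criterion here simply triggers a full Guardrail 2 assessment.

```python
from dataclasses import dataclass

# Hypothetical screening criteria paraphrasing the high-risk
# characteristics above; not an official classification tool.
@dataclass
class RiskScreen:
    affects_human_rights: bool        # privacy, expression, non-discrimination
    critical_decision_making: bool    # healthcare, law enforcement, HR, finance
    impacts_vulnerable_groups: bool   # children, the elderly, marginalised groups
    minimal_human_oversight: bool     # autonomous operation
    opaque_or_complex_supply_chain: bool  # hard-to-trace decisions

    def is_high_risk(self) -> bool:
        # Conservative rule: any single criterion flags the system
        # for a full risk assessment under Guardrail 2.
        return any([
            self.affects_human_rights,
            self.critical_decision_making,
            self.impacts_vulnerable_groups,
            self.minimal_human_oversight,
            self.opaque_or_complex_supply_chain,
        ])

# Example: a hypothetical AI resume-screening tool used in hiring
screen = RiskScreen(
    affects_human_rights=True,
    critical_decision_making=True,
    impacts_vulnerable_groups=False,
    minimal_human_oversight=False,
    opaque_or_complex_supply_chain=True,
)
print(screen.is_high_risk())  # True -> conduct a full assessment
```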

It is imperative that this risk assessment forms part of the regular maintenance of the AI tool and is updated frequently. Risk levels can evolve over time due to factors such as data drift, model decay, changing regulations (such as Australia’s Privacy Act reforms and the proposed mandatory AI guardrails), cyber threats and vulnerabilities, and model phenomena such as bias and model errors, all of which require continuous testing and evaluation. 
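As one example of what “continuous testing” can look like in practice, the sketch below compares the current distribution of a model input feature against a reference sample using a two-sample Kolmogorov–Smirnov test from scipy. The 0.05 threshold and the synthetic data are illustrative assumptions, not prescribed values; a real system would run such checks on production telemetry.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(reference: np.ndarray, current: np.ndarray,
                    p_threshold: float = 0.05) -> bool:
    """Flag drift when the samples are unlikely to share a distribution.

    Uses a two-sample Kolmogorov-Smirnov test; the 0.05 threshold is an
    illustrative assumption and should be tuned per system.
    """
    statistic, p_value = ks_2samp(reference, current)
    return p_value < p_threshold

# Illustrative data: training-time feature values vs. recent production values
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
current = rng.normal(loc=0.4, scale=1.0, size=5_000)  # the mean has shifted

if feature_drifted(reference, current):
    print("Input drift detected: re-run the risk assessment (Guardrail 2).")
```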

 

What steps can you take now to get ready for the future guardrails? 

Preparing your organisation for the future of AI regulation is about being proactive. By adopting the voluntary guardrails today, you can set a solid foundation for compliance, governance, and ethical AI use. Here are practical steps to take now: 

  1. Build an AI Governance Program: 
    • Appoint an AI accountability owner, gain leadership support and assemble a multi-disciplinary team with clearly defined roles and responsibilities. 
    • Develop an AI strategy that aligns with organisational goals and, most importantly, principles. 
    • Foster a culture of responsible AI within your organisation. 
  2. Conduct Risk and Impact Assessments: 
    • Regularly assess potential risks and impacts associated with your AI systems. 
    • Engage stakeholders to uncover unseen risks and biases (Guardrail 10). 
    • Develop policies to mitigate and manage third-party risks. 
  3. Strengthen Data Governance and Security: 
    • Implement data quality and provenance checks as part of your processes. 
    • Apply robust cybersecurity measures to protect sensitive data (Guardrail 3). 
  4. Enable Human Oversight: 
    • Embed mechanisms for human intervention across AI systems. 
    • Define clear acceptance criteria for AI system performance (Guardrails 4 and 5). 
  5. Enhance Transparency and Documentation: 
    • Maintain detailed records of your AI inventory and decision-making processes (Guardrail 9); a minimal inventory record sketch follows this list. 
    • Communicate transparently with stakeholders and supply chain partners. 
  6. Establish Feedback and Challenge Mechanisms: 
    • Provide accessible pathways for users to contest AI-driven decisions (Guardrail 7). 
    • Actively incorporate feedback loops to continuously improve your AI systems. 
  7. Train and Educate Your Team: 
    • Upskill employees on AI ethics, safety standards, and compliance requirements. 
    • Conduct regular training sessions on emerging AI governance best practices. 
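As a starting point for the record keeping in step 5, a minimal AI inventory entry might capture the fields a third party would need to assess compliance against the guardrails. The fields and example values below are illustrative assumptions only; the Voluntary AI Safety Standard does not mandate a specific schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Illustrative fields only; not a mandated schema.
@dataclass
class AIInventoryRecord:
    system_name: str
    owner: str                      # accountable person (Guardrail 1)
    use_case: str
    risk_classification: str        # e.g. "high-risk" per your assessment
    data_sources: list[str] = field(default_factory=list)  # provenance (Guardrail 3)
    last_risk_assessment: date | None = None                # Guardrail 2
    human_oversight_mechanism: str = ""                     # Guardrail 5

record = AIInventoryRecord(
    system_name="resume-screener",
    owner="Head of People & Culture",
    use_case="Shortlisting job applicants",
    risk_classification="high-risk",
    data_sources=["internal ATS", "third-party skills dataset"],
    last_risk_assessment=date(2025, 3, 1),
    human_oversight_mechanism="Recruiter reviews every rejection",
)
print(json.dumps(asdict(record), default=str, indent=2))
```

Keeping these records as structured data, rather than scattered documents, makes it far easier to produce the evidence that Guardrail 9 anticipates third parties will request.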

By taking these steps, your organisation not only positions itself to meet future regulatory requirements but also demonstrates leadership in building trustworthy and responsible AI systems. The guardrails are not just about compliance—they are a pathway to innovation with integrity. 

A Call to Action for Australian Businesses 

Whether you are an AI innovator or a business exploring its potential, the introduction of these principles is your opportunity to lead with purpose. Adopting them is not just about compliance (yet!)—it is about building a future where AI enhances trust, drives innovation, and aligns with the ethical standards Australians value. 

Australia already has a voluntary standard comprising ten guardrails, and a mandatory guardrails regime for high-risk settings is rapidly progressing. 

The question is no longer if your business should govern its planning, design, development, and adoption of AI but how to do so responsibly. With these principles as your guide, the answer becomes clear.  

  • Is your business ready to embrace the opportunities of ethical AI?  
  • Is your business ready for the mandatory guardrails? 

At TFIQ, we are here to help you unlock the potential of ethical AI. Partner with us to integrate these principles into your AI strategy, ensuring responsible innovation, effective governance, and lasting success. Let’s shape the future of AI together. 


NEXT IN THE AI BLOG SERIES: #3

Your industry is set for an AI revolution: Are you ready? Preparing your business’s data landscape and governance model. 


 

Ready to see how we can help you unlock the potential of ethical AI for your business?