Agentic AI – autonomous systems that act on goals rather than just respond to prompts – is no longer a distant concept. A recent IDC study highlighted that 41% of Australian organisations are already using agentic AI, with another 50% planning to adopt it within six months.
From customer service to logistics and financial services, Australian enterprises are moving beyond pilot projects, orchestrating multiple AI agents to streamline operations, unlock efficiencies, and drive new kinds of innovation.
But with this growth comes a critical challenge: trust. Unlike conventional AI models, which are relatively easy to monitor, agentic AI systems act proactively – making decisions that affect customers, employees, and even compliance obligations. For Australian organisations wanting to embrace agentic AI and realise its true benefits, transparency, accountability and risk-aware design must be non-negotiable.
Trust, in other words, cannot be an afterthought. It has to be engineered into these systems from the ground up.
The complexity of AI supply chains
Today’s AI systems are rarely monolithic. They’re built on a complex web of models, APIs and external data sources that interact dynamically. While this composability creates incredible opportunities for innovation, it also introduces significant risks – particularly in Australia, where sovereignty, security and regulatory compliance are front of mind.
For example, uncertain data governance makes it difficult for organisations to verify where training data originates or whether it carries bias, gaps or ethical concerns. At the same time, the growing reliance on third-party AI models and APIs exposes organisations to outputs that may be unreliable, unsafe, or impossible to trace.
To mitigate these challenges, AI model supply chain transparency is crucial. Organisations need the ability to track model provenance, rigorously assess third-party risks, and leverage open-source frameworks so that every component is verifiable, auditable, and accountable. Without this visibility, conducting the risk assessments needed to meet ethical, operational and regulatory requirements becomes almost impossible.
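As a concrete illustration, supply chain transparency can start with a structured provenance record attached to every model artifact. The sketch below is a minimal example in Python; the ModelProvenance class and its field names are illustrative assumptions, not a standard schema:

```python
# A minimal sketch of a model provenance record, loosely inspired by
# model-card metadata. Field names are illustrative, not a standard schema.
from dataclasses import dataclass, field
import hashlib

@dataclass
class ModelProvenance:
    name: str
    version: str
    base_model: str                          # upstream model this was fine-tuned from
    training_data_sources: list[str] = field(default_factory=list)
    licence: str = "unknown"
    sha256: str = ""                         # checksum recorded at build time

    def verify_artifact(self, path: str) -> bool:
        """Re-hash the deployed artifact and compare it with the recorded checksum."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == self.sha256

record = ModelProvenance(
    name="support-classifier",
    version="1.3.0",
    base_model="example-org/base-7b",        # hypothetical upstream model
    training_data_sources=["internal-tickets-2024"],
    licence="Apache-2.0",
)
```

Pairing a record like this with every deployment makes third-party risk assessment far more tractable, because provenance questions become queries rather than investigations.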
Risk-based analysis matters
Agentic AI represents a significant evolution from earlier reactive models. Instead of merely responding to prompts, these systems can plan, decide and act independently in pursuit of high-level goals.
This shift from reactive to proactive AI necessitates a new approach to risk assessment. Organisations deploying multi-agent orchestration and function calling frameworks must evaluate whether agents’ behaviour remains predictable, explainable and aligned with intended outcomes. Organisations also need to know that decisions are consistent across contexts, that every action can be traced and understood, and that there are clear intervention points if something goes wrong.
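To make that traceability concrete, every tool invocation an agent makes can be wrapped in an audit record. The sketch below assumes a hypothetical audited_call wrapper and log format; production orchestration frameworks typically expose their own tracing hooks:

```python
# A minimal sketch of an audited tool-call wrapper for an agent framework.
# The wrapper and log format are illustrative, not any vendor's actual API.
import json
import time
import uuid

AUDIT_LOG = []

def audited_call(agent_id: str, tool, **kwargs):
    """Invoke a tool on behalf of an agent, recording inputs and outputs."""
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent_id,
        "tool": tool.__name__,
        "inputs": kwargs,
    }
    try:
        record["output"] = tool(**kwargs)
        return record["output"]
    finally:
        AUDIT_LOG.append(record)             # logged even if the tool call fails

def lookup_order(order_id: str) -> str:     # stand-in for a real tool
    return f"Order {order_id}: shipped"

audited_call("support-agent-1", lookup_order, order_id="A1042")
print(json.dumps(AUDIT_LOG, indent=2))
```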
Effective oversight and ethical alignment demand governance by design. Organisations should consider whether human-in-the-loop mechanisms are in place to guide, override or audit decisions in sensitive scenarios. They should also assess whether the system can adapt its behaviour when risks emerge or environments shift. These are not ‘nice-to-haves’. They’re design imperatives that support safe and trustworthy deployment at scale.
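As a minimal illustration of human-in-the-loop control, high-risk agent actions can be routed to a reviewer instead of executing automatically. The risk scoring and threshold below are illustrative assumptions, not a prescribed policy:

```python
# A minimal human-in-the-loop sketch: actions above a risk threshold are
# queued for review rather than executed. The scoring rule is a placeholder.
RISK_THRESHOLD = 0.7

def risk_score(action: dict) -> float:
    # Placeholder: real systems would score against policy, not amounts alone.
    return min(action.get("amount", 0) / 10_000, 1.0)

def execute(action: dict, approved_by_human: bool = False) -> dict:
    if risk_score(action) >= RISK_THRESHOLD and not approved_by_human:
        return {"status": "pending_review", "action": action}
    return {"status": "executed", "action": action}

print(execute({"type": "refund", "amount": 9_500}))                      # escalated
print(execute({"type": "refund", "amount": 9_500}, approved_by_human=True))
```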
Best practices for building trustworthy AI
To truly trust AI, organisations must go beyond performance metrics and ensure transparency, traceability and safety are built in at every stage of the lifecycle.
A critical enabler of this trust is model lineage. By tracking a model from its data origins to deployment, organisations gain a clear view of how it was built and how it behaves. This visibility not only strengthens accountability but also helps surface hidden biases or systemic flaws before they can cause harm.
Equally important is explainability. AI systems must generate outputs that users can interpret and trace back to clear logic, especially in high-stakes environments where decisions carry legal, financial or ethical consequences.
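In practice, explainability can start with outputs that carry their inputs and rule-level reasons with them. The loan-assessment example below is invented purely for illustration; the payload fields are not a standard schema:

```python
# A minimal sketch of an explainable decision payload: the output records
# the inputs and the rule that produced it. Entirely illustrative.
def assess_loan(income: float, debt: float) -> dict:
    ratio = debt / income
    approved = ratio < 0.4
    return {
        "decision": "approved" if approved else "declined",
        "inputs": {"income": income, "debt": debt},
        "reasons": [
            f"debt-to-income ratio {ratio:.2f} is "
            f"{'below' if approved else 'above'} the 0.40 threshold"
        ],
    }

print(assess_loan(income=90_000, debt=45_000))   # declined, with the reason attached
```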
As AI systems become more autonomous, guardrails and governance take on even greater importance. Organisations need frameworks that keep humans firmly in the loop, with the ability to step in when needed. Real-time intervention controls and fail-safe mechanisms are essential to ensure AI actions can be monitored, corrected, or overridden if something goes wrong.
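A minimal sketch of such a fail-safe is a circuit breaker that halts the agent loop when behaviour looks anomalous or an operator trips it manually. The anomaly heuristic below (an agent retrying the same step too many times) is an illustrative stub:

```python
# A minimal circuit-breaker sketch for real-time intervention: once tripped,
# by an operator or an anomaly check, no further agent actions execute.
class CircuitBreaker:
    def __init__(self) -> None:
        self.tripped = False

    def check(self, action: dict) -> None:
        if self.tripped or self._anomalous(action):
            self.tripped = True
            raise RuntimeError("Agent halted: pending operator review")

    @staticmethod
    def _anomalous(action: dict) -> bool:
        return action.get("retries", 0) > 3  # e.g. an agent stuck in a loop

breaker = CircuitBreaker()
for step in [{"task": "draft_reply", "retries": 1},
             {"task": "draft_reply", "retries": 5}]:
    try:
        breaker.check(step)
        print("executed:", step)
    except RuntimeError as err:
        print(err)                           # the fail-safe stops the run
        break
```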
By embedding these principles from the start, organisations will be better equipped to navigate the complexities of agentic AI – deploying systems that are not only powerful and scalable, but also safe, ethical and aligned with human values.
Building trust for a scalable AI future
With IDC forecasting that a third of enterprise applications will include agentic AI in the next three years, and a growing number of Australian organisations already exploring adoption, the pace of change is accelerating.
But to realise the true benefits of this technology, trust must be built in from the start. Transparency, accountability, and safety need to be embedded across data, models, system behaviour and decision-making.
Enterprises that invest early in governance frameworks, risk-based assessments, and supply chain transparency will be better positioned to scale confidently and responsibly, while reducing operational, reputational, and legal risks.
As agentic AI matures, trust will be the differentiator. Organisations that treat it as a foundation – not an afterthought – will be the ones able to deploy AI responsibly, meet evolving regulatory and customer expectations, and realise its full potential at scale.
About Vincent Caldeira
Vincent Caldeira is Chief Technology Officer, APAC at Red Hat. In this role, he is primarily responsible for building partnerships with strategic customers, communicating Red Hat’s vision, and reinforcing Red Hat’s position as an industry leader, while establishing trusted relationships with customers’ technology leaders and advocating for relevant emerging technologies.
Vincent has spent more than 20 years in the financial technology sector, both as a Chief Technology Officer shaping technology strategy, enterprise architecture and transformation roadmaps, and as a leader of engineering teams designing, building and delivering solutions in the financial software vendor industry.
Vincent also contributes to OS-Climate, a Linux Foundation-backed open source project building the technology and data platforms needed to integrate the impacts of climate change more fully into global financial decision-making and risk management. He serves as the project’s lead architect and a Technical Advisory Council member.
Vincent holds a Master of Science in Management (majoring in Information Systems Management) from HEC Paris.
About Red Hat
Red Hat is the open hybrid cloud technology leader, delivering a trusted, consistent and comprehensive foundation for transformative IT innovation and AI applications. Its portfolio of cloud, developer, AI, Linux, automation and application platform technologies enables any application, anywhere – from the datacenter to the edge. As the world’s leading provider of enterprise open source software solutions, Red Hat invests in open ecosystems and communities to solve tomorrow’s IT challenges. Collaborating with partners and customers, Red Hat helps them build, connect, automate, secure and manage their IT environments, supported by consulting services and award-winning training and certification offerings.
