Australia has seen its fair share of tech hype, including AI-powered tools that are little more than chatbots following a basic script. There’s nothing groundbreaking about old-school automation dressed up in fancy words.
Now, agentic artificial intelligence (AI), an emerging disruptor that can think, adapt, and tackle dynamic, unpredictable problems with limited human oversight, is suffering the same fate thanks to marketing spin. This isn’t just white noise: it can mislead businesses into burning budgets and stifling innovation.
While companies are jumping on the AI bandwagon, from eager early adopters to those taking a more measured approach, organisations that build true agentic AI within ethical guardrails will be well-positioned to deliver smarter services and greater efficiencies.
Labelling It Agentic Doesn’t Make It Smart
The AI boom has seen some vendors slapping “agentic AI” on tools that are nothing more than enhanced scripts. This misleads businesses expecting AI that can act with agency and deliver adaptive, seamless customer experiences. In fact, according to the Genesys “The State of Customer Experience: Asia-Pacific” report, 79% of consumers surveyed in Australia and New Zealand (ANZ) say when interacting with a preferred brand, they value service teams that listen and understand what they’re trying to achieve – a clear signal that today’s consumers expect more than transactional support.
How do you distinguish true agentic AI from the marketing hype? It comes down to a simple question: can it make decisions beyond what it was explicitly programmed to do? If it’s stuck on predefined responses or preset workflows, it’s not agentic. It’s yesterday’s automation with a shiny new label. Australia’s National Artificial Intelligence Centre (NAIC) 2024 Responsible AI Index found that only 18% of organisations vet vendor claims about AI model performance, and among businesses with minimal responsible AI implementation, that figure falls to just 6%. This lack of scrutiny risks burning IT budgets on tools that won’t deliver, potentially stalling innovation while competitors harness true agentic AI systems.
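As a loose illustration of that test (hypothetical code, not any vendor’s product), compare a lookup-table bot with a system whose next action depends on live context:

```python
# Hypothetical litmus test, for illustration only.

CANNED = {"outage": "Have you tried restarting your modem?"}

def scripted(intent: str) -> str:
    # Every possible answer is written down in advance.
    return CANNED.get(intent, "Transferring you to a human agent...")

def agentic(intent: str, state: dict) -> str:
    # The next action is chosen from live context, not looked up from a script.
    if intent == "outage" and state.get("area_outage"):
        return "credit_bill" if state.get("sla_breached") else "send_restoration_eta"
    return "run_line_diagnostics"

print(scripted("outage"))                                              # always the same reply
print(agentic("outage", {"area_outage": True, "sla_breached": True}))  # depends on context
```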
Agentic AI Should Think, Not Just Follow
Today, many traditional chatbots and virtual assistants frustrate customers with scripted loops and customer service dead ends. True agentic AI far surpasses these legacy systems: it doesn’t just generate text or follow basic commands. Instead, it reasons, adapts and acts semi-autonomously.
Positive customer experiences can build loyalty, enhance brand reputation, and grow revenue through smart emotional connections. Genesys’ recent report found that just 33% of CX leaders surveyed in ANZ believe they are delivering extremely personalised customer service to consumers, despite consumers calling for more personalisation. Agentic AI will enable businesses to deliver highly personalised interactions, resolve issues proactively, and predict needs across channels.
Picture a National Broadband Network (NBN) outage in your area. A basic chatbot might ask, “Have you tried restarting your modem?” and then redirect you to a human when it doesn’t have a scripted response. In contrast, a true AI agent would detect the outage, check the account, credit the bill for the downtime, book a technician if needed and seamlessly connect with the customer across digital channels, all without human intervention.
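A minimal sketch of that flow, with stubbed tools and invented function names rather than any real NBN or CRM integration, might look like this:

```python
# Hypothetical sketch of an agentic resolution flow -- stubbed tools, not a real integration.

def detect_outage(address: str) -> bool:
    return True  # stub: pretend the network status feed reports an outage at this address

def check_account(account_id: str) -> dict:
    return {"plan": "NBN 100", "downtime_hours": 6}  # stub CRM lookup

def credit_bill(account_id: str, hours: int) -> str:
    return f"Credited {hours}h of downtime to account {account_id}"

def book_technician(address: str) -> str:
    return f"Technician booked for {address}"

def resolve_connectivity_issue(account_id: str, address: str) -> list[str]:
    """Agent-style flow: observe, decide, act -- instead of replying from a script."""
    actions = []
    if detect_outage(address):
        account = check_account(account_id)
        actions.append(credit_bill(account_id, account["downtime_hours"]))
        if account["downtime_hours"] > 4:  # adapt: a long outage warrants a visit
            actions.append(book_technician(address))
    else:
        actions.append("Run modem diagnostics and guide the customer through a reset")
    return actions

print(resolve_connectivity_issue("ACC-1042", "12 Example St, Sydney"))
```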
Unlike traditional bots, AI agents can evolve with every interaction – learning from context, predicting customer needs and tailoring responses based on previous interactions. When a conversation is redirected to a human, the handover feels like continuing the same conversation, without forcing the customer to repeat themselves. This matters when you consider that 71% of ANZ consumers surveyed for Genesys’ report – more than anywhere else in APAC – say immediate connection to the right person is one of the most valuable aspects of personalisation.
If agentic AI is the architect of great customer experiences, automated reasoning is its foundation. It can be used to ensure that decisions are provable and logically sound, helping safeguard against “hallucinations” and incorrect conclusions. By validating actions against a predefined set of rules and constraints, automated reasoning supports consistency and enhances system reliability.
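As a rough illustration of that validation step (the rules and action format here are invented for the example), each proposed action can be checked against explicit constraints before it is allowed to run:

```python
# Illustrative constraint check -- invented rules, not a production reasoning engine.

RULES = {
    "max_bill_credit_aud": 50,
    "allowed_actions": {"credit_bill", "book_technician", "send_update"},
}

def validate_action(action: dict) -> tuple[bool, str]:
    """Reject any proposed action that falls outside the predefined constraints."""
    if action["name"] not in RULES["allowed_actions"]:
        return False, f"Action '{action['name']}' is not permitted"
    if action["name"] == "credit_bill" and action.get("amount_aud", 0) > RULES["max_bill_credit_aud"]:
        return False, "Credit exceeds the approved limit"
    return True, "OK"

print(validate_action({"name": "credit_bill", "amount_aud": 30}))  # (True, 'OK')
print(validate_action({"name": "refund_full_year"}))               # rejected
```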
Guardrails for Responsible Innovation
Agentic AI’s vast potential requires oversight. Constitutional AI serves as the ethical backbone that keeps AI-driven systems aligned with legal, business, and human standards while remaining adaptable. Agentic AI systems can be customised across multiple levels to adhere to country-specific laws like Australia’s Privacy Act 1988, limiting data collection. In the NBN scenario above, an AI agent might use only your account number or service address to resolve the issue, without grabbing unrelated personal data.
Customisable across regions and industries, a layered AI governance framework balances flexibility with rigour. For example, a health insurer may allow AI to discuss sensitive topics like mental health, while a retailer might prohibit it. In constitutional AI frameworks, supervisory AI models (smaller, distilled versions of larger AI systems) oversee larger systems and act as ethical gatekeepers. As Australia’s Department of Industry, Science and Resources works to introduce mandatory AI guardrails, businesses should understand that these will target high-risk tools like healthcare chatbots that seem benign but can provide harmful responses if unchecked.
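A simplified sketch of such a layered setup (policy names and checks are hypothetical) could combine a global baseline with industry overrides and a supervisory gate on every draft response:

```python
# Hypothetical layered policy check -- not a real constitutional AI framework.

BASE_POLICY = {
    "blocked_topics": {"self_harm_instructions"},
    "allowed_data_fields": {"account_id", "service_address"},
}

INDUSTRY_OVERRIDES = {
    "health_insurer": {"allowed_topics": {"mental_health"}},
    "retailer": {"blocked_topics": {"mental_health"}},
}

def effective_policy(industry: str) -> dict:
    """Merge the global baseline with industry-specific overrides."""
    policy = {
        "blocked_topics": set(BASE_POLICY["blocked_topics"]),
        "allowed_data_fields": set(BASE_POLICY["allowed_data_fields"]),
    }
    overrides = INDUSTRY_OVERRIDES.get(industry, {})
    policy["blocked_topics"] |= overrides.get("blocked_topics", set())
    policy["blocked_topics"] -= overrides.get("allowed_topics", set())
    return policy

def supervisory_check(draft_topic: str, data_fields: set, industry: str) -> bool:
    """Supervisory gate: block drafts that touch forbidden topics or extra personal data."""
    policy = effective_policy(industry)
    return draft_topic not in policy["blocked_topics"] and data_fields <= policy["allowed_data_fields"]

print(supervisory_check("mental_health", {"account_id"}, "health_insurer"))  # True
print(supervisory_check("mental_health", {"account_id"}, "retailer"))        # False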
Ethical agentic AI policies are essential. Policymakers, businesses, and technology leaders must collaborate on multi-faceted frameworks, starting with global principles to prevent bias and hallucinations, allowing for regional adjustments such as local laws, industry-specific rules, and unique operational needs.
Why You Can’t Fake Agentic AI
Right now, we’re in a defining moment with AI. One road leads to predictable automation; the other guides us toward true agentic AI that reasons, adapts, and acts independently within smart guardrails.
If your AI isn’t making independent decisions beyond pre-scripted workflows, it’s just another overhyped tool. Australian businesses deserve better. They deserve true agentic AI that delivers real intelligence, not buzzwords that disappoint. And with 83% of ANZ consumers surveyed for Genesys’ report believing a company is only as good as its service, the stakes for getting this right are high. Build it right, and agentic AI becomes your personal digital strategist. Fake it, and it’s just another overpriced script.
About Glenn Nethercutt
Glenn Nethercutt is the Chief Technology Officer and a Technical Fellow at Genesys, where he oversees cloud architecture strategy: scalability, microservices, cloud-native design, fault tolerance, disaster recovery, service consistency, new technology evaluation, and continuous delivery mechanisms. He has a background in telecommunications, complex event stream processing, application performance management, OS development, data visualisation, and continuous delivery principles. Glenn lives in Raleigh, North Carolina where he is an avid hiker and runner.
About Genesys

Genesys empowers organizations of all sizes to improve loyalty and business outcomes by creating the best experiences for their customers and employees. Through Genesys Cloud, the AI-Powered Experience Orchestration platform, organizations can accelerate growth by delivering empathetic, personalized experiences at scale to drive customer loyalty, workforce engagement, efficiency and operational improvements. Visit www.genesys.com.
