1. What are the biggest challenges organisations face in achieving real-time data interoperability, and how can they overcome them?
One major challenge in achieving real-time interoperability is coping with constantly changing data formats and APIs. Batch-based systems introduce latency and tight coupling. Streaming platforms like Apache Kafka shift teams toward an event-driven architecture, where components exchange immutable events on a shared log. This decouples producers from consumers, simplifies schema evolution and enables backward-compatible data flows across versions.
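As a rough sketch of that decoupling (using the confluent-kafka Python client; the broker address, topic and event payload here are placeholders, not a prescribed setup), a producer appends immutable events to a shared topic while a consumer group reads them independently:

```python
# pip install confluent-kafka
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "orders"            # placeholder topic name

# Producer side: append an immutable event to the shared log and move on.
# It neither knows nor cares which downstream systems will read it.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="order-123", value='{"status": "created"}')
producer.flush()

# Consumer side: an independent process with its own group and offsets,
# free to evolve and redeploy without coordinating with the producer.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "fulfilment-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```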
2. How critical is real-time data in building AI-driven applications, and what are the best practices for ensuring data quality, reliability, and accessibility?
Real-time data is essential to building AI-driven applications. Yet many organisations continue to operate on rigid, outdated architectures that take far too long to deliver data, rendering it stale and inconsistent by the time an LLM ingests it. Imagine a fraud detection system working with incomplete transaction data and responding slowly to fraudulent events, or an AI-powered travel booking system operating on old flight schedules; either would fail to deliver reliable and effective outcomes.
This is why data streaming has become indispensable for enterprises developing AI solutions. Unlike batch processing, data streaming enables data to be continuously ingested, processed and analysed, providing AI models with accurate, accessible and up-to-the-moment information. Fed into AI initiatives, this data helps organisations uncover more insights and drive innovation. Take AI-powered customer support as an example: a generative AI system integrated with real-time data streaming can pull in live conversation intelligence from multiple sources, enabling predictive support and hyper-personalised recommendations while the customer is still speaking.
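To make the consuming side of that concrete, here is a minimal sketch: it keeps a rolling window of live conversation events as model context. The topic, field names and the generate_reply function are hypothetical stand-ins; a real system would call an LLM where indicated.

```python
import collections
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder
    "group.id": "support-copilot",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["conversation-events"])  # placeholder topic

context = collections.deque(maxlen=20)  # last 20 events serve as model context

def generate_reply(events):
    # Placeholder: a real system would call an LLM with this live context.
    return f"(reply grounded in {len(events)} live events)"

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    context.append(event)
    if event.get("type") == "customer_message":
        print(generate_reply(list(context)))
```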
3. What are the key considerations for enterprises when implementing governance frameworks to maintain data trust and compliance across systems?
When it comes to implementing an effective data governance strategy, cross-functional alignment across the organisation is crucial. There’s no simple on/off switch, and funding governance initiatives alone isn’t enough. Rather, effective governance involves aligning distributed teams across the company, from finance to legal, marketing and IT, around a unified vision and coherent data processes. Active leadership involvement also fosters long-term adoption and helps embed governance practices into the team’s culture.
From an operational perspective, effective data governance involves adopting asset management practices to catalogue key business data and assigning each domain to a clearly defined leader or steward. Data stewards play a crucial role: they are often in charge of developing and enforcing consistent policies, and they take responsibility for enabling their respective teams or departments.
Lastly, it’s important to set realistic, measurable goals and establish metrics to assess the effectiveness of a data governance framework. With data continuing to grow in volume, velocity and complexity, teams need to carefully monitor the suitability of their data governance frameworks over time and ensure they remain fit for purpose as teams and the technologies they use evolve. Stream Governance, which ensures the quality, integrity and discoverability of streaming data, is now a required capability in organisations embracing real-time information.
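In the streaming world, one concrete governance control of this kind is schema compatibility enforcement. A minimal sketch using the confluent-kafka Schema Registry client (the registry URL, subject name and schema are placeholders):

```python
# pip install confluent-kafka
from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

client = SchemaRegistryClient({"url": "http://localhost:8081"})  # placeholder
subject = "orders-value"  # placeholder subject name

order_schema = Schema(
    """
    {
      "type": "record",
      "name": "Order",
      "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "channel", "type": ["null", "string"], "default": null}
      ]
    }
    """,
    schema_type="AVRO",
)

# Reject incompatible schema changes at registration time, not in production.
client.set_compatibility(subject, "BACKWARD")
schema_id = client.register_schema(subject, order_schema)
print(f"registered schema id {schema_id}")
```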
4. How do you see the evolution of data platforms shaping the next generation of AI applications, and what role does open-format data storage play in this transformation?
The next generation of AI, especially agent-based systems, requires more than just data access. It demands real-time, event-driven infrastructure. At Confluent, we see data streaming as the nervous system for AI, with Kafka delivering a continuous flow of events.
Additionally, agents need to process, coordinate, and respond. That’s where stream processing comes in as an orchestration and execution layer for agents. Apache Flink enables agents to run continuously, react to events, make decisions, and collaborate with other agents in real time.
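As a rough illustration of that always-on, event-at-a-time style, here is a minimal PyFlink sketch. The built-in datagen connector stands in for a real Kafka source, and the field names and threshold are illustrative, not a production pattern.

```python
# pip install apache-flink
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Synthetic source: stands in for a real Kafka topic of business events.
t_env.execute_sql("""
    CREATE TABLE events (
        user_id INT,
        amount  DOUBLE
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5',
        'fields.user_id.min' = '1',
        'fields.user_id.max' = '10',
        'fields.amount.min' = '0',
        'fields.amount.max' = '1000'
    )
""")

t_env.execute_sql("""
    CREATE TABLE flagged (
        user_id INT,
        amount  DOUBLE
    ) WITH ('connector' = 'print')
""")

# Runs indefinitely, flagging large amounts the moment they occur
# instead of waiting for a nightly batch job.
t_env.execute_sql(
    "INSERT INTO flagged SELECT user_id, amount FROM events WHERE amount > 900"
).wait()
```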
And to make that data durable and shareable, open-format storage like Apache Iceberg and Delta tables bridges the gap between streaming and analytics, giving agents a consistent, versioned view of the world they operate in.
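On the storage side, a short PyIceberg sketch (the catalog configuration, namespace and table name are placeholders) shows what that consistent, versioned view looks like to a reader:

```python
# pip install pyiceberg
from pyiceberg.catalog import load_catalog

catalog = load_catalog("default")               # resolved from local config
table = catalog.load_table("analytics.events")  # placeholder identifier

# Every scan is pinned to a snapshot, so concurrent writers never give
# the reader a half-updated view of the table.
current = table.scan().to_arrow()
print(current.num_rows)

# Time travel: replay exactly what the table looked like at an earlier snapshot.
history = table.history()
if len(history) > 1:
    older = table.scan(snapshot_id=history[0].snapshot_id).to_arrow()
    print(older.num_rows)
```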
Together, streaming, processing, and open formats form the foundation for scalable, production-grade AI.
5. What strategies can organisations adopt to ensure their data infrastructure is optimised for AI-driven decision-making?
Before an organisation can optimise for AI, the team must first understand a foundational truth: many AI challenges are fundamentally data challenges. If the data isn’t reliable, traceable and in motion, the AI strategy built on it will inevitably be on shaky ground.
One of the biggest hurdles is the divide between operational systems (where data is generated) and analytical systems (where that data is turned into insights). These environments often function in separate silos, making it challenging for teams to marry real-time data with other systems in a meaningful way. Confluent’s partnership with Databricks tackles this very issue. By integrating Confluent’s Tableflow with Databricks’ Unity Catalog, teams can seamlessly govern data across both systems, accelerating the path to AI readiness.
It’s also important to ensure that AI-generated insights can flow back into real-time applications efficiently. By tightly integrating the right technologies and pipelines, organisations can automate decisions in the moment rather than reacting hours or days later.
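A minimal sketch of that closing-the-loop pattern (the topic names and the score_transaction function are hypothetical placeholders for a real model inference call):

```python
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder
    "group.id": "risk-scorer",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["transactions"])  # placeholder topic
producer = Producer({"bootstrap.servers": "localhost:9092"})

def score_transaction(txn):
    # Placeholder for a real model inference call.
    return {"txn_id": txn["id"], "risk": "high" if txn["amount"] > 10_000 else "low"}

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    decision = score_transaction(json.loads(msg.value()))
    # Downstream apps can act on this decision in seconds, not hours.
    producer.produce("transaction-decisions", value=json.dumps(decision))
    producer.poll(0)  # serve delivery callbacks without blocking
```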
Finally, to fully utilise AI-powered decision-making, the data moving in either direction must be explainable, traceable and compliant. Integrating real-time data streaming with strong governance and accessibility controls not only prevents data silos and flawed outputs but also lets companies confidently scale their AI applications while maintaining trust and transparency.
6. What steps should enterprises take to eliminate data silos and enable seamless collaboration between data scientists, engineers, and business teams?
As discussed above, governance has a key role to play in breaking down data silos and ensuring information is accessible, reliable and trustworthy. But to achieve this at the speed businesses need to maintain a competitive edge, organisations must prioritise ‘shifting left’: moving governance and processing closer to the teams that produce the data. This approach underpins effective “data products”, where each team owns the software that generates its data and publishes it as a well-structured, reliable real-time stream that other teams can subscribe to whenever they need it. This is far more efficient than a model where data must be requested ad hoc and reworked into a structured, easy-to-understand format after being pulled from a data lake or warehouse.
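As a rough sketch, ‘shifting left’ can be as simple as the owning team validating events against its published contract before they ever reach the shared log. The field names and validation rules here are illustrative only:

```python
import json
from dataclasses import asdict, dataclass

from confluent_kafka import Producer

@dataclass
class OrderEvent:
    """The data product's published contract (illustrative fields)."""
    order_id: str
    amount: float
    currency: str

def publish(event: OrderEvent, producer: Producer) -> None:
    # Validate at the source, so subscribers never have to clean this up.
    if event.amount < 0 or len(event.currency) != 3:
        raise ValueError("contract violation: rejected before reaching the log")
    producer.produce("orders", key=event.order_id,
                     value=json.dumps(asdict(event)))

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder
publish(OrderEvent("order-123", 42.5, "SGD"), producer)
producer.flush()
```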
Other key steps include ensuring your organisation has a solid data accessibility strategy, a data catalogue, and policies outlining data life-cycle management. Clear guidelines on where data comes from, how it’s stored, and when it should be deleted or archived help keep information accessible and secure throughout its life cycle.
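Life-cycle policies like these can often be encoded directly in infrastructure configuration. A small sketch using the Kafka admin client (the topic name and retention period are illustrative):

```python
# pip install confluent-kafka
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder

topic = NewTopic(
    "clickstream",  # placeholder topic name
    num_partitions=6,
    replication_factor=3,
    config={
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # delete after 7 days
        "cleanup.policy": "delete",
    },
)

for name, future in admin.create_topics([topic]).items():
    future.result()  # raises if creation failed
    print(f"created {name}")
```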
Ultimately, when governance is done well, it makes accessing data easier by creating clear and secure pathways for usage. This balance is essential for organisations that want to democratise data access without risking sensitive information or breaking regulatory rules. Governance must also remain flexible to adapt to changing business needs. For example, teams might need workflows for sensitive data requests or expedited access in urgent situations. The key is to stay agile while keeping core security principles intact.
7. As organisations scale their AI initiatives, what architectural considerations should they prioritise to maintain efficiency and performance?
As businesses look to integrate and scale their AI initiatives, their data streaming architecture and processes must evolve in step. Data streaming adoption is a journey, one that starts with basic awareness and moves up a curve, ending with an event streaming platform acting as the central nervous system of the enterprise. We call this the data streaming maturity curve. For organisations looking to get started, moving small use cases into early production and making the shift to event-centric thinking is essential.
About Andrew Foo
Andrew Foo is the Vice President for Customer Solutions Group in Asia Pacific & Middle East at Confluent, where he drives the customer success, solutions engineering and professional services functions across the region. His team supports customers in ensuring successful adoption and value realisation of Confluent’s data streaming platform.
With over twenty years’ experience in technology, Andrew’s background spans business strategy, consulting services, pre-sales and product management. Prior to Confluent, Andrew held leadership roles at a number of global software companies including Rubrik, Cloudera, Hortonworks, IBM and SAS.
About Confluent
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Our cloud-native offering is the foundational platform for data in motion — designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, our customers can meet the new business imperative of delivering rich, digital customer experiences and real-time business operations. Our mission is to help every organization harness data in motion so they can compete and thrive in the modern world.
