1. As children spend more time in online gaming environments, what new risks have emerged that traditional moderation methods can’t keep up with?
Traditional moderation tools typically rely on keyword filters or user reports, which means harmful behavior often goes unnoticed until after the damage has been done. The reality is that risk in online games doesn’t always come from a single bad word; it lies in the context of the conversation. Grooming can start with a friendly question that seems harmless, and scams can hide behind what looks like a fair trade or a free in-game offer.
At Kidas, we go beyond keyword filtering to understand the full context of what’s happening. Our ProtectMe software monitors voice and text chats in real time, analyzing what is said while factoring in the in-game situation and even what’s appropriate for a child’s age. That enables us to flag threats such as grooming, scams and exposure to explicit content as they happen, giving parents a chance to intervene faster.
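To make the contrast with keyword filtering concrete, here is a deliberately simplified sketch in Python of what context-aware scoring can look like. It is an illustration only: the class, function, and signal names are hypothetical and do not reflect Kidas’s actual models or code. The point it shows is that the same message reads very differently once the classifier also sees the in-game situation and the player’s age.

```python
# Hypothetical illustration only -- not Kidas's actual model or API.
# Shows how a context-aware check differs from a plain keyword filter:
# the message is scored together with the in-game situation and the
# player's age, not in isolation.

from dataclasses import dataclass, field

@dataclass
class ChatContext:
    player_age: int
    in_game_event: str                 # e.g. "trade_offer", "match_start"
    history: list[str] = field(default_factory=list)

def risk_score(message: str, ctx: ChatContext) -> float:
    """Toy scoring function: a stand-in for a trained classifier."""
    score = 0.0
    text = message.lower()
    # A "free" offer is routine banter in many lobbies, but paired with
    # a trade prompt aimed at a young player it becomes riskier.
    if "free" in text and ctx.in_game_event == "trade_offer":
        score += 0.5
    # Requests to keep things secret or move off-platform are a common grooming signal.
    if any(kw in text for kw in ("just between us", "don't tell", "add me on snap")):
        score += 0.4
    # Younger players get a lower tolerance before an alert fires.
    if ctx.player_age < 13:
        score += 0.2
    return min(score, 1.0)

ctx = ChatContext(player_age=11, in_game_event="trade_offer")
msg = "I'll give you a free skin, just between us"
if risk_score(msg, ctx) >= 0.7:
    print("ALERT: flag conversation for parent review")
```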
2. Can you explain how AI is changing the approach to child online safety, from responding after harm occurs to detecting threats in real time?
Until recently, most online safety tools worked after the fact, meaning something harmful had to happen before anyone knew to step in. AI allows us to change that model by analyzing live interactions and spotting when something is becoming unsafe.
With ProtectMe, alerts can be sent while the child is still in the game, giving parents the opportunity to react quickly. This can shorten the time between risk and response, which makes a difference in keeping kids safe.
3. Beyond harmful content, how does your technology distinguish between normal playful banter and interactions that could be dangerous?
Our AI is trained to recognize the difference between playful in-game chatter and interactions that signal real risk. It factors in what’s happening in the game, the flow of the conversation and the tone being used. That way, a comment that makes sense in the context of the match isn’t treated as a genuine threat.
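As a rough illustration of that distinction, the toy function below treats the exact same words differently depending on whether they fit the flow of an active match. It is a hypothetical sketch, not the production logic, and the function name and signals are invented for this example.

```python
# Hypothetical sketch, not the production system: the same phrase is
# scored differently depending on whether it fits the flow of the match.

def is_threatening(message: str, in_active_match: bool, directed_at_minor: bool) -> bool:
    """Toy contextual check: trash talk inside a match is treated as banter;
    identical wording outside that context is escalated."""
    hostile = "destroy you" in message.lower() or "you're dead" in message.lower()
    if not hostile:
        return False
    # Inside a competitive match, hostile wording is usually game banter.
    if in_active_match:
        return False
    # Outside of gameplay, especially toward a minor, the same words are escalated.
    return directed_at_minor

print(is_threatening("I'm going to destroy you!", in_active_match=True, directed_at_minor=True))   # False: banter
print(is_threatening("I'm going to destroy you!", in_active_match=False, directed_at_minor=True))  # True: escalate
```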
4. How do you see parents partnering with technology to stay informed without crossing into invasive monitoring of their children’s digital lives?
Technology should help parents feel confident, not make kids feel like they’re being watched. ProtectMe focuses on alerting parents when there’s a meaningful risk and provides context so they know what kind of issue needs attention.
We also encourage families to talk openly about why the software is in place and how it helps keep gaming safe. When kids understand that the goal is protection, not punishment, it builds trust and makes it easier for them to come forward if something feels wrong. The end result is that children can enjoy gaming freely, while parents stay informed enough to step in when it truly matters.
5. How do you measure the effectiveness of your platform in reducing harm and building trust among parents and players?
We measure our impact by how many people we’re helping stay safe and how quickly we can surface serious risks. So far, our technology has analyzed more than 87 million conversations and safeguarded nearly one million users across both ProtectMe and ProtectMe Bot.
On the trust side, we listen closely to parents, coaches and server managers. The feedback we receive confirms that our alerts help them address problems early and feel more comfortable letting kids and players stay engaged in their communities.
6. What does the future look like in the next 3–5 years as gaming platforms, regulators, and AI technologies evolve?
Over the next few years, we expect safety to move from being an afterthought to being built into gaming platforms from the ground up. Regulators are already taking steps to hold tech companies more accountable, and we believe the gaming industry will face similar expectations for transparency and proactive protection — much like what’s happening in social media today.
AI will continue to get better at understanding context across voice and text, which means fewer false alarms and faster, more accurate alerts. But technology can’t solve this alone. The future has to include real collaboration between safety tech providers, game developers and policymakers to create standards that protect players everywhere.
- About Kidas
Kidas is a technology company developing AI-powered tools to protect consumers and end users from scams, fraud and other digital threats. Its white-label product suite provides protection across SMS, email, voice and chat, helping partners detect and block scams in real time, including advanced threats such as deepfake content.
Kidas also offers ProtectMe, an AI-driven safety tool for children’s PC games that monitors in-game voice, text and screen activity for predatory actions and privacy concerns, and ProtectMe Bot, the first and only Discord bot with voice and chat moderation to reduce toxicity and support safer server environments. Kidas was named a winner of the 2025 AI Breakthrough Awards for “Best Overall Use of AI in Gaming,” recognizing its innovative approach to creating safer digital environments. For more information, visit www.getkidas.com.

Techedge AI is a niche publication dedicated to keeping its audience at the forefront of the rapidly evolving AI technology landscape. With a sharp focus on emerging trends, groundbreaking innovations, and expert insights, we cover everything from C-suite interviews and industry news to in-depth articles, podcasts, press releases, and guest posts. Join us as we explore the AI technologies shaping tomorrow’s world.