Artificial Intelligence (AI) is rapidly expanding into almost every sector of the economy, including public safety, healthcare, retail, financial services, and transportation. AI governance has therefore become increasingly important and is receiving growing attention.
AI governance is the framework of policies, processes, and legal requirements that ensures AI and machine learning technologies are researched, developed, and deployed ethically and responsibly. Its primary objective is to bridge the ethical and accountability gaps inherent in technological development.
AI has sparked unprecedented innovation in the telecommunications sector. However, realising the full potential of AI-powered solutions requires a strong trust and governance strategy. This is particularly relevant when exploring generative AI, a technology with the potential to change how many jobs are performed.
Guaranteeing safe AI implementation requires satisfying the strict governance needs of auditors, risk managers, compliance officers, and regulators. To ensure trust, telcos and application developers must regularly assess their AI models and decision-making processes.
Evolution of AI Governance: Implementing a Trust and Governance Discipline
Several countries are establishing AI governance frameworks to guarantee the proper application of AI. Telcos must keep up to date on country-specific AI rules to ensure compliance. It can be beneficial to work with local AI governance bodies, data protection agencies, standardisation organisations, auditing and compliance experts, and employee training providers that focus on local and regulatory compliance.
According to the World Economic Forum, protecting society against unexpected consequences of the rapid development of generative AI systems is imperative. The Forum therefore supports responsible AI development and application strategies, which call on all parties involved to standardise language by using precise terminology when referring to the capabilities, limitations, and assessment of generative AI models.
Public and private stakeholders should enhance public understanding by making the terminology related to generative AI models understandable. Developers must discuss truthfulness, human values, and preferences when creating AI models. They should also maintain AI accountability by conducting thorough benchmarking. For a more comprehensive analysis, diverse teams must be formed by adding individuals with varying genders, ethnicities, experiences, and viewpoints.
Policymakers and developers should align on the importance of formal auditing and evaluation frameworks that provide traceability across the complete AI life cycle. They should also emphasise truthfulness, transparency, consistency, and the management of user expectations to increase trust in AI systems. As AI/ML plays a more crucial role, building strong governance and trust within an organisation’s AI/ML team becomes essential. Establishing specialised teams with experts in data science, data engineering, application development, ethics, compliance, and risk management is the first step organisations should take toward responsible AI.
Building a solid trust and governance framework is essential to leveraging AI’s revolutionary potential in the telecom industry, particularly concerning developments like generative AI. Thorough ethical standards for data use, model training, and decision-making must be established, and the rules governing each system must address sensitive issues such as bias, fairness, and discrimination.
Strategic Business Recommendations and Model Framework for AI Governance
An AI governance framework should cover operations management, internal governance structures and procedures, stakeholder interaction and communication, and de-risking strategies for AI across the entire business to minimise large-scale failures. It should also specify the degree of human involvement in AI decision-making.
Building Robust Internal AI Governance Structures
Ensuring thorough oversight of an organisation’s use of artificial intelligence requires strong internal governance structures and procedures. Review boards should be set up to address risks and incorporate ethical considerations. Organisations should define clear elements of their internal governance frameworks, such as distinct roles and duties for the ethical use of AI. Where a centralised strategy proves less than optimal, a decentralised governance system may offer a better solution, embedding ethical considerations into everyday decision-making.
Establishing monitoring and reporting systems, utilising risk management frameworks for risk assessment and management, and defining roles, duties, and training for persons involved in AI governance are important activities. Regular evaluations guarantee the ongoing applicability and efficiency of internal AI governance frameworks.
Navigating Human Involvement in AI
It is recommended that organisations determine the extent of human involvement in the process before implementing AI solutions. At one end of the spectrum is active human oversight, with AI providing recommendations or input to the humans driving the process. In other models, human oversight is supervisory: the AI system operates routinely, but a person can take control in unexpected circumstances, or adjust certain parameters while the algorithm is running. At the far end, there is no human oversight at all, and the AI system has complete control without the option of human override. Examples across this spectrum include product suggestions, GPS navigation systems, and AI-assisted medical diagnosis.
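The spectrum of oversight levels described above can be made concrete in code. The sketch below is illustrative only; the enum names and the `resolve_decision` helper are assumptions introduced for this example, not part of any particular governance standard.

```python
from enum import Enum
from typing import Optional


class OversightLevel(Enum):
    """Degrees of human involvement in an AI-assisted decision."""
    HUMAN_IN_THE_LOOP = 1     # AI recommends; a human makes the final call
    HUMAN_OVER_THE_LOOP = 2   # AI acts; a supervising human can override
    HUMAN_OUT_OF_THE_LOOP = 3 # AI acts autonomously with no override


def resolve_decision(level: OversightLevel, ai_output: str,
                     human_input: Optional[str] = None) -> str:
    """Return the effective decision for a given oversight level."""
    if level is OversightLevel.HUMAN_IN_THE_LOOP:
        # The AI output is only a recommendation; a human must decide.
        if human_input is None:
            raise ValueError("human decision required at this oversight level")
        return human_input
    if level is OversightLevel.HUMAN_OVER_THE_LOOP:
        # A supervising human may step in; otherwise the AI output stands.
        return human_input if human_input is not None else ai_output
    # HUMAN_OUT_OF_THE_LOOP: the AI decision is final.
    return ai_output
```

Classifying each AI use case against an explicit scale like this makes the chosen degree of human involvement auditable rather than implicit.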
Data Accountability and Bias Mitigation
Organisations using AI algorithms must iterate through model development until they get the best outcomes for their use case. Data and algorithms/models must interact, and datasets from various sources, both private and public, are essential for the effectiveness of the AI solution.
It is crucial to follow data accountability procedures, which include minimising inherent bias, guaranteeing data quality, and understanding data lineage. Companies need to know the origin of their data and address the factors affecting its quality. Using varied datasets, recognising biases within them, and keeping distinct datasets for training, testing, and validation are all important steps in minimising inherent bias. For accuracy, quality, and dependability, datasets should be reviewed and updated regularly. Good data accountability standards still apply even when AI models are trained on non-personal data.
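The practice of keeping distinct training, validation, and test datasets, paired with a simple bias check on each subset, can be sketched as follows. This is a minimal standard-library example; the split ratios, record shape, and function names are illustrative assumptions.

```python
import random


def split_dataset(records, train=0.7, val=0.15, seed=42):
    """Shuffle records and return disjoint train/validation/test subsets."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed for reproducibility
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])


def label_proportions(records, label_key="label"):
    """Simple bias check: share of each label within a subset."""
    counts = {}
    for r in records:
        counts[r[label_key]] = counts.get(r[label_key], 0) + 1
    total = len(records) or 1
    return {label: c / total for label, c in counts.items()}
```

Comparing `label_proportions` across the three subsets (and against the full dataset) flags splits whose class mix has drifted, one concrete signal of the inherent bias the paragraph above warns about.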
Stakeholder Communication in AI Governance
Clear end-to-end communication with various stakeholders, including developers, executives, regulators, users of internal business applications, and external AI tool customers, is essential to effective AI governance. This requires providing concise and accessible AI documentation, addressing model biases and gaps, and specifying suitable use cases. To help users understand and interact with AI systems, transparent communication, easy-to-use interfaces, opt-out mechanisms, and feedback channels are essential. It is also critical to stay current with the latest developments and to explore collaborations that enhance AI for optimal governance.
Shared Human Values
One thing that will never change as we traverse the AI revolution is how crucial it is to uphold our shared human values. Everything comes down to the fundamental principle of respecting moral principles like honesty and integrity. It’s critical to ensure that these principles guide our actions in AI and that we find common ground amidst different perspectives and interests.
International harmonisation of standards and regulatory frameworks is of utmost importance. Enforcing laws without considering their global implications will leave enterprises and investors facing a fragmented and unpredictable landscape. International businesses looking to enter a market should be required to follow that market’s AI regulations to maintain trust. Collaboration is equally essential when discussing AI development ethics, regulating AI technology, or distributing AI benefits fairly. By cooperating across borders and boundaries, we can create a future where AI benefits humanity morally and responsibly.
Sanjay Kottaram is the Chief Product and Technology Architect of Tecnotree. With a robust background in innovation, technology, and product leadership, Sanjay is a former IBM employee and Chief Architect for IBM Watson Labs. His passion for AI technology stems from its potential to benefit individuals, organizations, and society. However, he also recognizes the importance of addressing the risks associated with AI for both businesses and societal structures. Tecnotree is transforming AI’s role: from a complex, obscure algorithm to a reliable, clear, and trustworthy technology that enriches human experiences and improves quality of life. Sanjay excels in integrating this strategic vision with technical product architecture and is committed to challenging conventional thinking to deliver groundbreaking products to the market.