Establishing AI governance best practice guidelines for your business
Colin Redbond, SVP Product Strategy, SS&C Blue Prism
AI and intelligent automation (IA) have become part of daily life. With businesses now deploying chatbots to answer queries, interact with customers, schedule appointments and improve the overall customer experience, governance of AI deployments is required to confirm that key ethical, legal and societal implications have been considered before deployment.
The EU’s AI Act, the world’s first comprehensive law to safeguard user rights against AI, has set initial guidelines for ethically regulating the ever-evolving work of AI application developers.
As many countries throughout the world, and within the EU, begin to set groundbreaking AI legislation, there is an opportunity for businesses to get one step ahead by preparing their own roadmap for AI governance success.
As a first step towards proactive AI governance, there are a number of measures that can be taken to assess business workflows and identify where AI technology should be used, as well as the potential business risks.
Time is of the essence
AI governance will soon impact everything from digital manufacturing automation to customer chatbots and apps that mimic the back-office tasks of human workers. Central and regional government offices, and the legal and healthcare industries using AI to extract data, fill in forms or move files, will also need to comply. Rules engine APIs, microservices and low-code apps are also affected.
So if your business uses robotic process automation, basic process improvement and macros for workflow management, intelligent character recognition that converts handwriting into computer-readable text, or deep-level AI and machine learning, you need to comply.
Transparency and authenticity are also hugely important to the way consumers view and interact with their brands, especially Gen Z customers. Making up 32% of the global population and with a spending power of $44bn, Gen Z have high expectations of their brands and will only support and work for those that share their values.
Aspects of automation will also be covered by future AI legislation, so companies need to closely examine how they use intelligent automation execution, and ensure teams meet regulatory needs as they continuously discover, improve, and experiment with automated tasks/processes, BPM data analysis, enhanced automations, and business-driven automations.
Intelligent automation is an ideal vehicle for AI because it can create an auditable digital trail across everything in a business. Its ability to increase efficiency across workflows is well known, and having full, auditable insight into actions and decisions is a superpower in itself.
Creating an AI governance roadmap
As always, whether it’s data retention or how a business application uses AI, safeguards are required across the AI lifecycle, including record keeping that documents the processes in which AI is used, to ensure transparency.
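To make that record keeping concrete, the sketch below shows one way an automated process might log each AI-assisted decision to an append-only audit file. It is a minimal illustration in Python; the field names, file layout and the log_decision helper are assumptions made for the example, not a prescribed schema or product feature.

```python
# Minimal sketch of lifecycle record keeping for AI-assisted decisions.
# The schema and file layout are illustrative assumptions, not a standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    process_name: str      # business process the AI supports
    model_id: str          # model name and version used
    input_summary: str     # non-sensitive summary of the input
    decision: str          # output or action taken
    human_reviewed: bool   # whether a person validated the outcome
    timestamp: str         # when the decision was made (UTC)

def log_decision(record: AIDecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append one decision record to an append-only JSON Lines audit file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example: an invoice-triage automation records its decision.
log_decision(AIDecisionRecord(
    process_name="invoice_triage",
    model_id="classifier-v1.3",
    input_summary="invoice #A-1029, supplier category: utilities",
    decision="routed to accounts payable queue",
    human_reviewed=False,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

A trail like this, kept alongside the process documentation itself, is what makes later audits and regulatory reviews straightforward.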
By having a robust AI governance framework in place, organisations can instil accountability, responsibility, and oversight throughout the AI development and deployment process. This, in turn, fosters ethical and transparent AI practices, enhancing trust among users, customers, and the public.

When it comes to governance, everyone should be responsible. It starts with ensuring internal guidelines for regulatory compliance, security, and adherence to your organisation’s values. There are a few ways to establish and maintain an AI governance model:
Top-down: Effective governance requires executive sponsorship to improve data quality, security, and management. Business leaders should be accountable for AI governance and for assigning responsibility, and an audit committee should oversee data control. You may also want to appoint a chief data officer with expertise in technology who can ensure governance and data quality.
Bottom-up: Individual teams can take responsibility for the data security, modelling and tasks they manage to ensure standardisation, which in turn enables scalability.
Modelling: An effective governance model should utilise continuous monitoring and updating to ensure performance meets the organisation’s overall goals. Access to the model should be granted with security as the utmost priority.
Transparency: Tracking your AI’s performance is equally important, as it ensures transparency to stakeholders and customers, and is an essential part of risk management. This can involve people from across the business.
Guidelines for an AI governance framework
Those disregarding AI governance run the risk of data leakage, fraud, and breaches of privacy laws, so any organisation utilising AI will be expected to maintain transparency, compliance, and standardisation throughout its processes – a challenge while technical standards are still in the making.
The field of AI ethics and governance is still evolving, and various stakeholders, including governments, companies, academia, and civil society, continue to work together to establish guidelines and frameworks for responsible AI development and deployment. There are several real-world examples of AI governance that, while they differ in approach and scope, all address the ethical, legal, and societal implications of artificial intelligence. A few notable examples are outlined here:
The EU’s GDPR, while not exclusively focused on AI, includes data protection and privacy provisions related to AI systems.
Additionally, the Partnership on AI and the Montreal Declaration for Responsible AI – developed at the International Joint Conference on Artificial Intelligence – both focus on research, best practices, and open dialogue in AI development.
Many tech companies have developed their own AI ethics guidelines and principles. For instance, Google’s AI Principles outline its commitment to developing AI for social good, avoiding harm, and ensuring fairness and accountability. Other companies like Microsoft, IBM, and Amazon have also released similar guidelines.
14 steps to ensuring complete AI governance
Ensuring AI governance in your organisation involves establishing processes, policies, and practices that promote the responsible development, deployment, and use of artificial intelligence. At the very least, government departments and companies using AI will be required to include AI risk and bias checks as part of regular mandatory system audits. In addition to data security and forecasting, there are several strategic approaches organisations can employ when establishing AI governance.
Development guidelines: Establish a regulatory regime and best practices for developing your AI models. Define acceptable data sources, training methodologies, feature engineering and model evaluation techniques. Start with governance in theory and establish your own guidelines based on predictions, potential risks and benefits, and use cases.
Data management: Ensure that the data used to train and fine-tune AI models is accurate and compliant with privacy and regulatory requirements.
Bias mitigation: Incorporate ways to identify and address bias in AI models to ensure fair and equitable outcomes across different demographic groups; a minimal check of this kind is sketched after this list.
Transparency: Require AI models to provide explanations for their decisions, especially in highly regulated priority sectors such as healthcare, finance and legal systems.
Model validation and testing: Conduct thorough validation and testing of AI models to ensure they perform as intended and meet predefined quality benchmarks.
Monitoring: Continuously monitor the performance metrics of deployed AI models and update them to adapt to changing needs and safety regulations. Given the newness of generative AI, it’s important to maintain a human-in-the-loop approach, incorporating human oversight to validate AI quality and performance outputs; a simplified monitoring sketch also follows this list.
Version control: Keep track of the different versions of your AI models, along with their associated training data, configurations, and performance metrics so you can reproduce or scale them as needed.
Risk management: Implement security practices to protect AI models from cybersecurity attacks, data breaches and other security risks.
Documentation: Maintain detailed documentation of the entire AI model lifecycle, including data sources, training and testing procedures, hyperparameters and evaluation metrics.
Training and awareness: Provide training to employees about AI ethics, responsible AI practices, and the potential societal impacts of AI technologies. Raise awareness about the importance of AI governance across the organisation.
Governance board: Establish a governance board or committee responsible for overseeing AI model development, deployment and compliance with established guidelines that fit your business goals. Crucially, involve all levels of the workforce — from leadership to employees working with AI — to ensure comprehensive and inclusive input.
Regular auditing: Conduct audits to assess AI model performance, algorithm regulation compliance and ethical adherence.
User feedback: Provide mechanisms for users and stakeholders to provide feedback on AI model behaviour and establish accountability measures in case of model errors or negative impacts.
Continuous improvement: Incorporate lessons learned from deploying AI models into the governance process to continuously improve the development and deployment practices.
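As a concrete illustration of the bias mitigation step above, the short Python sketch below computes selection rates per demographic group and a disparate impact ratio. The group labels, sample data and the 0.8 "four-fifths" threshold are illustrative assumptions; real bias audits will depend on your models, your data and the regulation that applies to you.

```python
# Minimal sketch of a demographic-parity check across groups.
# Group names, sample data and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(groups, outcomes):
    """Return the positive-outcome rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in zip(groups, outcomes):
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Example: model decisions (1 = approved) recorded alongside a protected attribute.
groups   = ["A", "A", "A", "B", "B", "B", "B", "A"]
outcomes = [ 1,   1,   0,   1,   0,   0,   0,   1 ]

rates = selection_rates(groups, outcomes)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)
if ratio < 0.8:  # the "four-fifths rule" is a common, but not universal, benchmark
    print("Potential bias detected: review training data and model features.")
```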
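Similarly, the monitoring step with a human in the loop can be approximated by comparing live performance against a validated baseline and escalating when it degrades. The sketch below is a simplified, assumed workflow; the baseline figure, tolerance and route_for_human_review hook are placeholders for whatever review process your organisation actually uses.

```python
# Minimal sketch of post-deployment monitoring with a human-in-the-loop trigger.
# The baseline accuracy, tolerance and review hook are illustrative assumptions.

def check_model_health(recent_correct: int, recent_total: int,
                       baseline_accuracy: float, tolerance: float = 0.05) -> bool:
    """Return True if recent accuracy has dropped more than `tolerance`
    below the validated baseline, signalling that human review is needed."""
    recent_accuracy = recent_correct / recent_total
    return (baseline_accuracy - recent_accuracy) > tolerance

def route_for_human_review(model_id: str) -> None:
    # Placeholder: in practice this might raise a ticket or pause the automation.
    print(f"{model_id}: accuracy degradation detected, escalating to a human reviewer.")

# Example: 412 correct predictions out of 500 this week vs. a 0.90 validated baseline.
if check_model_health(recent_correct=412, recent_total=500, baseline_accuracy=0.90):
    route_for_human_review("classifier-v1.3")
```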
The future of AI governance
AI governance is an ongoing, long-term process, and this branch of legislation will continue to evolve and adapt alongside the technology. It therefore requires a solid commitment from leadership and alignment with organisational values. Well-planned AI governance, combined with a willingness to adapt to changes in technology and society, is essential in today’s technological evolution. A thoroughly planned governance roadmap will ensure your business fully comprehends the legal requirements for adopting intelligent automation and machine learning technologies.
Deploying safety regulations and governance policies is vital to keeping data secure, accurate and compliant. If businesses follow these steps, they can ensure they develop and integrate AI responsibly and ethically.