By DEREK SMITH
Artificial Intelligence (AI) has transformed industries by improving decision-making, streamlining operations and enhancing customer experiences. The risks associated with AI, however, are also substantial. The technology's prominence is evident at international professional conferences such as the ACAMS Assembly in Las Vegas, where AI takes centre stage with 28 AI exhibitors and four AI-themed professional sessions.
The ACAMS Assembly in Las Vegas is a leading anti-financial crime (AFC) and anti-money laundering (AML) conference that this columnist is currently attending while writing this article. Companies operating in, or doing substantial business with, the European Union (EU) must prepare for strict extra-territorial compliance standards under the EU's AI Act. The Harvard Business Review noted: "Put simply, the Act is akin to Europe's General Data Protection Regulation (GDPR), passed in 2016, but for artificial intelligence."
This article will provide insight into the steps various stakeholders must take to ensure robust AI governance in anticipation of imminent legislative roll-outs, referencing the EU’s AI Act for context. Boards of directors and management executives must act now to implement governance best practices that protect their companies from regulatory and reputational harm.
Governance inclusive of AI risks
As Europe's most significant artificial intelligence regulation, the AI Act imposes penalties of up to 35m euros or 7 percent of a company's global annual turnover for the most serious violations. These penalties echo the punitive measures of the GDPR, demonstrating the EU's commitment to holding companies accountable for AI misuse.
Fines are not the only concern. Reputational damage caused by unethical AI practices could have lasting effects on any company. The real challenge is aligning AI strategies with broader ethical responsibilities rather than merely achieving compliance. The first step is a comprehensive gap analysis of current governance frameworks: mitigating AI-related risks effectively requires an assessment of existing structures, policies, workflows and technologies.
The Board’s responsibility: Asking the right questions
AI governance is more than a technical issue. It is a strategic one that requires the attention of those at the top of a company. It is the responsibility of Boards to ensure that their companies are prepared for the operational and ethical challenges AI presents, even if they currently feel unqualified to engage deeply in AI issues. A critical mistake would be to dismiss these issues as "too technical".
To fulfill their oversight role, Board members should ask the following questions:
* Who within management is responsible for AI compliance and risk management?
* Are training programmes in place to help employees identify ethical or regulatory AI risks?
* What metrics will track compliance, ethical practices and the success of AI initiatives?
In addition, Boards must ensure that AI models are regularly reviewed and adapted to changing risks. It is important to note that the absence of ethical breaches does not guarantee a company's safety, especially since new AI technologies or partnerships can introduce unknown risks. Boards must stay vigilant, ensuring their companies are not simply compliant but also ethically sound.
The executive suite’s role: Operationalising AI governance
While Boards provide strategic oversight, executive management is responsible for executing AI governance. It is important to begin with a gap analysis, which identifies areas where the company’s existing risk management structures are inadequate. To build an effective AI governance framework, cross-functional collaboration is essential between IT, legal, data science and risk management departments.
Although this article emphasises the importance of people and processes, technology is more than an afterthought: identifying AI-related risks requires it. Automation platforms, AI auditing tools and data analysis systems can provide ongoing oversight and ensure that AI governance processes are scalable. In short, for AI governance to be effective, people, processes and technology must all be balanced.
It is also imperative that management establish clear key performance indicators (KPIs) and objectives for measuring the effectiveness of AI governance. These metrics should not just focus on compliance but also assess the broader ethical and operational impacts of AI deployments. It is crucial to assign a single executive to oversee AI governance, whether that is a chief risk officer or a newly appointed chief AI ethics officer, to ensure accountability and avoid conflicts of interest.
The managerial imperative: Implementing ethical AI
Managers play an integral part in AI governance by embedding it into day-to-day operations. Because the EU AI Act was not written by operations experts, much of the responsibility for operationalising compliance rests with the companies themselves.
Throughout the AI lifecycle, managers should remain vigilant for any changes in AI risk. An AI model designed for one purpose may be put to a use it was never intended for, raising new ethical concerns. Continuous monitoring and reassessment of AI systems are essential to prevent such risks.
In short, as AI continues to shape industries, governance structures must evolve to ensure compliance and ethical responsibility. Companies must act now to assess their AI governance frameworks, ensuring they are prepared not only for regulatory compliance but also for the ethical challenges AI poses. Boards and management executives should prioritise comprehensive AI governance strategies to protect their companies' reputations and bottom lines. Ethical AI is not just a regulatory requirement; it is a business imperative.
• NB: About Derek Smith Jr
Derek Smith Jr has been a governance, risk and compliance professional for more than 20 years, with a record of leadership, innovation and mentorship. He is the author of 'The Compliance Blueprint'. Mr Smith is a Certified Anti-Money Laundering Specialist (CAMS) and the Assistant Vice President, Compliance and MLRO, for CG Atlantic's family of companies (member of Coralisle Group Ltd) for The Bahamas, St Vincent & The Grenadines, St Lucia and Curaçao.