UK Finance acknowledges tension over AI system transparency

Currently, a range of legislation and regulation applies to AI – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework governing its use. The government has said this is a problem because “some AI risks arise across, or in the gaps between, existing regulatory remits”. It has also acknowledged that some businesses have concerns over “conflicting or uncoordinated requirements from regulators [that] create unnecessary burdens”, and over “unmitigated” risks left by regulatory gaps, which could harm public trust in AI and slow adoption of the technology.

To address this, the government issued a white paper proposing to retain the existing sector-by-sector approach to regulation while introducing a cross-sector framework of overarching principles that regulators will have to “interpret and apply to AI within their remits”. The five principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The principles would be issued on a non-statutory basis, but the government has proposed placing regulators under a statutory duty to have due regard to them when exercising their functions.

The government intends to provide “central support” for the new system of regulation, including monitoring and evaluating the framework’s effectiveness and adapting it if necessary; assessing risks arising from AI across the economy; conducting horizon scanning and gap analysis to inform a coherent response to emerging AI technology trends; and establishing a regulatory sandbox for AI to help AI innovators get new technologies to market.

UK Finance said it supports the “sectoral, risk-based approach [to the regulation of AI], based around regulatory guidance”, but it cautioned against an unnecessary “AI overlay” of existing regulation where it “addresses risks adequately”.

UK Finance said: “It is currently unclear whether the government expects: each regulator to produce an ‘AI guidance book’ with AI-specific guidance on each of the principles, or each regulator to reflect on the AI principles and ensure that their rules and guidance adequately cover AI risks. In our view, it should be made clear that [the latter] is an acceptable approach. This would enable authorities to rely on their generic (technology neutral) rules and guidance when these are sufficient, leveraging existing laws to avoid duplication and confusion.”

Citing examples of existing guidance relevant to AI, UK Finance referred to guidance produced by the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA) on matters such as model risk management, fairness and protecting vulnerable customers. It also identified the ‘consumer duty’ and associated guidance recently introduced across UK financial services as relevant to AI.

“It should not be necessary for each regulator to layer on top an additional AI guidance book when the existing tech-neutral guidance and rules adequately address key risks,” UK Finance said. “Similarly, when cross-sectoral guidance covers an AI risk effectively, it should not be expected that individual regulators apply their own layer. This would contradict the outcomes-focused approach, and risks duplication and unnecessary complexity. Of course, where there are specific gaps or points of uncertainty relating to certain types of model, system or use case, it would be logical to fill these.”

In its consultation response, UK Finance also highlighted risks arising from the proliferation of new generative AI systems that are open to the public.

For example, UK Finance said there is a risk that developers of generative AI systems would be “responsible for consumer outcomes” without necessarily being aware of “exactly how the tool is being used by each consumer and what requirements therefore apply”. It further warned that companies may use generative AI systems for purposes the developer did not anticipate, and without any “normal procurement and due diligence process” having been applied. It also cited the risk of generative AI systems being used by “bad actors” to “create misinformation, to generate materials for defrauding consumers, or to defeat legitimate firms’ customer authentication tools”.

UK Finance said generative AI systems that are open to the public therefore “warrant accelerated and focused attention from policy makers and regulators”.

Luke Scanlon of Pinsent Masons, who specialises in technology law and contracts in financial services, said: “It is imperative that senior managers at financial entities upskill for AI to ensure they have the necessary knowledge and understanding of how the technology works to sufficiently satisfy themselves that those systems are being implemented in a manner that complies with relevant law and regulation. There is a particular challenge in this since the development and use of AI will need to be mapped to the fragmented regulatory framework as it currently applies – such as data protection and consumer protection rules, not just financial services rulebooks – and this means senior managers will need regular training and other briefings on the evolving technology and regulatory framework.”
