OPINION: From principles to laws: Consumer protection can galvanize AI regulation

August 3 – By Azish Filabi

NEW YORK (Thomson Reuters Regulatory Intelligence) – U.S. Senator Chuck Schumer recently presented a framework for how the Senate should approach AI regulation. The SAFE Innovation Framework outlines Schumer’s five central policy objectives (LINK: https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf). It has been criticized as light on details and as kicking the can down the road, with a “listening tour” planned for the fall.

In recent years, numerous principles, handbooks, codes, and guides have been released on how governments and organizations can tackle widespread AI risks. The online resource AI Ethicist maintains an impressive list of more than 70 sets of AI principles from governmental, non-profit, and industry organizations (LINK: https://www.aiethicist.org/ai-principles). It includes high-profile institutions such as the United Nations, the Organization for Economic Co-operation and Development, the UK government, and the U.S. Department of Defense.

The White House published its Blueprint for an AI Bill of Rights in October 2022, as a “guide for a society that protects all people from [AI] threats.” (LINK: https://www.whitehouse.gov/ostp/ai-bill-of-rights/)

It was accompanied by a “handbook…for anyone seeking to incorporate these protections into policy and practice.” Leaders from seven companies have agreed to abide by voluntary guardrails on their AI outputs, including watermarks on AI-generated content, U.S. President Joe Biden announced on July 21.

The public-interest call for AI regulation is loud and clear, yet governments continue to focus on principles rather than laws. The question is what it would take to move from those principles to concrete regulatory action on AI.

AI presents a unique set of hurdles. Overcoming the hurdles outlined below can help lawmakers take the necessary steps toward consumer protections suited to a new era, even as trust in government reaches historically low levels. Government is the least trusted sector of society, according to research from the American College Maguire Center for Ethics (https://insights.theamericancollege.edu/ethic-trust-study-2022/). Focusing on consumers as a primary area of concern is a good place to start.

AI IS HERE AND EVERYWHERE

AI use is already widespread. Consumers experience it in services where they would expect AI, such as social media, chatbots, travel, credit cards, and internet television, but also in unexpected areas, including policing, public benefits, and public and private housing, among others.

AI has implications for almost every area of law and policy, from telecommunications to criminal law, copyright law, civil rights, privacy, financial regulation, and the list goes on. It is the ultimate interdisciplinary research topic. A “Team Science” approach is needed – lawyers working with computer scientists, ethicists, political scientists, and engineers (LINK: https://www.nationalacademies.org/our-work/the-science-of-team-science).

The ubiquity of AI means its impacts are embedded in numerous areas of consumer interaction, alongside the risks that arise from developing the technology itself. This tension came to light during Senate Judiciary Committee hearings in May.

Sam Altman of OpenAI called for a new centralized agency that would license algorithms and ensure compliance with a set of standards. Christina Montgomery of IBM, on the other hand, presented the company’s view that “precision regulation” is needed because different use cases carry different risks that should be triaged by potential impact. For instance, the use of AI in policing and creditworthiness decisions is more concerning and thus warrants stricter regulation.

REGULATORY LOCUS

AI, like other industries, relies on a supply chain of inputs, including data-harvesting services, hardware suppliers, software developers, and AI engineers. If something goes wrong, it is reasonable to ask who is accountable.

Consider an example involving an algorithm whose unexpected behavior causes harm to consumers. On one hand, a regulator could take enforcement action against the algorithm’s developers, since they led each step of the decision-making process. On the other hand, it could target the financial company that used the algorithm to interact with consumers, since the firm is ultimately accountable for the technology’s performance.

Algorithm developers should be accountable because the systems they create are not value-neutral, and the consequences of their activities should therefore remain with the creator of the system, Professor Kirsten Martin wrote (https://link.springer.com/article/10.1007/s10551-018-3921-3).

In financial services regulation, the buyer of the technology – the financial company that ultimately provides services to the end user – should be the target of regulation (LINK: https://content.naic.org/sites/default/files/JIR-ZA-40-08-EL.pdf). It is already accountable to regulators for its activities, including those facilitated by AI, and is responsible to consumers for the decisions made in its products and services.

Another locus of regulation could be the auditors of algorithms. There have been consistent calls for algorithm audits that adhere to standards, such as those developed by the National Institute of Standards and Technology. Developing a system to license independent auditors, rather than AI developers, could provide public benefit while encouraging innovation, Professor Anjana Susarla has argued (https://theconversation.com/how-can-congress-regulate-ai-erect-guardrails-ensure-accountability-and-address-monopolistic-power-205900).

Yet the audit approach is challenging, because algorithms are often deemed proprietary by their developers. California has attempted to address this in the property and casualty insurance industry, for example, by requiring that the rules governing underwriting decisions be available for public inspection.

The question is whether such an approach can be broadened to other, less-regulated domains of business. Extending this model of transparency to other areas will require coordinated, agency-by-agency work.

OPACITY

Transparency is a consistent theme among ethical AI principles and frameworks.

Deep-learning AI systems are often described as making decisions in a “black box.” That is, their powerful capabilities derive from their ability to correlate numerous sources of big data to generate outputs in response to prompts and queries. For example, deep-learning neural networks can review x-ray images to determine the likelihood of disease.

But technologists are still puzzling over how these systems can better explain the rationale behind their decision processes. This is called the “explainability” challenge of AI, and it is a growing area of computer science. Technical solutions are nascent and still in the testing and development stages (https://arxiv.org/abs/2212.08073).

Transparency at other stages of the technology could also be mandated. Professor Gary Marcus, one of the presenters at the May 2023 Senate Judiciary Committee hearings on AI, proposed that developers be required to disclose the data inputs used to train their algorithms. Training data is integral to a model’s performance and is typically kept confidential.

This approach to transparency is also part of the proposed European AI Act, which the European Parliament aims to finalize before the end of 2023.

Transparency of data inputs could have a powerful public benefit by enabling governments and users to analyze the assumptions and biases embedded in a system. Some technologists reportedly believe, however, that such disclosure is unworkable and likely to invite more copyright challenges.

Opacity is not only a technological problem; it is also a skill-set challenge. Lawmakers and regulators need to climb the learning curve on this topic. The need for continuing education may be one reason for the continued emphasis on frameworks, listening tours, and industry self-regulation.

CLOSING THOUGHTS

It is time to evolve from generic frameworks to precise solutions. Senators Elizabeth Warren and Lindsey Graham recently announced their plan to introduce the Digital Consumer Protection Commission Act, which would create a new regulatory agency to license and police tech companies (LINK: https://www.nytimes.com/2023/07/27/opinion/lindsey-graham-elizabeth-warren-big-tech-regulation.html?searchResultPosition=1). The Act would regulate AI and other tech-company business practices, including differential product pricing.

Several agencies already regulate the effects of AI in their domains. Existing rules do govern this technology, even if the methods of doing business are new, Lina Khan, chair of the Federal Trade Commission (FTC), wrote (LINK: https://www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html?te=1&nl=dealbook&emc=edit_dk_20230516). The FTC has initiated an investigation into ChatGPT relating to consumer harms from potentially unfair or deceptive practices (LINK: https://www.wsj.com/articles/chatgpt-under-investigation-by-ftc-21e4b3ef?page=1).

A joint statement by the FTC, the Consumer Financial Protection Bureau, the Department of Justice civil rights division, and the Equal Employment Opportunity Commission further emphasizes that automated systems cannot be used to circumvent existing anti-discrimination rules (LINK: https://www.consumerfinance.gov/about-us/newsroom/cfpb-federal-partners-confirm-automated-systems-advanced-technology-not-an-excuse-for-lawbreaking-behavior/). The statement highlights that existing regulations governing equal access to credit, anti-fraud protections, and other areas of financial regulation will be enforced, even if that task is now more complex because of AI.

These efforts do not supplant the need for comprehensive laws, yet they highlight the immediacy of the demand for action.

(Azish Filabi is the executive director of The Maguire Center for Ethics and an associate professor at The American College of Financial Services in New York City. She is a lawyer by training and has worked in multiple sectors throughout her career, including regulatory/government, the private sector, and academia.)

*To read more by the Thomson Reuters Regulatory Intelligence team click here: http://bit.ly/TR-RegIntel

(This article was produced by Thomson Reuters Regulatory Intelligence – http://bit.ly/TR-RegIntel – and initially posted on July 31. Regulatory Intelligence provides a single source for regulatory news, analysis, rules and developments, with global coverage of more than 400 regulators and exchanges. Follow Regulatory Intelligence compliance news on Twitter: @thomsonreuters)
