Good morning.
Risk management around generative A.I. is creating anxiety for some corporate leaders.
What in particular is keeping them up at night? “One thing is the fear of missing out and falling behind,” says Reid Blackman, author of the book Ethical Machines. “A lot of leaders are feeling pressure to figure out, how do we use this new technology to innovate, increase efficiencies—make money or save money? The other concern is, how do we do this safely?”
Blackman is also the founder and CEO of Virtue, a digital ethical risk consultancy. He advises the government of Canada on federal A.I. regulations and corporations on how to implement digital ethical risk programs. We talked about what he’s hearing from corporate leaders, and his new article in Harvard Business Review, titled, “Generative AI-nxiety.”
“The anxiety is justified when you don’t actually have the systems and structure in place to adequately account for these risks,” Blackman tells me. “It’s not like risk management is a new thing to enterprises. This is just another area of risk management that they haven’t done before.”
When ChatGPT was launched by OpenAI on Nov. 30, 2022, generative A.I. was thrust to the forefront. Some companies had already been working with large language models (LLMs), which specialize in processing and generating text, Blackman says. But ChatGPT, and subsequently Microsoft’s Bing and Google’s Bard, made generative A.I. available to everyone within an organization, not just data scientists, he says.
That’s a “double-edged sword,” Blackman explains. “Lots of different people can see different ways of using technology to drive the bottom line.” But in using it without parameters, “anyone could potentially do damage to the brand,” he says. “This makes leaders nervous.”
Still, many companies are moving in the direction of generative A.I. PwC’s latest pulse survey of more than 600 C-suite leaders found that, overall, 59% intend to invest in new technologies in the next 12–18 months. Fifty-two percent of CFOs surveyed plan to prioritize investment in generative A.I. and advanced analytics. But challenges to companies’ ability to transform include achieving measurable value from new tech (88%), the cost of adoption (85%), and training talent (84%).
“What CFOs should be aiming for is how do we create what you would call a responsible A.I. program or an A.I. ethical risk program?” Blackman says. “How do we as an organization, from a governance perspective, manage these risks?”
Creating policies and metrics
Standard A.I. ethics concerns include bias, privacy violations, and black box problems, Blackman says. But in his article, he points to at least four cross-industry risks he says are unique to generative A.I.:
—The hallucination problem: A chatbot generates a response that sounds plausible but is factually incorrect or unrelated to the context.
—The deliberation problem: Generative A.I. does not deliberate or decide. It simply predicts the likelihood of the next word (see the sketch after this list). It may fabricate reasons behind its outputs, which, to an unsuspecting user, look genuine.
—The sleazy salesperson problem: A company could undermine its own trustworthiness if, for example, it develops an LLM sales chatbot that is very good at manipulating people.
—The problem of shared responsibility: Generative A.I. models are built by a small number of companies, so a feasibility analysis is necessary when your company sources and then fine-tunes a generative A.I. model. Part of that analysis should include what the benchmarks are for “sufficiently safe for deployment.”
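To make the deliberation point concrete, here is a minimal sketch of what “predicting next-word likelihood” looks like in practice. It assumes the open-source GPT-2 checkpoint and the Hugging Face transformers and PyTorch libraries stand in for a production model; the prompt and details are illustrative, not a description of any particular vendor’s system.

# A minimal sketch of next-token prediction (assumes the "transformers"
# and "torch" packages and the public "gpt2" checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The company's quarterly revenue will"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Turn the scores at the final position into a probability distribution
# and show the five most likely next tokens. There is no reasoning step
# here, only a ranking of plausible continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob.item():.3f}")

Running this prints a handful of candidate next words with probabilities attached. A chatbot repeats this step over and over to produce a full answer, which is why any “reasons” it offers for its output are generated the same way as the output itself.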
The remedies for the hallucination and deliberation problems are due diligence processes, monitoring, and human intervention, according to Blackman.
“When we build A.I. ethical risk programs, it goes all the way from high-level statements to augmenting governance structures, new policies, procedures, workflows, and then crucially, we have metrics and KPIs to track the rollout compliance and impact of the program,” he says.
What are some examples of KPIs that may be useful in this process? Blackman offers a few:
—How many A.I. models were deployed (for example, in the past quarter) that caused discriminatory outcomes?
—How many A.I. models caused concerns about insider trading?
—How many A.I. models were deployed that caused short-term financial gain but long-term damage to reputation/stakeholders?
One thing companies shouldn’t do is ban generative A.I. use, Blackman says.
“There are probably genuine opportunities to innovate in ways that are meaningful to the company,” he says. And an outright ban assumes that people aren’t really going to use it. “They will,” Blackman says. “It’s best to train people on how to use it safely.”
And potentially ease some anxiety.
Sheryl Estrada
sheryl.estrada@fortune.com
Big deal
Demand for environmental, social, and governance (ESG) data is on the rise, according to a new Bloomberg and Adox Research survey. Ninety-two percent of executives surveyed plan to increase their ESG spending by at least 10%, with 18% planning to increase their spend by 50% or more. The findings are based on a survey of more than 100 portfolio managers, climate risk executives, and data management executives.
The top areas where firms are prioritizing this spend are ESG benchmarks and indices (29%), company-reported data (23%), ESG scores (20%), and sustainable debt (19%), according to the report. Among the criteria most important for selecting an ESG data provider, data quality ranked first, followed by breadth of coverage.
Going deeper
A new working paper by the International Labour Organization provides a global analysis of the potential exposure of occupations and tasks to generative A.I., particularly GPT-4, and the implications for job quantity and quality. The research predicts that the “overwhelming effect of the technology will be to augment occupations, rather than to automate them.”
Leaderboard
Shannon Eisenhardt was named CFO at Reckitt, effective Oct. 17. Jeff Carr, CFO and executive director, will retire as of March 31, 2024. Eisenhardt will also be appointed to the board as an executive director. She will join Reckitt from NIKE, Inc., where she currently serves as CFO of NIKE Consumer, Brand and Marketplace. She also holds leadership responsibility for global business planning, including corporate financial planning and analysis, and the Converse brand. Before taking on her current role, Eisenhardt led finance for NIKE North America and NIKE Emerging Markets. Before joining NIKE, Inc. in 2015, she spent close to two decades at P&G in a range of finance roles at the corporate, country, and regional levels.
Tony Iskander was named interim CFO at Advance Auto Parts, Inc. (NYSE: AAP), an automotive aftermarket parts provider, effective Aug. 18. Iskander succeeds Jeff Shepherd, who departed from Advance, effective Aug. 18. Iskander has more than 25 years of finance and accounting experience and has served as the company’s senior vice president, finance and treasurer since 2020. Before joining Advance in 2017, he spent more than a decade at Hillrom in various finance roles. A search is being initiated, with the assistance of a leading executive search firm, to identify the company’s next CFO. The company will also have a new CEO: Shane O’Kelly was named president and chief executive officer, effective Sept. 11. O’Kelly will succeed Tom Greco, who is planning to retire.
Overheard
“Our economists believe A.I. should make workers more productive, increasing corporate revenues. Alternatively, A.I. adoption could allow some companies to generate the same amount of revenues but with lower labor costs that would increase margins.”
—Goldman Sachs analysts, led by vice president of U.S. equity strategy Ryan Hammond, said they remain bullish on A.I. in a Monday research note, Fortune reported.