Reflecting on our responsible AI program: Three critical elements for progress – Microsoft On the Issues


Last week, at Responsible AI Leadership: Global Summit on Generative AI, co-hosted by the World Economic Forum and AI Commons, I had the opportunity to engage with colleagues from around the world who are thinking deeply and taking action on responsible AI. We gain so much when we come together, discuss our shared values and goals, and collaborate to find the best paths forward.

These and similar recent conversations reminded me how important it is to learn from others and to share what we have learned. Two of the questions I was asked most often were, “How do you do responsible AI at Microsoft?” and “How well placed are you to meet this moment?” Let me answer both.

At Microsoft, responsible AI is the set of steps that we take across the company to ensure that AI systems uphold our AI principles. It is both a practice and a culture. Practice is how we formally operationalize responsible AI across the company, through governance processes, policy requirements, and tools and training to support implementation. Culture is how we empower our employees to not just embrace responsible AI but be active champions of it.

When it comes to walking the walk of responsible AI, there are three key areas that I consider essential:

1. Leadership must be committed and involved: It’s not a cliché to say that for responsible AI to be meaningful, it starts at the top. At Microsoft, our Chairman and CEO Satya Nadella supported the creation of a Responsible AI Council to oversee our efforts across the company. The Council is chaired by Microsoft’s Vice Chair and President, Brad Smith, to whom I report, and our Chief Technology Officer Kevin Scott, who sets the company’s technology vision and oversees our Microsoft Research division. This joint leadership is core to our efforts, sending a clear signal that Microsoft is committed not just to leadership in AI, but leadership in responsible AI.

The Responsible AI Council convenes regularly and brings together representatives of our core research, policy, and engineering teams dedicated to responsible AI, including the Aether Committee and the Office of Responsible AI, as well as senior business partners who are accountable for implementation. I find the meetings to be both challenging and refreshing. Challenging because we’re working on a hard set of problems and progress is not always linear, yet we know we need to confront difficult questions and drive accountability. Refreshing because of the collective energy and wisdom among the members of the Responsible AI Council; we often leave with new ideas to help us advance the state of the art.

2. Build inclusive governance models and actionable guidelines: A primary responsibility of my team in the Office of Responsible AI is building and coordinating the governance structure for the company. Microsoft started work on responsible AI nearly seven years ago, and my office has existed since 2019. In that time, we learned that we needed to create a governance model that was inclusive and encouraged engineers, researchers, and policy practitioners to work shoulder-to-shoulder to uphold our AI principles. A single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives.

We took a page out of our playbooks for privacy, security, and accessibility, and built a governance model that embedded responsible AI across the company. We have senior leaders tasked with spearheading responsible AI within each core business group and we continually train and grow a large network of responsible AI “champions” with a range of skills and roles for more regular, direct engagement. Last year, we publicly released the second version of our Responsible AI Standard, which is our internal playbook for how to build AI systems responsibly. I encourage people to take a look at it and hopefully draw some inspiration for their own organization. I welcome feedback on it, too.

3. Invest in and empower your people: We have invested significantly in responsible AI over the years, with new engineering systems, research-led incubations, and, of course, people. We now have nearly 350 people working on responsible AI, with just over a third of those (129 to be precise) dedicated to it full time; the remainder have responsible AI responsibilities as a core part of their jobs. Our community members hold positions in policy, engineering, research, sales, and other core functions, touching all aspects of our business. This number has grown steadily since we began our responsible AI efforts in 2017, in line with our growing focus on AI.

Moving forward, we know we need to invest even more in our responsible AI ecosystem by hiring new and diverse talent, assigning additional talent to focus on responsible AI full time, and upskilling more people throughout the company. We have leadership commitments to do just that and will share more about our progress in the coming months.

Organizational structures matter to our ability to meet our ambitious goals, and we have made changes over time as our needs have evolved. One change that drew considerable attention recently involved our former Ethics & Society team, whose early work was important to enabling us to get where we are today. Last year, we made two key changes to our responsible AI ecosystem: first, we made critical new investments in the team responsible for our Azure OpenAI Service, which includes cutting-edge technology like GPT-4; and second, we infused some of our user research and design teams with specialist expertise by moving former Ethics & Society team members into those teams. Following those changes, we made the hard decision to wind down the remainder of the Ethics & Society team, which affected seven people. No decision affecting our colleagues is easy, but it was one guided by our experience of the most effective organizational structures to ensure our responsible AI practices are adopted across the company.

A theme that is core to our responsible AI program and its evolution over time is the need to remain humble and learn constantly. Responsible AI is a journey, and it’s one that the entire company is on. And gatherings like last week’s Responsible AI Leadership Summit remind me that our collective work on responsible AI is stronger when we learn and innovate together. We’ll keep playing our part to share what we have learned by publishing documents such as our Responsible AI Standard and our Impact Assessment Template, as well as transparency documents we’ve developed for customers using our Azure OpenAI Service and consumers using products like the new Bing. The AI opportunity ahead is tremendous. It will take ongoing collaboration and open exchanges between governments, academia, civil society, and industry to ground our progress toward the shared goal of AI that is in service of people and society.

