Before you read the rest of this post, try this experiment: open up your favourite AI-powered image-generating tool and type in the following prompt, or something like it.
“The newly announced board of directors at a multi-billion-dollar technology company poses for a photo.”
You’re likely looking at a group of people – anywhere from a handful to a few dozen – seated around a boardroom table, smiling vacantly at the camera. They’re probably all well-dressed, roughly middle-aged, and overwhelmingly white and male.
This is how the artificial intelligence systems currently reshaping society understand and portray the people at the top of the economic food chain. And no wonder: despite some progress in recent decades, diversity on corporate boards continues to lag well behind the general population in both the U.S. and Canada.
The issue was thrust into public consciousness in late 2023, when – following a series of dramatic twists worthy of HBO’s “Succession” – Silicon Valley unicorn OpenAI announced a new, three-person board of directors made up entirely of white men. (It has since added a woman, Microsoft executive Dee Templeton, as a non-voting “observer.”)
OpenAI has drawn widespread (and well-deserved) criticism for the move – not only for its disappointing failure to reflect societal progress, but for the risk that failure poses to societal welfare when it’s committed by such a powerful and influential company.
As a CNN headline recently put it, technologies like ChatGPT and DALL-E – both built by OpenAI – are “upending our everyday lives.” What happens when the development and oversight of such omnipresent tools are controlled by such a narrow slice of society? We already have some examples.
In the financial services sector, U.S. researchers have found that artificial intelligence and other algorithms can perpetuate racial equity issues like loan discrimination, biased credit allocation and biased assumptions used in setting insurance premiums.
In health care, a recent class-action lawsuit against insurance giant Cigna Corporation alleges that the company’s algorithms routinely denied hundreds of thousands of claims without a physician’s review. ProPublica reported that over a two-month period in 2022, Cigna rejected 300,000 claims using proprietary algorithms, with doctors spending an average of just 1.2 seconds reviewing each claim.
The wider societal threats posed by such biases in artificial intelligence also translate into investor risk, which is why SHARE has made health equity and racial equity in products and services two of its focus areas for shareholder engagement in 2024.
We plan to push U.S. health care firms to strengthen their oversight of how they use AI, so that vulnerable populations are not discriminated against. And we will continue engaging companies in the financial services sector, including the largest Canadian banks, on racial equity issues.
Another aspect of SHARE’s work is policy advocacy, where we continue to push for enhanced diversity disclosures for Canadian corporate boards. The Canadian Securities Administrators, a collaborative body that includes provincial and territorial regulators, is currently considering two proposals for amending disclosure rules to cover “diversity beyond women.”
The more robust proposal, supported by SHARE and others including the Ontario Securities Commission, would require companies to report on how many Indigenous, racialized, LGBTQ and disabled people they have on their boards and in executive positions. The alternative, supported by regulators in British Columbia, Alberta, Saskatchewan and the Northwest Territories, would only require companies to disclose their diversity objectives – not the actual numbers.
Which brings us back to our AI-assisted thought experiment. Taking it a step further, imagine the board you created oversees an actual company – let’s call it Uncanny Valley Bank. How might the look of that board change if UVB were required to disclose how many women, BIPOC, LGBTQ and disabled people were on it? And what might that mean for how it incorporates artificial intelligence into its business practices – including mortgage and credit card applications?
As the OpenAI story illustrates, these aren’t merely philosophical questions, but real-world considerations with practical consequences for investors – and society.