The investor case for human rights due diligence
It’s difficult to comprehend how quickly artificial intelligence (AI) has become an inescapable fact of life. In just a few years, while the world was still reeling from the impact of a global pandemic, a technology staggering in both reach and potential went from niche digital tool to everyday reality – and a mainstay of business operations for companies across industries.
Despite this rapid proliferation, there is very little to be certain of as the technology continues to evolve. While AI undoubtedly offers great opportunities, the public discourse around it – even from creators and leading experts – speaks to a limited understanding of its potential impacts on human rights.
A need for safeguards
According to the World Benchmarking Alliance, “AI may increase the risk of harms such as bias and discrimination; invasion of privacy; denial of individual rights; and non-transparent, unexplainable, unsafe, and unjust outcomes.” Notably, poor oversight and inadequate safeguards on the development and use of AI technology may expose companies – and, by extension, their shareholders – to reputational, legal, and regulatory risks.
Since 2021, SHARE has engaged with Google’s parent company Alphabet on AI-driven targeted advertising and the risks that such technology may pose to the Company, its users, and its shareholders. On June 7, investors at Alphabet will be voting on a shareholder proposal filed by SHARE on behalf of the United Church of Canada Pension Plan. The proposal directs the Company’s board to undertake and publish an independent third-party Human Rights Impact Assessment (HRIA) of its AI-driven targeted advertising policies and practices.
“As shareholders, we are increasingly concerned about the adverse impacts and risks that stem from Alphabet’s AI-driven targeted advertising practices. In our view, Alphabet’s existing policies and practices are simply insufficient to identify, assess, and mitigate these risks. A robust HRIA is the logical next step for the company to manage risks and protect long-term shareholder value.”
– Sarah Couturier-Tanoh, Director of Shareholder Advocacy, SHARE
What is an HRIA?
The first step in the human rights due diligence process, a Human Rights Impact Assessment is a comprehensive process for identifying both the actual and potential human rights risks stemming from a company’s business practices and operations. These risks may be regulatory, financial, or legal in nature, and they are of particular interest to long-term investors, whose portfolio values depend on consistent long-term returns that can be vulnerable to systemic risk.
The United Nations Guiding Principles on Business and Human Rights (UNGPs) explicitly state that companies must conduct human rights due diligence on their products and services, particularly where the scale and scope of the impacts are likely to be significant. Alphabet has publicly committed to supporting these principles for some time, so it is reasonable to expect the company to align its practices and actions with them.
Risk and responsibility
Much of Alphabet’s advertising business is built on algorithmic systems that rely on AI to determine what users see and to maximize ad reach. Alphabet itself recognizes the risk of these evolving systems; the Company’s 2023 annual report cites potential “risks related to harmful content, inaccuracies, discrimination, intellectual property infringement or misappropriation, defamation, data privacy, cybersecurity, and other issues.” Research has shown that such technology can negatively impact human rights, including by violating privacy and freedom of expression and by perpetuating systemic discrimination and inequality.
There is also growing concern among civil society experts, academics, and policymakers that targeted advertising can lead to the erosion of human rights. Regulatory trends in the United States and Europe have the potential to severely restrict or even ban targeted ads – largely due to concerns about underlying algorithms and risks to users. Canada is currently working to increase corporate accountability through the Artificial Intelligence and Data Act.
The rapidly changing regulatory landscape governing AI technology leaves Alphabet vulnerable to legal and regulatory challenges over its practices. As policymakers continue to raise guardrails around the use of AI, companies that take their bottom line seriously should ensure that the development, use, and deployment of such technology is aligned with international human rights standards.
The bottom line for shareholders
According to Alphabet’s annual report, online advertising accounted for more than 75 percent of the Company’s revenue in 2023. Its overall ad business – including Google Search, YouTube Ads, and Google Network – has grown significantly in recent years, reaching more than US$237 billion in 2023. Given how much of the Company’s overall value is tied to advertising, it’s reasonable for investors to expect Alphabet to manage the risks facing that part of its business – especially when the risks are this evident.
A growing number of shareholders believe a robust HRIA will allow Alphabet to effectively manage the risks associated with its targeted advertising technology. This can help guide management’s approach to respect the human rights of its users – and enable shareholders to make well-informed investment decisions.