Photo: Markus Spikse, Unsplash

A human-centred architecture for AI

29 October 2024

Update on our collaborative work on human rights within the World Benchmarking Alliance AI initiative

For several years, Storebrand has worked on digital rights as one of its focus areas, including issues such as the ethics of artificial intelligence (AI) technologies. Through this experience, we have found that it is often most productive for investors to engage companies on these issues through collective initiatives. This reflects both the breadth and complexity of the issues and the scale and influence of the companies that must be engaged in order to have a reasonable chance of making an impact.

New phase began in 2024

Since September 2022, members of WBA’s Ethical AI Collective Impact Coalition have been engaging companies assessed by WBA’s Digital Inclusion Benchmark on ethical AI, focussing initially on companies that did not yet have publicly available ethical AI principles.

In February 2024, the second phase of the Collective Impact Coalition for Ethical AI was launched, supported by investors such as Storebrand Asset Management. In total, the investors involved represent over US$8.5 trillion in assets under management.

In the current phase, we in the WBA AI initiative are encouraging companies to implement policies and mechanisms to ensure the ethical development and application of AI, guided by respect for human rights and the principle of leaving no one behind.

Progress in latest assessment

The latest assessment by the WBA shows that of the 200 largest digital companies, 71 (just over a third) now have AI principles in place, up from 52 a year ago. More than half of the established principles include human rights considerations, another positive finding.

Companies have made progress on several dimensions. The development of comprehensive ethical AI documents has seen notable growth: 66 companies now have AI principles that they developed themselves (as opposed to endorsing third-party principles), and 60 of those have released standalone documents outlining their commitments.

That said, progress in this area has been slower than expected and needed. While the number of companies with ethical AI principles has grown, the share of those that define and include explicit human rights considerations remains relatively small, and many companies have not integrated these considerations into their AI frameworks.

Of the 71 companies that now have ethical AI principles, only 29 publicly disclose how they implement them. Other findings from the assessment include steady but slow growth in the number of companies with relevant internal governance structures, such as ethical AI committees, which would help convert conceptual commitments into tangible action in operations.

Most concerning, only 16 companies actually conducted human rights impact assessments (HRIAs) in 2024. This points to substantial risks, given that new regulations such as the EU Artificial Intelligence Act require Fundamental Rights Impact Assessments (FRIAs) for high-risk AI systems from 2026 onward.

What's next?

While these commitments are a positive step, much remains to be done. The next challenge lies in tracking how companies implement these principles. Many companies’ reporting on their AI operations lacks transparency, making it difficult to assess whether they are truly living up to their ethical AI commitments.

Through the Collective Impact Coalition for Ethical AI, we will continue to push companies to move beyond symbolic statements and show real progress in operationalizing their AI principles. One major obstacle in this regard is the lack of comprehensive, clear guidelines for conducting HRIAs in the context of AI systems. Developing such guidelines is therefore an urgent next step.

These steps, along with national-level legislation, are needed to ensure that ethical AI becomes a reality, and we will be working towards getting them in place.

