Rise of AI could see "a lot of people living as second-class citizens" warns Sony's Alice Xiang

Killer robots and god-like computers are far from the most pressing threats posed by artificial intelligence, AI ethicist Alice Xiang tells Dezeen in this interview.

Instead she argues the industry must first focus on tackling the more immediate, insidious harms AI is already causing by entrenching societal biases and inequalities.

"As an AI ethicist, I often get the question, 'is your job to prevent killer robots?'," said Xiang, a leading researcher and global head of AI ethics at technology company Sony.

"And I'm personally less concerned about that scenario where it's very obvious what the harm is," she added. "I'm more concerned about some of the invisible harms that tend to fly under the radar."

People might have "no recourse" against biased AI

There have already been several highly publicised cases in which algorithms were found to be biased against marginalised groups such as women and people of colour – often down to the skewed data on which they are trained.

And as AI becomes increasingly ubiquitous, especially in contexts as high-stakes as healthcare, employment and law enforcement, Xiang warns these biases could compound upon themselves to create an increasingly unequal society.

"It's very possible that over time, a lot of people could be living as second-class citizens in a society of AI, where systematically models might not work well for them or might be biased against them," she said.

"And they might have no idea or no recourse to actually do anything about this, especially if the AI models work well for people in power."



Designing out these existing algorithmic biases, Xiang argues, should take precedence over more far-off threats such as human-competitive algorithms, which several industry open letters and op-eds published in recent months have warned might one day "outnumber, outsmart, obsolete and replace us".

"Compared to a lot of existential threats, there are a lot of decisions that would have to be made in order for AI to destroy humanity," Xiang said.

"I think we aren't actually at the point where we need to be focusing primarily on the very long-term speculative harms," she added.

"The mechanism by which we prevent those is by starting in the here and now, in terms of identifying these very concrete, currently existing harms and mitigating or preventing them."

Algorithmic bias "still not systematically fixed"

The issue of algorithmic bias first entered the public discourse in the second half of the last decade, most famously when Google's Photos app was found to be mislabelling photos of Black people as gorillas in 2015.

Since then, similar issues have surfaced across industries: Amazon was forced to ditch its recruitment algorithm because it systematically favoured male candidates over female ones, while Nature reported that millions of Black people had suffered from the "rampant racism" of healthcare algorithms used in US hospitals.

Similarly, an algorithm used by US courts to predict defendants' likelihood of re-offending – and therefore help determine their sentencing – was found to be biased against Black people, mislabelling them as "high risk" at nearly twice the rate of white defendants.
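To illustrate the kind of disparity that finding describes, here is a minimal sketch in Python of a group-fairness check: it compares how often each group is wrongly flagged as "high risk" (the false-positive rate). The records are invented for illustration, not drawn from any real risk-assessment tool.

```python
from collections import defaultdict

# Each record: (group, flagged_high_risk, reoffended).
# Invented data for illustration only.
records = [
    ("group_a", True, False), ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_b", True, False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", True, True),
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
non_reoffenders = defaultdict(int)  # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(non_reoffenders):
    rate = false_positives[group] / non_reoffenders[group]
    print(f"{group}: false-positive rate {rate:.0%}")
# group_a: false-positive rate 67%
# group_b: false-positive rate 33%
```

A two-to-one gap like the one in this toy output is the shape of the disparity reported for the courts' algorithm: one group bears the cost of the model's mistakes far more often than the other.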

AI applications such as ChatGPT present new ethical concerns

In the hopes of tackling these issues, major industry players including Amazon, Google, Facebook and Microsoft banded together to form the non-profit Partnership on AI in 2016, with the aim of setting out ethical best practices for the development of artificial intelligence.

But today, eight years after the Google Photos incident, Xiang says the industry has still not found a systematic fix. The only solution offered by Google was to stop anything from being labelled as a gorilla – including an actual gorilla.

"It's still not been systematically fixed," said Xiang, who served on the leadership team of the Partnership on AI in 2020.

"There's been a lot of progress in terms of our understanding of these issues and research into it, but there aren't any silver bullets yet," she explained. "Bias is going to continue to be a problem that we're going to have to chip away at."

"I think for certain areas, the solution is: there are things that AI maybe should not be delving into at the moment, given its current abilities."

Image generators are not exempt 

The rise of generative AIs, including chatbots like ChatGPT and text-to-image generators such as DALL-E and Midjourney, is also opening the door to a new kind of representational bias amongst algorithms.

That's because they tend to reproduce any racist and sexist stereotypes found in the texts and images used to train them, studies suggest.

"For example, if you put a simple prompt like 'firefighter' into an image generator, are all the images that are generated of men and maybe particularly Caucasian men?" Xiang said.

"Pretty much all of them suffer from these kinds of problems," she added. "And as more people are using image generators for inspiration in creative fields, if they aren't aware of these biases and actively combating them, then they, in turn, might be influenced by them."

This image of a firefighter was generated by DALL-E
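Xiang's firefighter test can be approximated with a simple audit loop. The sketch below is illustrative only: generate_and_label is a made-up stand-in that simulates a skewed model, where a real audit would call an actual image generator and use a carefully validated labelling step.

```python
import random
from collections import Counter

def generate_and_label(prompt: str) -> str:
    """Stand-in for 'generate an image, then label who it depicts'.
    Simulates a skewed model for demonstration purposes only."""
    return random.choices(["man", "woman"], weights=[0.9, 0.1])[0]

def audit_prompt(prompt: str, n_samples: int = 200) -> Counter:
    """Tally apparent gender across many generations of one neutral prompt."""
    return Counter(generate_and_label(prompt) for _ in range(n_samples))

print(audit_prompt("firefighter"))  # e.g. Counter({'man': 181, 'woman': 19})
```

A heavy skew on a neutral prompt is the red flag Xiang describes; a fuller audit would vary the prompts and examine other attributes such as skin tone and age.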

Efforts to prevent these kinds of harms are still in "very early stages", according to Xiang, with companies largely left to govern themselves with varying levels of rigour in the absence of comprehensive government oversight.

That means some are leaving decisions up to individual developers, while others have entire AI ethics teams such as the one Xiang heads at Sony, where she's responsible for ensuring that any new technologies use AI in an ethical way.

"It's very much a mixed bag," Xiang said. "Oftentimes, these really complicated legal or ethical problems are being solved at the level of AI developers right now."

"Most of these folks have a computer science or electrical engineering background," Xiang continued. "This is not really what they signed up to do."

AI companies should learn from civil engineers

To prevent biased algorithms from wreaking havoc on society, Xiang says there must be a more concerted effort from companies to place AI ethics at the core of their development process from the very start.

Crucially, this also involves investing more time and resources into the chronically underfunded, undervalued field of data collection to ensure that the datasets used to train AIs are unbiased and respect people's privacy and copyrights.
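As a rough illustration of the sort of composition check that implies, the sketch below tallies the demographic make-up of a hypothetical training set before any model is trained; the labels and counts are invented.

```python
from collections import Counter

# Hypothetical per-sample demographic labels for a training set; in practice
# such labels should come from documented, consent-aware data collection.
sample_labels = ["lighter-skinned"] * 800 + ["darker-skinned"] * 200

counts = Counter(sample_labels)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} samples ({n / total:.0%})")
# lighter-skinned: 800 samples (80%)
# darker-skinned: 200 samples (20%)
```

An imbalance like this one predicts worse performance for the underrepresented group, which is why Xiang argues the rebalancing work belongs at the data-collection stage, before training begins.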

"My personal hope is that this is a growing-up point for AI," Xiang said. "If we compare AI to other engineering fields like civil engineering, there's a much longer tradition of safety and people thinking very carefully about those aspects before they actually build something in the real world."

"Whereas in AI, there's long been more of a fast-and-loose culture, just based on the newness of the technology and given that it's also often not in the embodied world, so people think about the harms as being less severe."



Regulations such as the European Union's AI Act, set to be ratified by the end of this year, will play a crucial role in bringing about these changes, according to Xiang, much like the General Data Protection Regulation (GDPR) did for data privacy.

But regulation can only do so much, she warns, due to the rapid speed at which AI is developing and a lack of international agreement on what constitutes "ethical" AI.

"There's not really one singular conception of ethics," she said. "Different cultures, different kinds of people will be aware of different possible issues."

"And insofar as AI is being deployed on a global scale but only developed by people in a few countries representing a few demographics, it's unlikely to capture all of the possible harms that will actually happen in the deployment."

The portrait is courtesy of Sony.

Illustration by Selina Yau

AItopia

This article is part of Dezeen's AItopia series, which explores the impact of artificial intelligence (AI) on design, architecture and humanity, both now and in the future.

