The most dangerous AI

Political bias in artificial intelligence algorithms

Mike Hearn
Feb 4, 2017

Dystopia and artificial intelligence go hand in hand. Anyone who has watched movies knows that the moment you switch on a super smart AI, it’ll immediately start taking over the world.

In my last article I compared last year’s AI research with a TV show called Westworld, in which (surprise) robots rebel against their makers. This time I want to discuss AI research that poses risks not in some hypothetical sci-fi future but in today’s world. Research that poses ethical questions you sadly won’t find in the majority of science fiction.

It’s time the software industry started getting real about the risk of creating politically extreme AI.

The paper that triggered this line of thought is this one: “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” by Bolukbasi et al. It was presented at the NIPS 2016 research conference (along with lots of other papers).

The abstract runs something like this:

“We show that word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent …. we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female.”

“Word embeddings” are a relatively new technique in AI which learns the relationships between words, based only on processing large amounts of text. Embeddings have many surprising properties, like allowing analogies to be computed as if they were simple equations:

man - woman ≈ king - ?

You can calculate the answer as “queen”. Nobody programmed the software with the fact that queens are female and kings are male. It was just learned by reading.
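
For the curious, the analogy above can be reproduced with a few lines of code. Here is a minimal sketch using the gensim library and a pretrained word2vec model trained on Google News; the file name is illustrative and you need the vectors downloaded locally:

    # Requires `pip install gensim` plus a pretrained embedding file,
    # e.g. the word2vec vectors trained on Google News.
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True  # illustrative path
    )

    # man : woman :: king : ?   i.e. king - man + woman
    print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
    # -> roughly [('queen', 0.71)]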

Of course, we can ask what else such a program learns about the world. Here are some more gender inversions learned from a large collection of news articles:

queen-king            sister-brother                    mother-father
waitress-waiter       ovarian cancer-prostate cancer    convent-monastery

That looks OK. Ovarian cancer is indeed female and monasteries are indeed male. What else is in there?

sewing-carpentry        nurse-physician                nurse-surgeon
housewife-shopkeeper    interior designer-architect    softball-baseball
blond-burly             giggle-chuckle                 feminism-conservatism
vocalist-guitarist      petite-lanky                   diva-superstar
volleyball-football     cupcakes-pizzas                lovely-brilliant

It should be apparent that the above relationships are both:

  1. Realistic
  2. Politically incorrect

By realistic I mean they are what you’d expect a machine to learn from a big pile of text, if you’re really honest with yourself. By politically incorrect I mean that some people would either prefer these relationships to not be true, or would go further and simply deny the truth of them entirely.

It’s certainly the case that in a large enough collection of words you’ll mostly see “giggle” used to describe a female laugh and “chuckle” used for a male laugh. Why? Presumably because male and female laughs sound different, so having different words for them is useful. Even a Google search for [define:giggle] gives “she suppressed a giggle” as the canonical example.

What about nurse-physician? The ratio of female to male nurses in the USA is roughly 9:1. In some states like Iowa the ratio is more like 15:1. So it’s not surprising that machines pick up on the femininity of nursing: it’s hardly a subtle relationship.

From the paper:

The same system that solved the above reasonable analogies will offensively answer “man is to computer programmer as woman is to x” with x=homemaker

And this is where the problems start. The first set of examples above was labelled by the researchers as “gender appropriate”, but the larger second set was labelled as offensive “gender stereotypes”. The paper then goes on to propose an algorithm that removes these so-called biases from the model, whilst trying to preserve the “appropriate” learnings.
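
Roughly speaking, the paper’s “hard debiasing” approach estimates a gender direction in the embedding space and then strips that component out of words that are meant to be gender-neutral. Below is a simplified numpy sketch of that neutralising step; the real method uses PCA over several definitional word pairs and an additional “equalize” step, so treat this as an illustration rather than the authors’ implementation:

    import numpy as np

    def neutralize(word_vec, gender_direction):
        """Remove the component of word_vec that lies along the gender direction."""
        g = gender_direction / np.linalg.norm(gender_direction)
        debiased = word_vec - np.dot(word_vec, g) * g
        return debiased / np.linalg.norm(debiased)  # re-normalise to unit length

    # Illustrative usage with word vectors like those loaded earlier:
    # gender_direction = vectors["she"] - vectors["he"]
    # nurse_without_gender = neutralize(vectors["nurse"], gender_direction)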

The Black Mirror

Being offended by gender imbalances is something associated primarily with liberals (or just “the left”, as it’s called in my home country). But liberals aren’t the only people who can take offense at things machines learn about the world.

Some years ago I heard a story from someone who was working on question answering research. They were training their system using a web crawl. The goal was to build a machine that could answer factual questions about the world. To decide what was a fact and what wasn’t they used a simple rule of thumb: something that is stated as true very often and which is stated as false only rarely is probably a fact. Things where there seem to be lots of disagreement are not facts, and so shouldn’t appear in the QA database.
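
In code, that rule of thumb amounts to something like the sketch below. It is purely illustrative: the names and thresholds are mine, and the team’s actual criteria were never published.

    def looks_like_a_fact(times_asserted, times_denied,
                          min_assertions=100, max_denial_ratio=0.05):
        """Accept a statement as a 'fact' if it is asserted often and rarely denied.

        The thresholds are invented for illustration only.
        """
        if times_asserted < min_assertions:
            return False
        return times_denied / times_asserted <= max_denial_ratio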

Sadly they had a little problem: asking a question like “What is the capital of France?” would work fine, but asking “Who is George W Bush?” would yield an answer like “Former US president and war criminal”. The reason was of course that lots of web pages state that GWB is a war criminal, and very few explicitly state that he isn’t. Nonetheless, such an answer would have been seen as tremendously biased by a large chunk of the population, and causing a scandal wasn’t what the researchers had in mind.

This story fascinated me as it seemed to say something quite profound about humanity: a question answering machine that learns by reading the internet is such a fantastic thing to have, and yet it seemed we were willing to throw it away rather than have it disagree with unspoken ‘knowledge’. I wondered if people would accept such a machine if it presented itself as a child rather than an adult, as we’re used to the idea that children sometimes blurt out uncomfortable things as they learn.

Put another way, AI acts as a mirror. Sometimes we don’t like the face staring back at us.

The eternal sunshine of the spotless mind

The research paper above sets itself the goal of creating what can only be called a politically correct AI.

The justification is as follows:

One perspective on bias in word embeddings is that it merely reflects bias in society, and therefore one should attempt to debias society rather than word embeddings. However, by reducing the bias in today’s computer systems (or at least not amplifying the bias), which is increasingly reliant on word embeddings, in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society.

In other words, the authors feel that if AIs, chatbots, machine translators and so on are built to genuinely believe that nurses are as likely to be men as women, those systems will generate “unbiased” results that can then, in a small way, change society itself.

They are not alone. The Google Brain team has also started looking at generating “unbiased” results:

At the heart of our approach is the idea that individuals who qualify for a desirable outcome should have an equal chance of being correctly classified for this outcome. In our fictional loan example, it means the rate of ‘low risk’ predictions among people who actually pay back their loan should not depend on a sensitive attribute like race or gender.
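
Concretely, the criterion in that quote (what Google’s researchers call “equality of opportunity”) boils down to checking that the rate of “low risk” predictions among people who actually repaid is the same for every group. Here is a hedged sketch of such a check; the function and variable names are mine, not Google’s:

    import numpy as np

    def low_risk_rate_among_repayers(predicted_low_risk, actually_repaid, group):
        """Per-group rate of 'low risk' predictions among people who repaid.

        predicted_low_risk and actually_repaid are boolean numpy arrays;
        group is an array of group labels of the same length.
        """
        rates = {}
        for g in np.unique(group):
            mask = (group == g) & actually_repaid
            rates[g] = predicted_low_risk[mask].mean()
        return rates

    # Equality of opportunity holds (approximately) when the per-group rates
    # returned by low_risk_rate_among_repayers(preds, repaid, gender) are equal.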

Who decides which lucky groups qualify for “desirable outcomes”? Presumably, politically powerful groups who make a lot of noise and are able to get legislation passed in their favour. And what makes learnings from raw text “biased” and the changed version “debiased”, rather than the other way around? The lingo of identity politics.

In fairness, the Google team do not claim to be improving society with their particular algorithmic tweaks; just responding to a demand from the Obama administration.

Handing Trump a gift on a silver platter

It’s time for a politically incorrect opinion. This kind of research is both scary and dangerous.

Scary because regardless of how you may want the world to be, the learnings from bulk text feeds are as close as we can realistically get to how the world actually is. Researchers normally try to make AI smarter, but here they’re trying to make AI dumber. Do users really want a brainwashed, ideologically purified AI translating web pages, suggesting responses to emails or making trades on the stock market? Speaking for myself, I want these AIs to be icily, brutally realistic, because seeing things clearly is most likely to lead to correct decisions. Indeed, a key part of the value proposition of AI is precisely the different ways in which they think: if we didn’t want AIs to think differently to humans we could simply use humans to begin with. It’s not like they’re in short supply.

Dangerous, because if this kind of thing becomes widespread then it will harm not only the reputation of AI as a technology, but also the reputation of the entire software engineering industry. If we want the programming profession to become as disrespected as politicians and journalists then engaging in massive and subtle social engineering schemes is a guaranteed way to go about it. Put another way, sci-fi AI usually tries to change the world using weapons, and we are all highly alert to the dangers of armed autonomous robots. It rarely shows AI trying to change the world by manipulating the answers it gives to people’s questions.

Software should not fight culture wars

America’s 45th president likes to tweet. He does this because he sees it as a way to bypass the ‘dishonest media’. Regardless of what you think of Trump, few would deny that national US newspapers almost uniformly took explicitly political positions and were not supportive of him. Few would also deny that Twitter is far more politically neutral. There is no editor standing between Trump and his followers.

The tech industry is — for now — relatively trusted around the world. Unlike in the news business, there are no serious calls for conservative search engines or gay social networks, because people believe, rightly, that they can search for and follow whatever they like on these platforms and the companies involved won’t judge them, or use their position to manipulate their users’ opinions. That position of trust is what has allowed the industry to routinely produce products that get a billion users, and is why things like Facebook’s “emotional manipulation study” get such wide coverage. White or black, American or Russian, man or woman, conservative or liberal, people put their lives on the platforms that we build.

This is a wonderful position for the industry to be in and it is one that we often seem to take for granted. I’m sure the people building AI products at the big tech firms just implicitly assume that if they build something great then the world will beat a path to their door.

But this happy situation won’t last if people start to suspect that their oh-so-helpful gadgets are living in a politically correct dreamworld and subtly trying to make it real. They will especially not be happy to give AI technology to their children if they think the AI will teach their kids a worldview they find repellent.

Conclusion

The risk of creating deliberately politicised AI is not taken seriously and has rarely, if ever, been examined by science fiction. Quite the opposite: Hiro views the Librarian of Snow Crash like this:

“Hiro feels his face getting slightly warm, feels himself getting annoyed. He suspects that the Librarian may be pulling his leg, playing him for a fool. But he knows that the Librarian, however convincingly rendered he may be, is just a piece of software and cannot actually do such things.”

The assumption of brutally honest AI is pervasive in our literature. But software can play you for a fool, and distorting word embedding models is just one method through which AI can be subtly bent to serve the agenda of its creators. It would be a crying shame if, having finally achieved the dream of building the Librarian, we found that half the population rejected it as a purveyor of liberal propaganda.

Response

I asked one of the authors of the paper if he’d like a right of reply. Here is his response:

I want to comment on some points from my own perspective.

AI and political correctness:

My goal is not making AI politically correct. In an application where you are asking a hypothetical android questions about facts, I want the answers to reflect society exactly as it is rather than giving politically correct answers.

The problem starts when people apply these systems behind the scenes in their applications. To give an example, if you are a recruiter at an equal opportunity company, you would not reject the most qualified person for a position just because of their gender (at least under today’s laws in the US).

We aim to provide a tool to re-calibrate the algorithm when having such biases is not acceptable to the person/company using it or is illegal (such as here).

This view is very different from making all AI politically correct within today’s norms.

Scaring people away from technology

One should have a high level of mastery of an AI system before applying it to the real world. I think using a system without achieving that level of mastery should scare people more. A futuristic but concrete example: you would not want your robots to accidentally learn something that would lead them to kill people.

As an engineer, I see this paper as one step toward that understanding. The method itself is not constrained to gender, and just as with any scientific method, its effects will depend on the users.

Tolga Bolukbasi
