Amani Saeed

The two sides of Google Gemini

Updated: May 14

You’ve likely seen the headlines about Google’s AI model Gemini already. There’s ‘Google pauses Gemini’s image tool for people after anti-“woke” backlash’ and ‘Google CEO says Gemini AI diversity errors are “completely unacceptable”’. You may have also seen the tweets – from ‘It's embarrassingly hard to get Google Gemini to acknowledge that white people exist’ to responses like ‘Isn’t this racist against white people?’ and ‘So Google’s answer to racism is to be racist.’


If you’re wondering what exactly you’ve just read, the summary is that recently, when Gemini has been asked to generate images of people from history, from the Founding Fathers of America to Vikings and even Nazis, it’s predominantly responded with images of People of Colour. And some people are mad about it.


I won’t lie to you – my first reaction was thorough enjoyment at seeing a Sikh man and Black woman in 18th-century period clothing signing the Constitution. My second thought was, ‘Why is this so funny?’ And it’s funny (in an ironic way, given how much hand-wringing we do at The Unmistakables about the word ‘woke’) because the backlash just seems so disproportionate. And so predictable. 


We know from our Diversity and Confusion report that in 2022, 120,000 UK news stories were published about the ‘ED&I’ agenda, compared with 28,600 that used the word ‘woke’. It was perhaps the reaction to the news, however, that was more interesting. Although ‘woke’ appeared in roughly 77 per cent fewer UK news stories than ‘ED&I’, ‘woke’-led news saw far more social traction: where the average ‘ED&I’ story drew 1.7 social interactions, the average ‘woke’ story drew 10.8 – put another way, ‘ED&I’ stories attracted 84 per cent less engagement per story.


This thread makes a good point: because the internet is deeply obsessed with the buzzword ‘woke’ (as are the media outlets reporting on this issue), we’ve missed the fact that this is a strong example of an AI model being given a set of instructions and interpreting them in a way that people could not have predicted.
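For the technically curious, here’s a deliberately simplified sketch of how that can happen: a well-meaning blanket instruction (‘diversify depictions of people’) applied with no awareness of context. To be clear, this is invented, illustrative Python, not Gemini’s actual pipeline – every name and keyword list in it is hypothetical.

```python
import random

# Hypothetical descriptors a system might inject to diversify image outputs.
DIVERSITY_DESCRIPTORS = ["South Asian", "Black", "East Asian", "Indigenous"]

# Hypothetical trigger words for "this prompt is about people".
PEOPLE_KEYWORDS = ["person", "people", "man", "woman", "founding fathers", "soldier"]

def augment_prompt(user_prompt: str) -> str:
    """Naively rewrite any people-related prompt to include a diversity
    descriptor, with no check for historical or factual specificity."""
    if any(keyword in user_prompt.lower() for keyword in PEOPLE_KEYWORDS):
        descriptor = random.choice(DIVERSITY_DESCRIPTORS)
        return f"{user_prompt}, depicted as {descriptor} individuals"
    return user_prompt

# The rule fires even when the request is historically specific:
print(augment_prompt("The Founding Fathers signing the Constitution in 1787"))
# e.g. "The Founding Fathers signing the Constitution in 1787,
#       depicted as East Asian individuals"
```

The instruction itself is defensible; the unpredictable results come from applying it uniformly, with no carve-out for prompts where historical accuracy matters.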

This makes me think that the furore is another distraction from the deeper questions – questions that aren’t at all new, by the way, but perhaps get less attention because they don’t make for sexy headlines or reply-worthy tweets. Questions like ‘how can we ensure AI doesn’t further entrench biases, given that the datasets models are trained on are made by humans, who are hardwired to be biased?’ And ‘how are we ensuring the representation of marginalised people with the requisite expertise in the rooms where AI systems are being designed, knowing that White and Asian men tend to dominate Silicon Valley tech firms, particularly at the top?’


Google AI has enshrined some of these considerations in its principles, but perhaps this doesn’t go far enough. Part of the answer could involve getting more people from marginalised backgrounds into the room. And I’ll be the first to admit that the powers of representation are limited – just look at the UK Conservative Party cabinet, one of the most ethnically diverse in history, for an example of why representation is not the north star of anti-racist movements.


But at this point, I am quite literally begging – in lieu of hot takes on wokery, I would love more mainstream articles that help lay people like me make sense of something that is Quite A Big Deal and already beginning to change our world.


