Meta announced Wednesday the creation of an AI advisory board composed entirely of white men. What else would we expect? Women and people of color have been speaking out for decades about being ignored and excluded from the world of artificial intelligence, despite being qualified and playing a key role in the evolution of this space.
Meta did not immediately respond to our request for comment on the diversity of the advisory board.
This new advisory board differs from Meta’s actual board of directors and its Oversight Board, which are more diverse in terms of gender and racial representation. Shareholders did not elect this AI board, which also has no fiduciary duty. Meta told Bloomberg that the advisory board will offer “insights and recommendations on technological advancements, innovation and strategic growth opportunities.” It would meet “periodically.”
It is telling that the AI advisory board is made up entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. While one could argue that current and former Stripe, Shopify, and Microsoft executives are well-placed to oversee Meta’s AI product roadmap given the sheer number of products they’ve brought to market among them, it’s been proven time and again that AI isn’t like other products. It’s a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups.
In a recent interview with TechCrunch, Sarah Myers West, managing director at the AI Now Institute, a nonprofit that studies the societal implications of artificial intelligence, said it’s important to “critically examine” the institutions producing AI to “make sure the needs of the public [are] served.”

“This is error-prone technology, and we know from independent research that those errors are not distributed equally, disproportionately harming communities that have long borne the brunt of discrimination,” she said. “We should be setting a much, much higher bar.”
Women are far more likely than men to experience the dark side of AI. Sensity AI found in 2019 that 96% of AI deepfake videos online were nonconsensual, sexually explicit videos. Generative AI has become far more widespread since then, and women are still the targets of this abusive behavior.
In a high-profile incident in January, nonconsensual, pornographic deepfakes of Taylor Swift went viral on X, with one of the most widely shared posts receiving hundreds of thousands of likes and 45 million views. Social platforms like X have historically failed to protect women from this kind of abuse — but since Taylor Swift is one of the most powerful women in the world, X intervened by banning search terms like “taylor swift ai” and “taylor swift deepfake.”
But if this happens to you and you’re not a global pop sensation, you might be out of luck. There are numerous reports of middle school and high school students making explicit deepfakes of their classmates. While this technology has been around for a while, it’s never been easier to access — you don’t need to be technologically savvy to download apps that are specifically advertised to “undress” photos of women or swap their faces onto pornography. In fact, according to reporting by NBC’s Kat Tenbarge, Facebook and Instagram hosted ads for an app called Perky AI, which described itself as a tool for making explicit images.
Two of the ads, which reportedly escaped Meta’s detection until Tenbarge alerted the company to the issue, showed photos of celebrities Sabrina Carpenter and Jenna Ortega with their bodies blurred out, encouraging customers to prompt the app to remove their clothes. The ads used an image of Ortega from when she was just 16 years old.
The mistake of allowing Perky AI to advertise was not an isolated incident. Meta’s Oversight Board recently opened investigations into the company’s failure to handle reports of sexually explicit, AI-generated content.
It is imperative that the voices of women and people of color be included in AI product innovation. For so long, such marginalized groups have been excluded from the development of world-changing technologies and research, and the results have been disastrous.
An easy example is the fact that, until the 1970s, women were excluded from clinical trials, meaning entire fields of research were developed without understanding how they would affect women. Black people, in particular, see the effects of technology built without them in mind — for example, self-driving cars are more likely to hit them because their sensors might have a harder time detecting dark skin, according to a 2019 study by the Georgia Institute of Technology.
Algorithms trained on already discriminatory data simply replicate the same biases that humans have trained them to adopt. Broadly, we already see AI systems perpetuating and amplifying racial discrimination in employment, housing and criminal justice. Voice assistants struggle to understand diverse accents and often flag the work of non-native English speakers as AI-generated since, as Axios noted, English is AI’s native tongue. Facial recognition systems flag Black people as potential matches for criminal suspects more often than white people.
Current AI development embeds the same existing power structures of class, race, gender and Eurocentrism that we see elsewhere, and it seems not enough leaders are addressing it. Instead, they are reinforcing it. Investors, founders and tech leaders are so focused on moving fast and breaking things that they don’t seem to grasp that generative AI — the hot AI technology of the moment — could make the problems worse, not better. According to a report from McKinsey, AI could automate roughly half of all jobs that don’t require a four-year degree and pay more than $42,000 annually, jobs in which minority workers are overrepresented.
There is cause for concern over how a team of all-white men at one of the world’s most prominent tech companies, engaged in this race to save the world with AI, could ever advise on products for all people when only one narrow demographic is represented. It will take a massive effort to build technology that everyone — truly everyone — can use. In fact, the layers needed to build safe and inclusive AI, from research to cross-sector societal understanding, are so intricate that it’s almost obvious this advisory board won’t help Meta get it right. At least where Meta falls short, another startup could rise.