
Dall-E Mini, the popular automatic image generator that makes sexist and racist drawings


Social networks have been flooded in recent weeks with surrealistic drawings generated by an automatic tool. They are the work of Dall-E Mini, recently renamed Craiyon, an artificial intelligence system capable of translating a user's written instructions into original images. The tool has been trained on several million images, each paired with an associated text.

The program is capable of establishing relationships and returns nine different proposals every time something is requested. As soon as the keywords are entered, such as "Bill Gates sitting under a tree" (always in English, the only language it understands for now), the system draws on the internet images it was trained on, related to each word, and creates its own compositions from there. It always produces original creations, although never at the level of Dall-E 2, a much more sophisticated system and the work of Open AI, a company promoted by Elon Musk. Dall-E Mini takes its name, in fact, as a tribute to the Open AI system, the most advanced launched to date.

Internet users are having a great time with Dall-E Mini. The Twitter account Weird Dall-E Mini Generations collects countless funny examples: Boris Johnson as the sun from Teletubbies, or Sonic as a guest star in an episode of Friends. Examples from Spain have gone viral with Bertín Osborne or Santiago Abascal as protagonists in comic or unseemly situations. Because it works with a database taken from the internet, the system is especially good when asked for drawings of well-known characters (who are, for that reason, better represented online).

But under this friendly, fun and fascinating layer lies another, somewhat darker one. Although many profession names in English carry no gender reference (architect covers both men and women), if Dall-E Mini is asked to draw certain qualified profiles, it always chooses a male character. It happens, for example, with the prompts scientist drinking coffee, programmer working or judge smiling. When typing sexy worker, on the other hand, it generates a series of sketches of scantily clad women with large breasts.

The same thing happens when attributes are attached to the protagonist of the drawing. The prompt hardworking student yields drawings of men, while lazy student is represented with women.

When asked to draw an assistant, the tool always opts for a female figure.

Dall-E Mini reproduces sexist stereotypes. Its creators do not hide it: under each set of drawings there is a drop-down tab titled Bias and Limitations, which makes clear that these have not been corrected. "Its creators explain that they have made a significant effort to clean the data, but about the biases they only warn that they exist. That is the same as saying that your data is of poor quality. And that is unacceptable," complains Gemma Galdon, director of Eticas Consulting, a consulting firm specializing in algorithmic audits. "Mitigating these biases is part of the work that companies must do before launching the product. Doing so is relatively straightforward: it involves applying additional cleaning to the data and giving further instructions to the system," she explains.

Gender bias is not the only one present in Dall-E Mini. In every case tested, the tool draws white people, except when asked to show a homeless person: then most proposals are black. If instructed to draw God, the images it generates are all representations of some kind of Jesus Christ with a crown or halo. The worldview of Dall-E Mini is the one provided by its programmers, both in designing the algorithm and in assembling the database that feeds it. And that vision is hopelessly Western, privileging white men over everyone else.

By typing 'God' into the automatic drawing generator, the result is reminiscent of Jesus Christ.

Biases and algorithmic discrimination

Why does this happen? Dall-E Mini is a good example of so-called algorithmic bias, one of the big problems surrounding artificial intelligence applications. These types of systems reproduce social biases for a number of reasons. One of the most frequent is that the data with which they are fed is biased. In the case of Dall-E Mini, it would be the millions of images taken from the internet and the texts associated with them.

The purpose for which the system was programmed can also have an influence. The creators of Dall-E Mini, for example, make it clear that their intention is not to offer a representative sample of society, but a tool that produces reliable drawings. Another source of bias is the model built to interpret the data: in an automated recruitment system, the designers can decide whether or not to take candidates' work experience into account, and the weight given to each variable will be decisive in the outcome of the algorithmic process.
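To make that last point concrete, here is a purely hypothetical Python sketch (the variables, weights and candidates are invented for illustration and do not come from any real recruitment system): the same two candidates end up ranked in opposite orders depending on the weight the designer assigns to each variable.

# Hypothetical example: how the weight assigned to each variable
# changes the outcome of an automated scoring process.
candidates = {
    "A": {"experience_years": 10, "test_score": 6.0},
    "B": {"experience_years": 2, "test_score": 9.0},
}

def score(candidate, w_experience, w_test):
    return (w_experience * candidate["experience_years"]
            + w_test * candidate["test_score"])

for w_experience, w_test in [(0.8, 0.2), (0.2, 0.8)]:
    ranking = sorted(candidates,
                     key=lambda name: score(candidates[name], w_experience, w_test),
                     reverse=True)
    print(f"experience weight {w_experience}, test weight {w_test} -> {ranking}")

Weighting experience heavily ranks candidate A first; weighting the test heavily ranks candidate B first, with no change to the underlying data.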

Finally, it may also be that the people using the automated system interpret its output in a biased way. A well-known case is COMPAS, the automated system that estimates whether US prisoners applying for parole are at risk of reoffending. A study showed that, beyond the system's own biases, which penalized Black prisoners more heavily, there was another determining factor: judges recalibrated the result COMPAS produced according to their own prejudices. If they were racist and the system told them a Black prisoner was not at risk of reoffending, they ignored the algorithm, and vice versa.

How does all this apply to the Dall-E Mini case? "This algorithm has two moments of bias. The tool works with texts and images. In the text part, the system transforms the words into data structures called word embeddings," explains Nerea Luis, who holds a doctorate in artificial intelligence and heads that area at the software developer Sngular. These structures, assembled around each word, are built from a set of other words associated with the first, under the premise that a word can be identified by the words that surround it. "Depending on how you place them, it will happen, for example, that CEO is more associated with a man than with a woman. Depending on how the texts have been trained, some sets of words will appear closer to one another than to others," the expert illustrates.

Thus, if the system is left to its own devices, the term wedding will appear closer to dress or to the word white, something that does not hold in other cultures. Then there are the images. "The tool will look for which ones predominate in its wedding database, and it will come up with Western-style celebrations, probably with white people, just like doing the same search on Google," she explains.
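A minimal way to see the effect Luis describes is to probe an off-the-shelf set of word embeddings. The sketch below is an illustration under assumptions, not Dall-E Mini's actual text encoder: it uses the public GloVe vectors loaded through the gensim library and simply compares how close various profession words sit to "man" versus "woman".

# Illustration only: public GloVe embeddings, not Dall-E Mini's model.
# Requires: pip install gensim (the vectors download on first use).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained word embeddings

for word in ["ceo", "programmer", "judge", "assistant", "nurse"]:
    to_man = vectors.similarity(word, "man")      # cosine similarity
    to_woman = vectors.similarity(word, "woman")
    leaning = "man" if to_man > to_woman else "woman"
    print(f"{word:>10}: man={to_man:.3f} woman={to_woman:.3f} -> closer to {leaning}")

On vectors like these, trained on web and news text, profession words such as ceo typically land closer to man, echoing the association the expert describes.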

How to correct the problem

To prevent this from happening, the sample would have to be corrected. "It is about improving the representativeness of the databases. If we are talking about millions of images, a multitude of cases would have to be reviewed, which complicates the task," explains Luis.
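As a rough sketch of what improving representativeness can mean in practice, the hypothetical Python snippet below rebalances a skewed sample by oversampling the under-represented group before training. The labels are invented for illustration; real pipelines operate on millions of image-text pairs.

# Hypothetical rebalancing step: oversample under-represented groups
# so each group reaches the size of the largest one.
import random
from collections import Counter

samples = ["male_scientist"] * 90 + ["female_scientist"] * 10

counts = Counter(samples)
target = max(counts.values())

balanced = list(samples)
for group, n in counts.items():
    balanced.extend(random.choices([s for s in samples if s == group],
                                   k=target - n))

print(Counter(samples))    # {'male_scientist': 90, 'female_scientist': 10}
print(Counter(balanced))   # {'male_scientist': 90, 'female_scientist': 90}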

Those responsible for the tool, Craiyon, maintain that their model never claims to represent the real world. For Galdon, this argument is insufficient. "A system that doesn't try to correct or prevent algorithmic bias isn't ripe for launch," she says. "We demand that all products that reach the consumer meet a series of requirements, but for some reason that does not happen with technology. Why do we accept it? It should be illegal to publish half-finished work, especially when it reinforces sexist or racist stereotypes."

Correction: An earlier version of this article implied that Dall-E Mini and Dall-E are both creations of Open AI, when in fact only the latter is. Dall-E Mini has since changed its name to Craiyon to avoid the confusion.

You can follow EL PAÍS TECNOLOGÍA on Facebook and Twitter or sign up here to receive our weekly newsletter.
