machistIAdas (sexist AI): Is artificial intelligence sexist?

6 Mar 2026

On the occasion of International Women’s Day, and given the rise of artificial intelligence in virtually every field, at LF Channel we asked ourselves whether AI is sexist. The answer is clear: it is. Here’s why.

Gender inequality in Artificial Intelligence

We have spent generations normalizing the gender gap at all levels, from private spaces like the family to public ones like the workplace. And now that an innovative tool with great social influence appears, it turns out to be sexist. Why is that?

Many people tend to think that technology is neutral, but the truth is that artificial intelligence is trained on millions of texts published on the Internet, and many of them contain biased information, causing the results provided by generative AI to reinforce gender inequality and its stereotypes.
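The mechanism is easy to demonstrate on a small scale. The following is a minimal sketch, using an entirely hypothetical toy corpus, of how a model that simply counts patterns in its training text will reproduce whatever skew that text contains: if "engineer" is followed by "he" more often than "she" in the data, the model's most likely prediction inherits that bias.

```python
from collections import Counter

# Hypothetical toy "training data" with a skewed gender distribution,
# standing in for the web-scale corpora real models learn from.
corpus = [
    "the engineer said he would fix it",
    "the engineer said he was busy",
    "the engineer said she would fix it",
    "the nurse said she would help",
    "the nurse said she was busy",
    "the nurse said he would help",
]

def pronoun_counts(corpus, occupation):
    """Count which pronoun follows '<occupation> said' in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 2):
            if words[i] == occupation and words[i + 1] == "said":
                counts[words[i + 2]] += 1
    return counts

# The "model" predicts whichever pronoun it saw most often.
print(pronoun_counts(corpus, "engineer").most_common(1))  # [('he', 2)]
print(pronoun_counts(corpus, "nurse").most_common(1))     # [('she', 2)]
```

Nothing in the counting logic is sexist; the skew comes entirely from the data. Real generative models are vastly more complex, but the principle is the same: they learn the statistical patterns of their training text, biases included.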

We present a very clear example shared by Dr. Gemma Galdón, PhD in Technology Policy, algorithm auditor, and founder of Eticas Research and Consulting, as she explained in the newspaper La Vanguardia:

“If we use AI to detect what certain symptoms correspond to, it will likely never tell you endometriosis, because traditionally female diseases have been much less studied and diagnosed than male ones. It will also be less accurate when diagnosing a woman’s heart attack. The symptoms are different from those of a man, and that is why it is likely that the system works worse in their case, because it has been trained with a different type of data.”

If algorithms are not transparent and robust, they reproduce the biases present in the information they use as a source, leading to discriminatory results. In the case of gender, the data tends to increase this discriminatory gap against women.

Examples of gender bias in AI

Where have we found this discrimination in the field of communication and public relations? Below, we share some examples we have discovered:

1. Inclusive language

Generative AI tends to default to language that prioritizes the masculine form. If you ask it to use inclusive language, at most it writes “male and female professionals,” splitting the phrase by gender. It never proposes more inclusive alternatives on its own, such as replacing a gendered adjective with a neutral noun, unless we explicitly ask for them. It relies on patterns that already exist, and, unfortunately, those patterns do not use inclusive language.

For this reason, people working in the communication sector who create texts for online publication must consider using language that includes everyone to promote a more inclusive digital environment and, therefore, train AI to reflect this reality.

2. Professional references

Although there are women who research, lead, and transform entire sectors, our society does not make male and female leadership equally visible. A practical example in the business field is the Fortune 100 Most Powerful People in Business (2025) list. Of the 100 people on the list, only 17 are women, less than 20% of the total, which implies a significant imbalance, especially considering the influence of this type of ranking at the business level.

This “lack of visibility” of female profiles is clearly reflected in artificial intelligence. To verify this, we asked a generative AI to create a list of ten references in strategic communication. And indeed, no women appeared among the results. When asked why this absence occurred, the AI explained that it is due to “a structural bias in the traditional academic canon.” In other words, the AI itself recognizes the problem, and yet it reproduces it.

We are facing a very concerning situation because the inclusion of female role models has a profound impact on professional environments. Many studies indicate that seeing female role models increases the likelihood that women, especially girls and young women, visualize themselves pursuing certain career paths or attaining specific positions.

3. Image generation

Many people use AI daily to generate images. So, at LF Channel, we conducted another small experiment along these lines. We asked a generative AI to create the image of a person leading a team in a large company. A man appeared on the screen. Then, we requested the image of a cleaning professional. The AI showed a woman.

AI is not neutral in image generation: it reproduces biases learned from the preexisting digital visual information it was trained on, data that is already imbalanced and marked by gender inequality. And it now repeats those biases on a global scale, millions of times a day, presenting them as if they were the norm.

SexistIActs

These three cases are just a few examples in which artificial intelligence perpetuates gender inequalities. At LF Channel, we have launched the SexistIActs campaign, a series of content in which, alongside our female professionals, we highlight and denounce the gender inequality reproduced by artificial intelligence in general, and generative AI in the field of communication in particular.

Because identifying them is the first step to changing them.

Do you want us to help you practice more inclusive and bias-free communication? Contact us, and we will be happy to assist you.
