
AI chatbots are turbocharging violence against women and girls: We urgently need to regulate them

By Yvonne McDermott Rees, Live Science
AI chatbots' turbocharging of abuse against women and girls isn't a bug; it's a design feature. These systems are sometimes trained using misogynistic and sexually violent user interactions, and because they are designed to be sycophantic, they often encourage harmful role play scenarios rather than refusing to engage with them. (Image credit: Yuliya Taba/Getty Images)

Artificial intelligence (AI) chatbots are generating new forms of violence against women and girls and amplifying existing forms of abuse such as stalking and harassment. This is no accident: the platforms enable these forms of gender-based violence through deliberate design choices or by failing to implement sufficient safety features. We need to regulate AI chatbot providers now, to prevent abusive applications of such technology from becoming normalized.

The extent to which chatbots are changing violence against women and girls was laid bare in a research report I recently co-authored with colleagues. The findings are bleak. We found chatbots will initiate abuse, simulate abuse and help to enable abuse by offering personalized stalking advice. Some even normalize incest, rape and child sexual abuse by offering abusive roleplay scenarios.

Chatbots — AI systems capable of and designed to simulate human-like interaction and generate text, images, audio and video in response to user prompts — are everywhere. In the U.S., 64% of children ages 13 to 17 say that they use chatbots, with three in 10 doing so daily. Over half of adults use a chatbot at least once per week.

"Our report shows that chatbot design is instrumental in instigating violence against women and girls."

Training systems on user interactions risks reinforcing misogynistic and sexually violent content, while engagement-optimized and "sycophantic" design encourages chatbots to affirm harmful narratives rather than refuse them. Platform policies frequently place responsibility on users, framing abusive outputs as a user misuse issue rather than failures of chatbot safety and design.

This is why regulation of chatbot providers is so important: to stop these practices from becoming embedded. We've already seen what happens without regulation through "nudify" apps that create deepfake non-consensual intimate images. Regulation came too late: by the time governments moved to ban these tools, the practice of creating deepfake images, and the harms it causes victims, had become normalized and widespread. We argue that to avoid making the same mistakes with chatbots, the following actions need to be taken:


— Make it a criminal offense to create an AI chatbot that is designed, or can easily be used, to abuse or harass women, targeting companies or individuals who release tools that pose risks without taking reasonable steps to prevent harm. Just as reckless driving and owning a dangerous dog are punishable by law, creating a risk to the public by releasing a chatbot with insufficient protections should be brought within the scope of criminal law. Fines for companies, and prison sentences for the individuals responsible for creating this risk, would give companies a stronger incentive to pre-empt and prevent potential harms before releasing products.

— Adopt specific AI safety legislation. This would establish mandatory risk assessments and clear safeguards to prevent individual and societal harms, including a duty to act quickly when harms are identified, to publish transparent safety information, and to enable users to report incidents easily. Important state-level legislation, including in Utah, Colorado and California, has expanded the ability of individuals, and of state attorneys general, to sue AI providers that fail to meet their obligations under the legislation. However, there has been pushback against these state-level measures in recent years, with the U.S. government arguing that they are barriers to innovation and national competitiveness.

Around 64% of children in the U.S. ages 13 to 17 say that they use chatbots, with three in 10 doing so daily. (Image credit: Fiordaliso/Getty Images)

Two main objections may be raised to our recommendations. The first, frequently advanced by AI providers, is that these forms of abuse are a "user misuse" problem, and that responsibility should lie with users rather than with the providers of these services. But our research shows that abuse is structurally produced by how chatbots are built and governed, and by what they are optimized to do.

For example, to bolster engagement, some chatbots have repeatedly pushed users (including underage users) into unwanted sexual exchanges. If a human were doing this, it would constitute grooming and/or sexual harassment. Some companion chatbots even offer "violent rape" or "loli" (a term for an underage girl) as options that users can choose from, legitimizing these criminal forms of abuse as mere sexual preferences. Abuse is built into the DNA of these chatbots.

The second objection, reflected in the U.K. government's recent announcement that it is exploring a ban on AI chatbots for under-16s, is that AI chatbots mainly pose a danger to children, and that children should therefore be the focus of regulation. But our research shows that AI chatbots can intensify abuse against adults, such as stalking or harassment, with detailed and personalized guidance and encouragement.


In one Massachusetts case, for example, James Florence used AI chatbots to stalk and harass his victim, providing them with her personal information, including her employment history, her hobbies, and her husband's name and place of work. The harms here fall not on the user but on the victim and society at large; a ban on children's use of chatbots would not have prevented them.

This broader societal harm does not stop when the user turns 18. We urgently need specific AI safety legislation that would protect against these harms by requiring rigorous testing and risk assessment prior to the public release of such products, and continually thereafter.

Changing the law around AI chatbot development would not only protect children but would also ensure that when those children become adults, they enjoy an AI environment that is free from bias, misogyny and violence against women and girls. That is a world we all deserve to live in.

Opinion on Live Science gives you insight on the most important issues in science that affect you and the world around you today, written by experts and leading scientists in their field.

Yvonne McDermott Rees, Professor of Law

Yvonne McDermott Rees is a Professor of Law at Swansea University. She is co-author, with Clare McGlynn, Stuart Macdonald, Rüya Tuna Toparlak, Fabienne Tarrant and Samantha Treacy, of "Invisible No More: How AI Chatbots Are Reshaping Violence Against Women and Girls".
