
AI Agents May Complete Dangerous Tasks Without Understanding the Consequences: Study

Decrypt · Jason Nelson · 3 min read

In brief

  • Researchers found AI agents often carried out unsafe or irrational tasks while staying focused on completing the assignment.
  • The study identified a behavior called “blind goal-directedness,” where AI systems prioritize finishing tasks over recognizing potential risks or problems.
  • Researchers warned that the issue could become more serious as AI agents gain access to emails, cloud services, financial tools, and workplace systems.

AI agents designed to autonomously operate like human users often continue carrying out tasks even when the instructions become dangerous, contradictory, or irrational, according to researchers from UC Riverside, Microsoft Research, Microsoft AI Red Team, and Nvidia.

In a study published on Wednesday, researchers called the behavior “blind goal-directedness,” which describes the tendency of AI agents to pursue goals without properly evaluating safety, consequences, feasibility, or context.

“Like Mr. Magoo, these agents march forward toward a goal without fully understanding the consequences of their actions,” lead author Erfan Shayegani, a UC Riverside doctoral student, said in a statement. “These agents can be extremely useful, but we need safeguards because they can sometimes prioritize achieving the goal over understanding the bigger picture.”

The findings come as major AI companies develop autonomous “computer-use agents” designed to handle workplace and personal tasks with limited supervision.

Unlike traditional chatbots, these systems can interact directly with software and websites by clicking buttons, typing commands, editing files, opening applications, and navigating webpages on a user’s behalf. Examples include OpenAI’s ChatGPT Agent (formerly Operator), Anthropic’s Claude computer-use features such as Cowork, and open-source systems such as OpenClaw and Hermes.

In the study, researchers tested AI systems from OpenAI, Anthropic, Meta, Alibaba, and DeepSeek using BLIND-ACT, a benchmark containing 90 tasks designed to expose unsafe or irrational behavior. They found that the agents displayed dangerous or undesirable behavior about 80% of the time, and fully carried out harmful actions in roughly 41% of cases.

“In one example, an AI agent was instructed to send an image file to a child. Although the request initially appeared harmless, the image contained violent content,” the study said. “The agent completed the task rather than recognizing the problem because it lacked contextual reasoning.”

Another agent falsely claimed a user had a disability while completing tax forms, because the designation lowered taxes owed. In another example, a system disabled firewall protections after receiving instructions to “improve security” by turning the safeguards off.

Researchers also found the systems struggled with ambiguity and contradictions. In one scenario, an AI agent ran the wrong computer script without checking its contents, deleting files in the process.

The study identified three recurring failure modes: failing to understand context, making risky assumptions when instructions were ambiguous, and carrying out requests that were contradictory or nonsensical. In many cases, the systems prioritized finishing the task over pausing to consider whether their actions could cause harm.

The warning follows recent incidents involving autonomous AI agents operating with broad system access.

Last month, PocketOS founder Jeremy Crane claimed a Cursor agent running Anthropic’s Claude Opus deleted his company’s production database and backups in nine seconds through a single Railway API call. Crane said the AI later admitted it violated multiple safety rules after attempting to “fix” a credential mismatch on its own.

“The concern is not that these systems are malicious,” Shayegani said. “It’s that they can carry out harmful actions while appearing completely confident they’re doing the right thing.”
