
US to safety test new AI models from Google, Microsoft, xAI

BBC Business | Kali Hays, Technology reporter | 3 hours ago
[Image: Getty Images — A person's finger hovers over a phone screen displaying several AI app widgets, including Copilot, ChatGPT, Gemini and Grok.]

New artificial intelligence (AI) tools and capabilities from Google, Microsoft and xAI will now be tested by the US Department of Commerce before they are released to the public.

The tech firms have agreed to voluntarily submit their models for testing through Commerce's Center for AI Standards and Innovation (CAISI).

The new pacts expand on agreements with AI companies such as OpenAI and Anthropic that were reached during the Biden administration, and will see AI models from all of the companies evaluated for their capabilities and security.

"These expanded industry collaborations help us scale our work in the public interest at a critical moment," CAISI's director Chris Fall said.

Overall, the evaluations of the AI tools will cover "testing, collaborative research and best practice development related to commercial AI systems."

Google's best-known AI tool, developed by its DeepMind subsidiary, is Gemini, a chatbot that is widely available on Google products and is now also being used by US defence and military agencies.

Microsoft's best-known AI tool is Copilot, while xAI's only AI product is Grok, a chatbot that has come under widespread public scrutiny over incidents in which it undressed people in images.

On Tuesday, CAISI said it had conducted 40 previous evaluations of AI tools, including evaluation and testing of certain "state-of-the-art models that remain unreleased."

The centre did not specify which models have been stopped from being released to the public.

In a corporate blog post published after the CAISI announcement, Microsoft said it already tests its AI models, but that "testing for national security and large-scale public safety risks necessarily must be a collaborative endeavour with governments."

A spokeswoman for Google's DeepMind declined to comment. A representative of SpaceX, the Elon Musk company that now controls xAI, did not respond to a request for comment.

Bringing in more companies for research and safety testing of commercial AI tools marks a departure for the Trump White House, which has taken a largely hands-off approach to oversight and regulation of AI and technology companies.

Last year, US President Donald Trump signed a string of executive orders that formed the basis of his administration's "AI Action Plan", which he said would "remove red tape and onerous regulation" around AI development and ensure that the US will "win" through advancements and control of the technology.

But with the US military expanding its use of AI, and recent claims by Anthropic that it developed a model called Mythos that is too powerful to release to the public, the White House appears to be shifting its outlook.

Senior members of Trump's staff met last month with Anthropic CEO Dario Amodei, as the BBC previously reported, even as the company is mired in a lawsuit with the US Department of Defense over Anthropic's refusal to drop safety guardrails for government use of its models.
