Meta Expands AI Training Through Keystroke Monitoring of Employees Across Major Platforms
Meta has developed a comprehensive employee monitoring initiative that tracks digital activity across hundreds of websites and applications, including Google, LinkedIn, and Wikipedia, as revealed in internal communications obtained by CNBC. The surveillance program aims to gather behavioral data for artificial intelligence model development.
Tracking Tool Details
The monitoring system, officially named the Model Capability Initiative (MCI), captures employee keystrokes and mouse movements from work computers, according to the internal communications reviewed by CNBC. The tracked platforms extend beyond external services to encompass Meta's own applications, including Threads, as well as enterprise software such as GitHub, Slack, and Atlassian tools. The monitoring list initially included third-party AI applications such as ChatGPT, Claude, and Manus before being refined.
Internal discussions regarding MCI intensified following a memo distributed by a Meta Superintelligence Labs representative attempting to address worker privacy concerns about the initiative.
Strategic Context
The data collection effort aligns with CEO Mark Zuckerberg's strategy to advance Meta's position in generative artificial intelligence, an area where the company significantly lags behind competitors including OpenAI, Anthropic, and Google. In response, Zuckerberg initiated a major investment and recruitment campaign beginning the previous summer, bringing on Scale AI's Alexandr Wang to establish a specialized team focused on developing new foundational models.
This month, Meta introduced Muse Spark, its first major AI model developed under Wang's oversight within the Superintelligence Labs division. The company is pursuing AI agents capable of executing routine office and software development tasks traditionally performed by professional workers.
Company Justification
A Meta representative acknowledged the program while explaining its purpose: "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus. To help, we're launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose."
According to the internal memo, MCI requires a "big and unbiased" dataset reflecting authentic employee computer usage patterns to train models effectively.
Employee Concerns and Privacy Issues
Staff members have responded negatively to the initiative. Multiple employees described the monitoring program as "dystopian" in internal messages reviewed by CNBC. Specific concerns raised include:
- Potential exposure of passwords and login credentials
- Disclosure of confidential product development information
- Unintended capture of personal employee data regarding immigration status, health, and family members
Safeguards and Assurances
The Superintelligence Labs memo outlined several protective measures, stating that MCI would observe only visible screen content and would "not read in files or attachments." The statement further assured that "any incidental personal information in your corporate email that may get captured from the screen, will not be learned by the model, due to the mitigations above."
The memo suggested that concerned employees could manage their exposure by avoiding personal activities on work computers.