Meta Employees Using Company’s AI Chatbot to Write Performance Reviews
Internal AI Tool Sparks Debate Within Meta as Workers Question Accuracy, Bias, and Confidentiality in Automated Evaluations
Menlo Park, Monday, November 10, 2025

In a development that highlights the growing influence — and controversy — of artificial intelligence in corporate settings, Meta employees have begun using the company’s internal AI chatbot to assist in drafting performance reviews. The tool, built on Meta’s proprietary LLaMA (Large Language Model Meta AI) architecture, is designed to help employees summarize achievements, identify growth areas, and provide peer feedback. However, according to insider reports, many staff members remain skeptical about the fairness, accuracy, and long-term implications of allowing AI to play a role in professional evaluations.
Meta’s Push Toward AI-Driven Productivity
Meta introduced its AI-assisted review tool earlier this year as part of CEO Mark Zuckerberg’s “Year of Efficiency 2.0” initiative, which seeks to streamline internal processes and boost productivity across departments. The chatbot is trained on internal documentation, project data, and company-specific communication patterns to generate concise, structured summaries that fit Meta’s performance review format.
Employees can input bullet points about their quarterly progress, and the AI generates polished narratives that align with company guidelines. The feature reportedly saves time, especially for engineers and product managers juggling multiple deliverables.
A Meta spokesperson described the chatbot as “a productivity assistant, not a decision-maker,” clarifying that final reviews are still written and approved by humans.
Internal Division Over Trust and Transparency
Despite these assurances, many employees have voiced concerns over the chatbot’s reliability and potential biases. Some fear the AI’s summaries could misrepresent contributions, downplay creative work, or overweight easily measured output, distortions that could unfairly influence promotion and compensation decisions.
“People are afraid the AI will say what managers want to hear, not what truly represents our work,” one Meta employee wrote anonymously on internal forums.
Others worry about data confidentiality, since performance inputs could include sensitive project details or interpersonal feedback. Although Meta claims that all submissions remain encrypted and confined to its internal ecosystem, distrust persists — particularly following recent layoffs and restructuring within the company.
Meta’s Response: “AI Is an Assistant, Not an Evaluator”
In an internal memo, Meta’s HR division emphasized that the AI tool is optional and meant only to assist with writing clarity and tone. The company insists that the system neither scores nor ranks employees.
Meta’s Chief People Officer stated, “We believe AI can help employees articulate their work more effectively, especially in large, distributed teams. However, managers will always make the final assessment.”
The company has also added a disclaimer on the AI interface, reminding users that “outputs may contain inaccuracies and should be reviewed before submission.”
Mixed Reactions Among Employees
Initial reactions within Meta’s workforce are split.
- Supporters say the tool reduces stress during the performance review cycle, particularly for non-native English speakers and employees uncomfortable with self-evaluation writing.
- Critics argue it promotes homogenized, robotic feedback, undermining authenticity and nuance in professional reviews.
A few teams have already reverted to traditional manual reviews, citing the AI’s tendency to generate overly generic or inflated assessments, which HR reviewers later flagged as inconsistent.
Broader Implications for Workplace AI
Meta’s experiment comes amid a broader trend of tech companies embedding AI into internal operations — from Google’s Gemini for meeting summaries to Amazon’s “AI manager assistants.” Experts say the shift toward AI-driven workplace tools raises ethical questions about algorithmic influence in career progression and employee evaluation.
“AI can improve efficiency, but when it enters performance management, it risks reinforcing organizational biases unless carefully audited,” said Dr. Leena Kapoor, an AI ethics researcher at Stanford University.
The Irony: Meta Employees Don’t Fully Trust Meta’s AI
While Meta continues to position itself as a leader in enterprise AI, internal resistance to its chatbot underscores the trust gap between human employees and corporate AI systems. Ironically, some Meta engineers who build AI models are among the loudest skeptics of using them for performance reviews.
A leaked internal survey reportedly found that 42% of Meta employees expressed discomfort with AI-written evaluations, with one respondent writing, “It feels like being judged by your own code.”
The Bigger Picture
Meta’s move is part of a broader cultural transformation where AI becomes a workplace co-pilot, shaping not just products but internal processes. The company insists it will refine the chatbot based on feedback and ensure ethical guidelines are followed.
Still, the internal debate raises an important question: Can AI truly evaluate human effort without stripping away the personal, creative, and emotional dimensions of work?
Key Highlights
- Meta introduces AI chatbot to assist in employee performance reviews.
- Built on LLaMA architecture, the tool helps structure self-assessments.
- Employees divided over trust, bias, and data privacy concerns.
- Meta assures tool is optional and “not a replacement for human judgment.”
- Internal survey shows growing skepticism despite efficiency gains.