The article cites a Harvard Business School study finding that AI-assisted workers completed 12 percent more tasks and finished them 25 percent faster than unaided peers, but erred 19 percent of the time. Adobe abandoned annual reviews in favor of ongoing check-ins, freeing 80,000 manager hours yearly. A Connext Global survey found 66 percent of workers engage in "productivity theater," with only 23 percent evaluated on outcomes and 45 percent rewarded for speed over impact. The specific details of how performance review tools from vendors like Lattice and AIHR are integrating AI into evaluation systems remain underdeveloped in available reporting.
For in-house counsel and employment lawyers, this signals a brewing tension between AI-driven efficiency metrics and legal exposure around performance management. As organizations automate review processes with AI tools, they risk embedding algorithmic bias into compensation and termination decisions, a documented liability under Title VII and state discrimination statutes. Seventy-seven percent of workers want outcome-based reforms, suggesting litigation risk if companies double down on speed-based metrics that disadvantage older workers or other protected classes. Firms should audit whether their performance systems can withstand disparate impact analysis and whether they are documenting the human judgment layer that AI tools cannot replace.