The working paper “Beliefs about Bots: How Employers Plan for AI in White-Collar Work” by Eduard Brüll, Samuel Mäurer and Davud Rostam-Afschar investigates how firms update their expectations once they see credible evidence about automation risk in professional services. Using a large randomized information experiment among tax advisory practices in Germany, the authors compare employers’ prior beliefs with expert assessments from the IAB Job-Futuromat and then track how these updated beliefs shape plans for hiring, wages, productivity, training and AI adoption. The setting is unusually strong because the survey covers the entire official register of licensed tax advisors, a population that is typically missing from standard firm datasets.
Before receiving information, firms consistently underestimate automatability across four occupations. The expert benchmark indicates very high substitution potential for tax clerks and certified tax assistants and substantial exposure even for tax advisors and auditors. After the information treatment, employers revise beliefs upward, particularly for the routine-intensive roles, while adjustments for higher-skill roles are smaller. The design allows the authors to isolate belief updating and show that the shift is causal rather than noise.
Despite the higher perceived risk, short-run employment intentions do not move. Hiring and dismissal plans remain statistically unchanged, and record-linked vacancy data show no contemporaneous response. What does change is the financial outlook: firms anticipate higher revenue and profit growth, while wage growth expectations stay negligible. In the regulated German tax advisory market, this pattern is consistent with productivity gains being realized by serving more clients per hour rather than charging higher prices, which implies limited rent sharing with employees in the near term.
Belief updates also shift expectations about skills and investment. Managers anticipate new task bundles in legal tech, compliance monitoring and AI interaction, including prompt engineering, and they report stronger intentions to train and upskill staff in these areas. Plans to adopt AI tools increase and are accompanied by more active information seeking, yet these intentions do not translate immediately into more posted vacancies, reinforcing the picture of productivity upgrades preceding workforce reshaping.
For internal audit, corporate governance and HR, the contribution is practical. The paper makes clear which evidence to request and how to link belief shifts to decisions. Auditors can ask for documented task-level automatability assessments by role, trace revenue-per-hour assumptions that underpin profit forecasts, and test wage and rent-sharing rationales against market and regulatory constraints. Governance teams can review training roadmaps for digital skills, legal tech and AI interaction, and verify that adoption intentions mature into procurement, control design and monitored process change. Embedding these checkpoints in risk-based audit planning creates traceability from updated beliefs to financial expectations and people decisions.
The study also enriches the broader literature by providing an employer-centered view that captures anticipatory responses rather than only ex-post outcomes. It shows that information can move expectations and investment plans without triggering immediate headcount changes, a result that aligns with recent evidence on the measured near-term labor market effects of generative AI and helps reconcile high exposure with modest displacement in white-collar settings.
For readers who want the methodology, figures and full set of results, the working paper is available here.
