Can AI Choose Your Competitors Better Than You? A Deep Dive into Peer Identification with Generative AI

In their thought-provoking article “Can Generative AI Help Identify Peer Firms?”, Yi Cao, Long Chen, Jennifer Wu Tucker, and Chi Wan explore whether large language models (LLMs) like Bard or ChatGPT can match or even outperform traditional systems in identifying a firm’s product market competitors. Published in the Review of Accounting Studies, the study investigates how well generative AI can tackle this complex and vital task for investors, regulators, and researchers.

Peer identification is central to evaluating corporate performance, governance, and events such as mergers or acquisitions. Traditionally, the process has been resource-intensive and often described as “an art form.” Generative AI presents a compelling alternative. Its ability to ingest vast public datasets, respond in real time, and operate at little or no cost to the user makes it a powerful tool for reducing information costs, especially for individual investors.

The authors use Bard’s API to generate peer lists for nearly 350,000 firm-year observations from 2003 to 2022. These AI-generated lists are then benchmarked against human experts and three established systems: the text-based TNIC system, analyst-mention-based peers, and compensation peers disclosed in proxy statements. Impressively, the overlap between LLM peers and expert-selected peers was 42.1 percent, far above chance and significantly better than TNIC-based comparisons.
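To make the overlap comparison concrete, here is a minimal sketch of one plausible way to score agreement between two peer lists: the share of LLM-suggested peers that also appear on the expert list. The tickers and the exact metric are illustrative assumptions, not taken from the paper.

```python
def peer_overlap(llm_peers, expert_peers):
    """Share of LLM-suggested peers that the experts also selected."""
    llm = set(llm_peers)
    if not llm:
        return 0.0
    return len(llm & set(expert_peers)) / len(llm)

# Hypothetical ticker lists, for illustration only.
llm = ["PEP", "KDP", "MNST", "STZ"]
expert = ["PEP", "KDP", "TAP"]
print(peer_overlap(llm, expert))  # 0.5
```

Averaging this score across all focal firms, and comparing it to the score a randomly drawn peer list would achieve, gives the kind of above-chance benchmark the study reports.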

To validate the economic relevance of LLM-generated peers, the authors examine correlations in future stock returns and accounting fundamentals such as sales growth and profit margins. In both cases, LLM peers show stronger alignment with the focal firms than peers identified by TNIC or SIC systems. Notably, LLMs perform particularly well for large firms, which have richer public data, reinforcing the idea that more data leads to better machine inference.

Homogeneity among peers is another critical metric. LLM-identified peer groups exhibited lower deviation in both stock performance and accounting outcomes than their TNIC counterparts, suggesting that AI-generated peer lists are not only more relevant but also more consistent internally.
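Homogeneity of this kind is commonly measured as within-group dispersion of an outcome: the lower the spread, the tighter the peer group. The sketch below uses population standard deviation of a single outcome (hypothetical sales-growth figures); the paper's exact homogeneity measure is not reproduced here.

```python
from statistics import pstdev

def group_dispersion(outcomes):
    """Within-group standard deviation of a peer outcome
    (e.g., sales growth); lower means more homogeneous."""
    return pstdev(outcomes)

# Hypothetical sales-growth figures for two candidate peer groups.
llm_group = [0.05, 0.06, 0.04, 0.05]
tnic_group = [0.02, 0.11, -0.03, 0.08]
print(group_dispersion(llm_group) < group_dispersion(tnic_group))  # True
```

Comparing this dispersion across peer-selection systems, group by group, is what "lower deviation in both stock performance and accounting outcomes" amounts to in practice.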

Beyond the technical evaluation, the study shows that LLM peers are less biased than firm-selected benchmarks in CEO compensation comparisons. LLM-based peer selection also sharpens hypothesis tests in empirical research: for instance, it yields stronger evidence that peer delistings harm the information environment of focal firms, a hypothesis previously tested with more conventional peer identification methods.

What distinguishes this paper is not merely its evidence in favor of generative AI but the broader implications it draws. Generative AI, when used responsibly, can democratize access to high-quality financial insights, offering capabilities that were once exclusive to sophisticated analysts or institutions. It also prompts a rethinking of what constitutes “expertise” in an era where machines can aggregate and contextualize massive data repositories within seconds.

The authors conclude that while LLMs are not perfect and require human oversight, they represent a meaningful step forward in financial analytics. With rapid advances in AI technology, the results presented here may already represent a lower bound on what is possible.

The article by Cao et al., “Can Generative AI Help Identify Peer Firms?”, was published in 2025 in the Review of Accounting Studies and is available online.