Citations have long been a cornerstone of how academic impact is measured, but their effectiveness in evaluating business school research is contested. They signal whose work is shaping debates, building new ideas and inspiring the next wave of research on subjects from the impact of biodiversity to executive pay. But when it comes to business school scholarship, are citations valued too much or not enough?
At Edhec Business School in France, for instance, citations are not only monitored but tied directly to career progression. “Citation counts are one of the criteria used in academic promotions, for example from assistant to associate professor, and from associate to full professor,” says Michael Antioco, dean of research.
He also stresses the importance of citations in signalling institutional reputation. “Highly cited research signals thought leadership, strengthens our position in international rankings and shows that we contribute to the global academic conversation,” he adds. Yet he cautions against reducing research to numbers alone, noting that citations vary by discipline and sometimes undervalue work that is foundational or ahead of its time.
At France’s Essec Business School, associate dean of research Ha Hoang agrees that citations matter, but warns against over-reliance. “It would be a mistake to focus solely on becoming highly cited or to rely solely on citation rates for academic evaluation,” she says. Instead, she encourages young scholars to broaden their visibility through conferences and social media.
With the emergence of AI-powered tools, Prof Hoang sees potential for richer insights into how research is used. “Understanding citation patterns could then be more useful for the individual scholar as an input to maintaining a productive research career,” she adds.
Gaizka Ormazabal, associate dean for research at Iese Business School in Barcelona, points to more fundamental limitations. He notes that citations often reward fashionable topics and large networks, while neglecting niche or slow-moving contributions.
“Research that changes how companies make decisions, guides new regulations or influences public debate may have substantial real-world consequences while generating relatively few academic citations,” says Prof Ormazabal. Citing the San Francisco Declaration on Research Assessment and the Leiden Manifesto for Research Metrics as examples of academics and journals challenging measurement criteria, he argues against using citations in isolation.
Still, Prof Ormazabal acknowledges citations’ enduring appeal: “[They] remain the most common currency of academic impact,” he says. “They’re one of the few transparent, comparable and widely available indicators across disciplines.” He also highlights their role in recording “intellectual debts” across generations of research.
There is some consensus that citations matter a great deal but are insufficient on their own. Prof Antioco points to an “economic responsibility”, stressing that institutions should invest in work that genuinely influences the field. Prof Hoang sees new technologies opening possibilities for scholars to track and leverage how their work circulates. And Prof Ormazabal insists that tenure, promotion and rankings must include qualitative judgments on originality, relevance and societal impact.
Evaluation of business school research is unlikely to abandon citations, but it will need to balance them with broader, more nuanced measures of academic and real-world influence.
Among the top 10 most-cited research articles according to the Scite tool (see table, below) were:
Nature’s risk premium
Biodiversity is declining at alarming rates around the world, posing not only ecological but also financial risks. Companies depend on ecosystem services such as pollination and clean water, while simultaneously causing biodiversity loss through land use, pollution and emissions.
In “The biodiversity premium”, Guillaume Coqueret (EMLyon), Thomas Giroux (CREST–Mirova) and Olivier David Zerbib (CREST Laboratory) examine how the loss of biodiversity affects asset pricing and investment risks and ask: do markets price in these risks?
Using data from US companies between 2012 and 2022, they construct “biodiversity factors” that distinguish low-impact (“green”) from high-impact (“brown”) companies. Overall, these biodiversity measures behave differently from traditional financial risk factors and from carbon-related measures. Since 2021, investors have started treating companies with greater biodiversity impacts, such as those in agriculture, utilities and construction, as riskier, demanding higher future returns to hold them. Meanwhile, companies with lower impacts, which previously earned higher returns, are now expected to deliver lower returns because they are considered safer.
The study highlights that biodiversity risks are becoming financially significant and could influence the cost of capital, making preservation not just an ecological necessity but also an economic one.
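As a rough sketch of the long-short construction the authors describe, the snippet below sorts firms each year on a hypothetical biodiversity-impact score and computes the return spread between high-impact (“brown”) and low-impact (“green”) portfolios. The data, column names and median split are illustrative assumptions, not the paper’s actual inputs or methodology.

```python
import pandas as pd

# Illustrative inputs, not the paper's data: one row per firm-year with an
# annual stock return and a hypothetical biodiversity-impact score, where a
# higher score means a larger footprint (land use, pollution, emissions).
data = pd.DataFrame({
    "year":   [2021, 2021, 2021, 2021, 2022, 2022, 2022, 2022],
    "firm":   ["A", "B", "C", "D", "A", "B", "C", "D"],
    "ret":    [0.08, 0.03, 0.12, 0.05, -0.02, 0.01, 0.04, -0.01],
    "impact": [0.9, 0.2, 0.8, 0.1, 0.9, 0.2, 0.8, 0.1],
})

def brown_minus_green(df: pd.DataFrame, q: float = 0.5) -> pd.Series:
    """Yearly return spread: high-impact ('brown') firms minus low-impact ('green') firms."""
    def spread(group: pd.DataFrame) -> float:
        hi_cut = group["impact"].quantile(1 - q)
        lo_cut = group["impact"].quantile(q)
        brown = group.loc[group["impact"] >= hi_cut, "ret"].mean()
        green = group.loc[group["impact"] <= lo_cut, "ret"].mean()
        return brown - green
    return df.groupby("year")[["ret", "impact"]].apply(spread)

# A positive and widening spread would indicate investors demanding extra
# return to hold high-impact firms: a "biodiversity premium".
print(brown_minus_green(data))
```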
From anger to understanding
Police separate anti-migration and counter-protesters in London in September. Tackling animosity is the theme of one top paper
Political divisions in the US have raised concerns about the strength of democracy. In the 2024 paper “Megastudy testing 25 treatments to reduce antidemocratic attitudes and partisan animosity”, more than 80 academic co-authors from universities and business schools conducted one of the largest experiments of its kind, involving more than 32,000 participants. They tested 25 strategies proposed by academics and practitioners to reduce partisan animosity, support for undemocratic practices and willingness to endorse political violence.
The results were striking: 23 of the 25 approaches lowered partisan hostility, often by highlighting sympathetic individuals from an opposing party or emphasising shared identities such as national belonging. Six strategies reduced support for undemocratic practices, particularly those that corrected false beliefs about political opponents or underscored the risks of democratic collapse. Five “treatments” also cut support for partisan violence, especially when political leaders affirmed democratic norms.
The findings showed that, while partisan animosity and antidemocratic attitudes are distinct, both can be reduced. The paper provides evidence-based tools that non-profits, educators and policymakers can use to promote healthier democratic engagement.
Seeing is not always believing
With the rise of realistic “deepfake” technology, there are fears that fake political videos may become undetectable. In “Human detection of political speech deepfakes across transcripts, audio, and video”, Matthew Groh (Kellogg School of Management), Aruna Sankaranarayanan, Nikhil Singh, Dong Young Kim, Andrew Lippman and Rosalind Picard (all MIT) investigate how well people can distinguish real and fabricated political speeches.
The authors ran five experiments with more than 2,200 participants, showing clips of real and fabricated speeches by US presidents Joe Biden and Donald Trump. Participants were asked to judge whether what they saw or heard was authentic.
The results showed that while participants’ judgments were better than random guessing, they were far from perfect. Accuracy was lowest when they read only transcripts and highest when they saw video with audio. But deepfakes created with advanced text-to-speech software were especially hard to detect, with the accuracy of participants’ judgments often close to random guessing.
The study suggests that people rely heavily on audiovisual cues — how something is said — rather than content alone. While deepfakes remain detectable, their growing realism raises challenges for democracy, media trust and defences against misinformation.
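A minimal sketch of the kind of comparison the study reports: per-condition detection accuracy tested against the 50 per cent level expected from random guessing. The condition labels and counts below are invented for illustration, not the experiment’s data.

```python
from scipy.stats import binomtest

# Hypothetical counts of correct judgments per presentation format
# (invented for illustration, not the study's data).
conditions = {
    "transcript only":  (620, 1100),   # (correct judgments, total judgments)
    "audio only":       (720, 1100),
    "video with audio": (810, 1100),
}

for name, (correct, total) in conditions.items():
    accuracy = correct / total
    # Two-sided test of whether accuracy differs from 50% chance guessing.
    result = binomtest(correct, total, p=0.5)
    print(f"{name:>16}: accuracy {accuracy:.1%}, p-value {result.pvalue:.3g}")
```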
One-size-fits-all pay
Executive compensation is usually tailored to motivate leaders with a mix of salary, bonuses, stock awards, stock options, pensions and perks. Yet, since 2006, nearly a quarter of the variation in these packages has disappeared, meaning companies increasingly use the same formulas regardless of size, industry or profitability.
In “Executive compensation: The trend toward one-size-fits-all”, Felipe Cabezón, of Pamplin College of Business at Virginia Tech, explores how chief-executive pay structures have become strikingly uniform across US companies.
Prof Cabezón finds that this standardisation is driven by two forces: the growing influence of institutional investors and the recommendations of proxy advisory firms, which guide shareholder votes on pay. Expanded disclosure rules, requiring more detailed reporting of executive pay, also made it easier for businesses to mimic each other.
The catch? While uniformity may simplify oversight, it often reduces shareholder value. Standardised contracts prevent boards from tailoring incentives to each company’s unique circumstances. The study warns that what looks like “best practice” may, in fact, undermine performance, highlighting the unintended costs of a one-size-fits-all approach to CEO pay.
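One illustrative way to quantify that convergence, though not Prof Cabezón’s actual measure: express each package as component shares and track the cross-sectional variance of those shares over time, where a fall signals pay structures becoming more alike. The figures below are invented.

```python
import pandas as pd

# Hypothetical CEO pay packages in $m (invented, not data from the study).
pay = pd.DataFrame({
    "year":    [2006, 2006, 2006, 2020, 2020, 2020],
    "salary":  [1.0, 2.5, 0.8, 1.2, 1.3, 1.1],
    "bonus":   [2.0, 0.5, 3.0, 1.5, 1.4, 1.6],
    "stock":   [4.0, 8.0, 1.5, 6.0, 6.2, 5.8],
    "options": [3.0, 0.0, 5.0, 2.0, 2.1, 1.9],
})
components = ["salary", "bonus", "stock", "options"]

# Convert each package to component shares so differences in total pay drop out.
shares = pay[components].div(pay[components].sum(axis=1), axis=0)
shares["year"] = pay["year"]

# Cross-sectional variance of the pay mix, summed over components, per year.
# A decline between years indicates packages converging on a common structure.
dispersion = shares.groupby("year")[components].var().sum(axis=1)
print(dispersion)
```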
The most positively cited articles
Source: Scite, year to July 2025
Estimating discount functions with consumption choices over the lifecycle, by David Laibson, Sean Chanwook Lee, Peter Maxted, Andrea Repetto, Jeremy Tobacman
Signaling theory: state of the theory and its future, by Brian L Connelly, S Trevis Certo, Christopher R Reutzel, Mark R DesJardine, Yi Shi Zhou
Measuring innovation and navigating its unique information issues, by Stephen Glaeser, Mark Lang
Megastudy testing 25 treatments to reduce antidemocratic attitudes and partisan animosity, by more than 80 authors including Christopher J Bryan, Hanne K Collins, Charles Dorison, Aaron C Kay, Nour Kteily and Maytal Saar-Tsechansky
Human detection of political speech deepfakes across transcripts, audio, and video, by Matthew Groh, Aruna Sankaranarayanan, Nikhil Singh, Dong Young Kim, Andrew Lippman, Rosalind Picard
Executive compensation: the trend toward one-size-fits-all, by Felipe Cabezón
The biodiversity premium, by Guillaume Coqueret, Thomas Giroux, Olivier David Zerbib
The temporality of crisis and the crisis of temporality, by Lorenzo Skade, Elisa Lehrer, Yanis Hamdali, Jochen Koch
Spending and job-finding impacts of expanded unemployment benefits, by Peter Ganong, Fiona Greig, Pascal Noel, Daniel M Sullivan, Joseph Vavra
What if? The macroeconomic and distributional effects for Germany of a stop of energy imports from Russia, by Rüdiger Bachmann, David Baqaee, Christian Bayer, Moritz Kuhn, Andreas Löschel, Benjamin Moll, Andreas Peichl, Karen Pittel, Moritz Schularick
Adding context to citations
Scite is a research tool that analyses how scientific papers are cited, classifying citations as supporting, contrasting or mentioning and giving researchers contextual “smart citations” instead of raw counts. A smart citation not only tells you that Paper A was cited by Paper B, but also indicates whether Paper B supports, contrasts with or merely mentions it.
Unlike traditional citation metrics, which treat all citations as equal, Scite classifies references by intent and location within the citing paper. Co-founder Josh Nicholson says this provides “more nuance and interpretation into impact”, allowing researchers to distinguish between evidence that supports, challenges or references earlier work. He adds that Scite can adapt existing measures, such as the “h-index”, which measures researchers’ productivity and citation impact, by using only supporting citations.
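As a sketch of the adaptation Nicholson describes, the function below computes a conventional h-index and then recomputes it using only citations classified as supporting. The per-paper counts are invented, and the supporting labels are assumed to come from a classifier such as Scite’s.

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citation_counts, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-paper counts: (all citations, citations classified as supporting).
papers = [(120, 40), (60, 25), (30, 4), (12, 6), (9, 1), (3, 2)]

print("h-index, all citations:       ", h_index([total for total, _ in papers]))
print("h-index, supporting citations:", h_index([supp for _, supp in papers]))
```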
Scite’s applications extend beyond research evaluation. Nicholson points out that with the rise of large language models, smart citations are helping to ground AI outputs in verifiable research, offering users a more trustworthy way to discover and understand science.
The tool is not without limitations: Scite cannot yet classify non-English citations and may miss context if a paper’s overall intent differs from a single citation. “The metrics we use to evaluate research remain primitive,” says Nicholson. “It’s time to rethink how we measure impact and give researchers tools that enable smarter, faster decisions.”