Fabricating CSR authenticity: The Illusory Truth Effect of CSR communication on social media in the AI era

Corporate Social Responsibility (CSR) communication via social media offers significant opportunities for organizations. Posts by third-party stakeholders allow for critical evaluation of CSR efforts, fostering authenticity through the anonymous, collective sharing of personal experiences. The advent of Large Language Models (LLMs), which facilitate the rapid and cost-effective creation of bot-driven posts, raises concerns about whether an increasing number of fabricated CSR messages could linearly influence an audience’s perception of a company’s CSR authenticity. We base our hypotheses on the Illusory Truth Effect, which suggests that perceived authenticity can increase with exposure to more messages. However, this effect continues only up to a certain tipping point, after which it plateaus. We tested our hypotheses in a study with 480 participants, presenting AI-generated CSR testimonials about Shell to three groups with zero, low, and high exposure. We found a significant increase in perceived CSR authenticity in the low-exposure group compared to the zero-exposure group, with the effect tapering off in the high-exposure group. We conclude that LLMs can effectively replace human-written CSR messages for a fraction of a cent, yet the main strength of LLMs (sheer volume, leading to repeated exposure) is unlikely to become a concern.

Citation

Illia, Laura, Rafael Ballester-Ripoll, and Anika K. Clausen. "Fabricating CSR authenticity: The Illusory Truth Effect of CSR communication on social media in the AI era." Public Relations Review 51.3 (2025): 102588.

Authors from IE Research Datalab