Environmental Group’s AI-Written Submissions Cite Nonexistent Research
An organization prominent in campaigning against renewable energy projects has been submitting AI-generated documents to government inquiries that cite nonexistent research papers, defunct or fabricated government authorities, and even an imaginary wind farm, according to a Guardian Australia investigation. Rainforest Reserves Australia (RRA), a conservation charity that has gained influence among conservative politicians and media outlets, has admitted to using artificial intelligence to prepare more than 100 submissions to government bodies since August 2024.
The organization’s submission writer, Anne S Smith, acknowledged using AI not only for preparing submissions but also for responding to media inquiries. This revelation raises serious questions about the integrity of public consultation processes and the potential for AI systems to generate convincing but completely fabricated evidence that could influence policy decisions.
Academic Experts Denounce Misrepresentation of Their Work
Two leading academics whose work was cited in RRA’s submissions told Guardian Australia that their research had been completely misrepresented. Professor Naomi Oreskes, a Harvard science historian and co-author of “Merchants of Doubt,” stated that her work was cited in a “100% misleading” manner regarding claims about net zero policies.
Similarly, Professor Bob Brulle of Brown University, an expert on climate change opposition networks, said citations of his work in RRA submissions were “totally misleading” and “absurd.” Neither academic’s research supported the claims attributed to them in the submissions, highlighting how AI-generated content can create sophisticated but false academic backing for political positions.
Nonexistent Research and Government Agencies
The investigation uncovered multiple instances of completely fabricated references in RRA’s submissions. Publisher Elsevier confirmed that two papers supposedly published in its Journal of Cleaner Production, cited as evidence that renewable energy infrastructure releases “forever chemicals” (PFAS), were “hallucinated” and do not exist.
Submissions also referenced the “Queensland Environmental Protection Agency”, a body abolished in 2009, alongside entirely fictional authorities such as the “Australian Regional Planning Commission” and the “Queensland Planning Authority”. One submission even cited a contamination report from a wind farm at Oakey, Queensland, despite no such wind farm existing in that location.
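Fabricated references of this kind can often be caught before they enter the public record. The sketch below, written in Python against the public Crossref REST API (api.crossref.org), shows one way a reviewer might check whether a cited paper actually exists; the fuzzy-matching threshold and the example citation are illustrative assumptions, and a failed lookup should flag an entry for human review rather than prove fabrication.

```python
"""Minimal sketch: check whether a cited paper can be found on Crossref.

The Crossref REST API endpoint and response fields used here are real;
the matching threshold is an illustrative assumption. No match means
"flag for human review", not "proven fake".
"""
import difflib

import requests


def find_on_crossref(title: str, journal: str | None = None, rows: int = 5) -> bool:
    """Return True if Crossref lists a work closely matching the title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = (item.get("title") or [""])[0]
        # Hallucinated titles typically resemble no indexed work closely.
        score = difflib.SequenceMatcher(None, title.lower(), candidate.lower()).ratio()
        journal_ok = journal is None or journal.lower() in " ".join(
            item.get("container-title", [])
        ).lower()
        if score > 0.9 and journal_ok:
            return True
    return False


if __name__ == "__main__":
    # Hypothetical citation of the kind flagged in the investigation.
    ok = find_on_crossref(
        "PFAS leaching from wind turbine blade coatings",
        journal="Journal of Cleaner Production",
    )
    print("found on Crossref" if ok else "no close match; flag for review")
```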
AI Detection and Industry Implications
Dr. Aaron Snoswell of Queensland University of Technology’s GenAI Lab analyzed samples of RRA’s submissions using AI detection platforms, which identified “large portions of text that the platforms were very confident were AI generated.” He noted that inconsistent references represent “a classic mistake that’s made by AI systems,” though he emphasized that AI use itself isn’t problematic if properly verified.
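Snoswell’s point about inconsistent references lends itself to partial automation. The sketch below flags in-text author-year citations that have no matching entry in a reference list; the citation format and regular expressions are simplifying assumptions, and real submissions would need a proper citation parser.

```python
"""Illustrative sketch: flag in-text citations with no matching
reference-list entry, the kind of internal inconsistency described
as a classic AI mistake. Author-year formatting is an assumption."""
import re


def find_orphan_citations(body: str, references: str) -> set[str]:
    # In-text citations like "(Oreskes, 2010)" or "(Brulle et al., 2021)".
    in_text = {
        (m.group(1).split()[0].lower(), m.group(2))
        for m in re.finditer(r"\(([A-Z][A-Za-z]+(?: et al\.)?),\s*(\d{4})\)", body)
    }
    # Reference entries assumed to start "Surname, Initial. (Year)".
    listed = {
        (m.group(1).lower(), m.group(2))
        for m in re.finditer(r"^([A-Z][A-Za-z]+),.*?\((\d{4})\)", references, re.M)
    }
    return {f"{name.title()} ({year})" for name, year in in_text - listed}


if __name__ == "__main__":
    body = "Net zero harms koalas (Smith, 2023) and wetlands (Jones, 2021)."
    refs = "Smith, A. (2023). A real-looking title. Journal of Examples."
    print(find_orphan_citations(body, refs))  # -> {'Jones (2021)'}
```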
The case highlights broader concerns about how generative AI might be misused in policy advocacy. As these systems become more sophisticated, the potential for producing convincing but fabricated evidence increases, creating challenges for regulatory oversight and public trust in democratic processes.
Environmental Community Reacts
Cam Walker, campaigns coordinator at Friends of the Earth Australia, reviewed the RRA submissions and described them as containing “fabrications that corrupt the evidence base that decision-makers and communities rely on.” He expressed concern that such practices “poison the well for legitimate environmental concerns” and undermine genuine efforts to ensure renewable energy is properly planned.
Walker noted finding “multiple submissions across different renewable energy projects, all authored by the same person from RRA… all showing the same pattern of fake citations.” He emphasized that citing abolished government departments or nonexistent reports constitutes “misrepresentation” rather than legitimate community representation.
Broader Industry Context and Legal Precedents
This case emerges amid growing concern about AI-generated content across sectors, from legal disputes over intellectual property and AI features in creative tools to intensifying regulatory scrutiny of digital platforms, as authorities grapple with the implications of synthetic material.
The episode also underscores the tension between rapid innovation and accountability: as organizations adopt AI tools, transparency about how those tools are used becomes increasingly important for preserving public trust.
Defense and Implications for Policy Processes
In her AI-assisted response to Guardian Australia, Smith defended the submissions as “entirely under my direction” and claimed the use of “AI-assisted literature searches, data synthesis, and document preparation” was appropriate. She maintained that citations of Oreskes and Brulle were fair and suggested that one missing Elsevier paper might have become “inaccessible” because it “contained findings that challenge dominant policy narratives.”
The case raises fundamental questions about how government inquiries can maintain integrity when faced with AI-generated submissions. As such content grows more sophisticated, verification processes for public submissions may need significant strengthening to prevent fabricated evidence from influencing policy decisions, a challenge for democratic processes worldwide as the technology becomes more accessible and capable.
This article aggregates information from publicly available sources. All trademarks and copyrights belong to their respective owners.