AI-facilitated investigation: Opportunities and risks
This meeting of the Open Data Working Group of the Countering Environmental Corruption Practitioners Forum explored the growing role of artificial intelligence (AI) in law enforcement investigations and its evidential reliability in court.
AI-driven tools are increasingly used to analyse large datasets, identify patterns and support investigative decision-making. Drawing on research at the intersection of forensic linguistics, technology-assisted investigation and digital evidence, our speaker examined how AI and Large Language Models (LLMs) are reshaping the analysis of publicly available information.
Speaker
- Tim Grant, Professor of Forensic Linguistics, Aston University
Moderator
- Fred Ellis, Wildlife Trade Analyst and Financial Specialist, TRAFFIC
Takeaways
Advances in AI and large language models are transforming how large volumes of information can be processed and analysed. As these tools become more widely available, researchers, analysts and investigators are increasingly experimenting with ways to incorporate them into their work. At the same time, AI cannot replace the expertise, judgement and contextual understanding that investigators bring to a case. Instead, AI should be seen as a tool that can complement professional analysis, while human oversight remains essential to ensure that findings are interpreted accurately and responsibly.
Opportunities
- Managing large and complex datasets. Investigations into corruption and environmental crime often involve vast amounts of documentation, financial records and communications. AI tools can help review, organise and analyse these datasets more efficiently, helping investigators identify relevant information and navigate complex cases more effectively.
- Supporting research and analysis. Large language models can assist with tasks such as summarising lengthy documents, extracting key information and highlighting potential patterns across datasets. These tools can provide preliminary insights that investigators can then explore and verify through further analysis.
- Expanding access to specialised analytical tools. Some types of analysis that previously required specialised expertise, such as linguistic analysis of communications, may become more accessible with AI-assisted tools. This can help investigators explore new approaches and strengthen evidence analysis.
Risks
- Confident but inaccurate outputs. AI systems can produce responses that appear convincing but contain errors or misleading information. Without careful verification, these outputs could introduce inaccuracies into investigative work.
- Overreliance on AI-generated results. Because AI outputs are often presented in clear and authoritative language, users may be tempted to trust them too readily. Maintaining a critical approach and verifying results remains essential.
- Limited transparency raises accountability challenges. Many AI systems operate as “black boxes”, making it difficult to understand how they generate their conclusions. This lack of transparency raises important questions about reliability and accountability, particularly in investigative contexts where evidence and analysis must withstand scrutiny in court.
Relevant resources
- Language and Online Identities: The Undercover Policing of Internet Sexual Crime
- Cambridge Elements in Forensic Linguistics
- Writing Wrongs podcast
- The Idea of Progress in Forensic Authorship Analysis
Organisers
The Countering Environmental Corruption Practitioners Forum is a joint initiative of World Wildlife Fund (WWF), the Basel Institute on Governance, Transparency International and TRAFFIC.
Details
The meeting was open to current members of the Countering Environmental Corruption Practitioners Forum and its Open Data Working Group, as well as those potentially interested in joining. The session was held in English via Zoom.
Date and time: 4 March 2026, 09.00 EST / 15.00 CET