πŸ“‹ Sustainability / ESG Reporting in Practice
Writing the Report · Lesson 7 of 7 · 10 min read

Using AI to Write (Without Producing AI Slop)

Let us be direct about this: most ESG consultants use AI, whether ChatGPT, Copilot, Gemini, Claude, or whatever tool is at hand. There is nothing wrong with that. AI is a legitimate productivity tool that can help you draft sections faster, brainstorm structures, and get past the blank-page problem.

The problem is what happens when people stop there. When the AI-generated draft goes straight to the client, or worse, straight into the final report, without a human being actually reading, editing, and rewriting it. That is AI slop, and the industry has a growing problem with it.

The AI Slop Problem

You can spot AI-generated content in an ESG report within seconds. It has a distinctive voice: vaguely confident, relentlessly positive, oddly generic. Phrases like "the company remains steadfastly committed to its sustainability journey" and "leveraging innovative solutions to drive meaningful impact" appear in paragraph after paragraph. Every initiative is "robust." Every strategy is "comprehensive." Every achievement is "significant."

The content says everything and nothing at the same time. It sounds professional but communicates no specific information about the actual company. You could swap the company name and the same paragraphs would work for any organization in any industry.

AI slop is not just a quality problem: it is a credibility problem. Big Four firms and major consultancies have been penalized for submitting AI-generated content without adequate review. Rating agencies and discerning readers can identify AI-generated text, and it undermines trust in the entire report. If the narrative sounds like it was written by a machine, readers start questioning whether the data was verified by a human at all.

This is not theoretical. Major consulting firms have faced real consequences for AI-generated deliverables that were not properly reviewed. When a client is paying for professional expertise and receives generic AI output, it damages the relationship and the firm's reputation. When a sustainability report reads like it was produced by a chatbot, the company's credibility on ESG commitments is undermined.

The Rule: AI Writes the Draft, You Write the Report

Here is the only approach that works: use AI to generate first drafts, then rewrite them yourself. Not "edit": rewrite. The AI output is raw material, not a finished product. It gives you a starting point, a structure to react to, sentences to improve. But the final version must sound like it was written by a person who knows the company, understands the industry, and has a point of view.

Think of AI like a research assistant who is enthusiastic, fast, and completely unfamiliar with your client. They hand you a draft that is grammatically perfect, structurally reasonable, and full of generic statements that could apply to any company. Your job is to take that draft and turn it into something specific, honest, and worth reading. The research assistant did the heavy lifting; you do the thinking.

What to Check When You Edit AI Output

When you sit down with an AI-generated draft, here is what to look for and fix:

Specificity. AI generalizes. It writes "the company has implemented various energy efficiency measures across its operations." You need to make it specific: "The company completed LED retrofits across 12 facilities and installed variable frequency drives on 45 HVAC units, reducing energy consumption by 21,500 GJ." Replace every generic claim with the actual details of what this company did.

Tone. AI defaults to a tone that is relentlessly upbeat and slightly formal. Real ESG reports need a tone that matches the company's communication style and acknowledges reality. If emissions went up, the report should say so and explain why, not bury it under optimistic language about "continued commitment to improvement."

Accuracy. AI confidently states things that may not be true. It will cite frameworks incorrectly, attribute targets that do not exist, or describe initiatives that sound plausible but never happened. Every factual claim in an AI draft needs to be verified against actual data and source documents.

Consistency. AI does not remember what it wrote three paragraphs ago. It might describe the company's emissions as "declining steadily" in one section and then present data showing a year-on-year increase in another. It might use different terminology for the same concept. It might switch between first and third person. You need to catch and fix all of this.

The "sounds smart but says nothing" problem. AI is excellent at producing sentences that sound substantive but communicate nothing. "The company continues to leverage its comprehensive sustainability framework to drive integrated value creation across stakeholder segments." What does that actually mean? If you cannot explain a sentence in plain language, delete it and write what you actually want to say.
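Some of these checks are mechanical enough to script. As a rough illustration (the pronoun lists and the "the company" heuristic are my own simplifications, not a standard tool), a few lines of Python can flag paragraphs that mix first and third person, one of the consistency slips described above:

```python
import re

# Hypothetical word list -- tune it for the client's house style.
FIRST_PERSON = {"we", "our", "us"}

def person_usage(paragraph: str) -> tuple[int, int]:
    """Count first-person pronouns and 'the company' references."""
    words = re.findall(r"[a-z']+", paragraph.lower())
    first = sum(w in FIRST_PERSON for w in words)
    third = paragraph.lower().count("the company")
    return first, third

def flag_mixed_person(paragraphs: list[str]) -> list[int]:
    """Return indices of paragraphs that use both voices."""
    return [i for i, p in enumerate(paragraphs)
            if all(n > 0 for n in person_usage(p))]

draft = [
    "We reduced Scope 1 emissions by 8% in FY26.",
    "The company invested in rooftop solar, and we expect further gains.",
]
print(flag_mixed_person(draft))  # -> [1]
```

A script like this only narrows the search; the judgment about which voice the report should use is still yours.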

AI draft: "The company has made significant strides in its water management practices, implementing innovative solutions to optimize water consumption across its operations. These efforts underscore the company's commitment to responsible resource stewardship and align with global best practices in water management."

After human editing: "Water consumption decreased by 11% in FY26, from 1.35 million KL to 1.2 million KL. The reduction was driven by two initiatives: a new recycling plant at the Chennai facility (now recycling 60% of process water) and rainwater harvesting systems installed at three sites, which captured 85,000 KL during the monsoon season. The company's target is a 50% water recycling rate by FY28."

The AI version is 40 words of nothing. The human version is 63 words of substance. The difference is not length: it is content.

The Workflow That Works

Here is a practical workflow for using AI effectively in ESG report writing:

The AI-Assisted Writing Workflow

Step 1: Gather your inputs first. Before you prompt the AI, collect the data, review past reports, read the annual report, and understand the company's story. You need to know what you want to say before you ask AI to help you say it.

Step 2: Use AI for structure and first drafts. Ask AI to draft a section based on specific inputs you provide. Give it the data points, the key messages, the company context. The more specific your prompt, the more useful the output.

A prompt template that actually works

Most people prompt AI with "write the environmental section of a sustainability report." That produces exactly the kind of generic slop described above. Here is a detailed prompt structure that forces better output:


"You are writing the [Environmental / Social / Governance] section of a sustainability report for [Company Name], a [industry] company headquartered in [location] with operations in [geographies].

Source documents attached: [list the specific documents - annual report, past sustainability report, data template, company policies]. Use ONLY the information in these documents. Do not invent data, initiatives, or targets that are not in the sources. If information is not available in the sources, say 'information not available in provided documents' rather than fabricating content.

Writing instructions:

  • Write in [first / third] person, [active / passive] voice
  • Do NOT use the following words or phrases: 'comprehensive,' 'robust,' 'innovative,' 'leveraging,' 'driving meaningful impact,' 'steadfastly committed,' 'holistic approach,' 'significant strides,' 'journey,' 'endeavor,' 'underscore,' 'pivotal'
  • Every claim must reference a specific number, initiative, or policy from the source documents
  • If performance declined, say so directly and explain why. Do not hide negative trends under optimistic language
  • Do not start consecutive paragraphs with the same word
  • Avoid sentences longer than 30 words
  • Do not end sections with vague forward-looking statements like 'the company remains committed to...' End with a specific target or next step instead

Data for this section: [paste the relevant data from the template]

Key messages to communicate: [list 3-4 specific points you want this section to make]

Tone: professional but direct. Think annual report language, not marketing copy. The reader is an analyst or investor, not a general audience."


The key principles in this prompt: it bans the most common AI crutch words, forces source-grounding by explicitly forbidding fabrication, requires specificity by demanding numbers and references, and sets a tone that avoids the relentlessly positive voice AI defaults to. Adjust the banned words list based on what you keep seeing in outputs, as every AI model has its favorite filler phrases.
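The banned-words rule can also be enforced after the fact. Here is a minimal sketch (the phrase list mirrors the prompt above; the function itself is illustrative, not a published tool) that scans a draft and reports which filler phrases survived:

```python
import re

# Phrase list taken from the prompt above; extend it as you spot new filler.
BANNED = [
    "comprehensive", "robust", "innovative", "leveraging",
    "driving meaningful impact", "steadfastly committed",
    "holistic approach", "significant strides", "journey",
    "endeavor", "underscore", "pivotal",
]

def find_filler(draft: str) -> dict[str, int]:
    """Return each banned phrase found in the draft, with its count."""
    text = draft.lower()
    hits = {}
    for phrase in BANNED:
        count = len(re.findall(r"\b" + re.escape(phrase), text))
        if count:
            hits[phrase] = count
    return hits

draft = ("The company has made significant strides on its sustainability "
         "journey, leveraging a robust and comprehensive framework.")
print(find_filler(draft))
```

Running the check on each AI draft before you start rewriting tells you quickly how much filler you are dealing with.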

Step 3: Print and read. Seriously, print the draft (or read it on a different screen than the one you wrote the prompt on). Reading in a different format helps you see the text with fresh eyes. Mark everything that sounds generic, vague, or wrong.

Step 4: Rewrite, do not edit. Instead of tweaking the AI's sentences, rewrite the sections that need it. Use the AI draft as a reference for structure and coverage, but let your own voice and the company's specific story come through.

Step 5: Verify every fact. Cross-check every number, every claim, every target mentioned in the AI draft against actual data. AI will sometimes fabricate plausible-sounding data points that do not exist.

Step 6: Read the final version as if you are the client. Would you feel confident presenting this to your board? Does it sound like it was written by someone who understands your business? If not, keep editing.

Never send AI output to a client without reading it yourself. Ever. This is not about being cautious: it is about professional standards. You are the expert. The AI is the tool. The client is paying for your judgment, not for what a language model can produce for free.

Where AI Adds Genuine Value

Despite the warnings, AI is genuinely useful in several parts of the ESG reporting process:

Breaking through writer's block. When you are staring at a blank page and do not know how to start a section, asking AI for a draft gives you something to react to. Even a mediocre draft is easier to improve than a blank page.

Generating alternative phrasings. When you have written a sentence that is technically correct but clumsy, AI can suggest cleaner versions. Use it as a thesaurus on steroids.

Summarizing source material. When you have a 50-page policy document and need to extract the key points for a two-paragraph summary, AI can do the first pass quickly.

Checking coverage. Ask AI "what topics should an Environmental section of a sustainability report cover for a cement manufacturer?" and compare its answer to your outline. It is a useful completeness check.

Formatting and structuring data. Converting raw data tables into formatted narrative paragraphs is something AI does well, as long as you verify the numbers in the output match the numbers in the input.
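That number-verification step can be partly automated. As a rough sketch (a hypothetical helper, not part of any reporting tool), pull every numeric token out of the source data and the AI draft, then flag figures in the draft that do not appear in the source:

```python
import re

def extract_numbers(text: str) -> set[str]:
    """Pull numeric tokens (1.2, 21,500, 11%) out of a text, normalized."""
    raw = re.findall(r"\d[\d,]*\.?\d*%?", text)
    return {n.replace(",", "").rstrip("%") for n in raw}

def unverified_numbers(source: str, draft: str) -> set[str]:
    """Numbers that appear in the draft but not in the source data."""
    return extract_numbers(draft) - extract_numbers(source)

source = "FY26 water withdrawal: 1.2 million KL (FY25: 1.35 million KL)."
draft = ("Water consumption fell 11% in FY26, from 1.35 million KL "
         "to 1.2 million KL.")
print(unverified_numbers(source, draft))
```

Here the 11% figure is flagged: it is a derived number (1.35 to 1.2 is roughly an 11% drop), so it is defensible, but a human still has to confirm the arithmetic rather than trust the model.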

Where AI Fails

Understanding company context. AI does not know your client. It does not know their culture, their industry dynamics, their competitive positioning, or their internal politics. Every piece of context you do not provide, AI fills with generic assumptions.

Making editorial judgments. Should this section lead with the positive data or acknowledge the challenge first? Should the tone be cautious or confident? These are judgment calls that require understanding the audience and the client's communication strategy. AI cannot make them for you.

Ensuring consistency across the report. AI writes one section at a time. It does not track what was said in other sections. Ensuring that data is consistent, that terminology is uniform, and that the theme carries through is your job.

As of 2026, AI is firmly established as a drafting tool in the ESG reporting workflow. The key differentiator going forward is not whether you use AI (almost everyone does), but how well you humanize the output. The consultants and companies that produce the best reports will be those who use AI for efficiency while maintaining human judgment, specificity, and editorial quality. The bar for what is considered "acceptable" is rising as readers become better at recognizing AI-generated content. The reports that stand out will be the ones that clearly had a human being thinking about every sentence, not just prompting a model and shipping the result.

The Bottom Line

AI is a tool. A very good tool. Use it without guilt: it saves hours on first drafts, it helps with structure, it gets words on the page when your brain is stuck. But treat every piece of AI output as a first draft that needs substantial human work before it is ready for anyone else to see.

The ESG reporting industry's credibility depends on reports that are specific, accurate, and thoughtful. AI can help you get there faster. It cannot get you there on its own.

Key Takeaways

  1. Treat every AI-generated draft as raw material - rewrite it, do not just lightly edit it, before it goes to anyone else
  2. Check AI output for five specific problems: lack of specificity, overly positive tone, factual inaccuracies, internal inconsistencies, and sentences that sound smart but say nothing
  3. Use detailed prompts that ban common AI filler words, require source-grounded claims, and explicitly forbid fabrication - generic prompts produce generic slop
  4. AI adds genuine value for breaking writer's block, generating alternative phrasings, summarizing source material, and checking topic coverage
  5. AI fails at understanding company context, making editorial judgment calls, and maintaining consistency across the full report - these remain human responsibilities

Knowledge Check

1. An AI-generated draft for a water management section reads: 'The company has made significant strides in its water management practices, implementing innovative solutions to optimize water consumption.' What is the core problem?

2. In the recommended workflow for using AI in ESG report writing, what should you do BEFORE prompting the AI to draft a section?

3. Why is AI slop in ESG reports described as a credibility problem, not just a quality problem?