Claim Validation Experiment
This post documents the process behind one research-based article. It describes a single methodological experiment applied to one draft. It is not presented as a general recommendation for writing.
The initial aim was straightforward: to write a blog post grounded explicitly in published research rather than personal reflection or secondary summaries.
Drafting with Assistance
AI tools supported the early stages of drafting. Through iterative prompts I asked them to suggest relevant literature, propose claims linked to sources, refine language, and help organise the structure of the article.
The resulting draft was coherent and well structured. The citations appeared plausible, and the claims were presented with references to specific papers. On initial reading the article appeared properly supported.
At that stage, however, none of the substantive claims had yet been verified directly against the original papers.
What Verification Revealed
When I began checking the draft against the primary sources, several issues became visible:
- Statements overstated what a study had demonstrated
- Incorrect results or sample sizes were reported
- The wrong paper was used to support a claim
- Details about a paper were fabricated
- In some cases, the cited paper did not exist
Asking the AI to review its own output did not resolve these problems: it would confidently supply excerpts that did not exist. Only when asked directly for the abstract of a paper would it sometimes acknowledge that it did not have access to it.
This led to a further question. I gained confidence in the article by checking every claim against the original sources and correcting errors. A reader, however, cannot see that process. In the same way that I may approach AI-generated content with caution, readers may reasonably approach my writing with similar caution. If verification increases the author’s confidence, what increases the reader’s?
First Attempt: Quotes and Footnotes
My first response was to increase visibility by exposing more of the source material directly. I experimented with inserting longer quotations from the cited papers. While this increased transparency, it disrupted the flow of the article. Academic phrasing does not always translate well into a general blog format, and repeated quotations made the text dense and fragmented.
Footnotes were the next attempt. They allowed supporting detail to sit outside the main argument. However, combined with numbered citations, this created parallel reference systems that were difficult to navigate. It remained unclear which precise sentence was supported by which evidence, and how closely the wording matched the source. Transparency improved slightly, but traceability did not.
Revising the article paragraph by paragraph while cross-checking sources made it difficult to isolate which specific statements were supported and which were not.
Verifying Claims
At this point, I began recording verification separately in YAML. I asked AI to generate an initial structure, but each excerpt still had to be checked manually: unless I supplied the verbatim text, the system frequently paraphrased or altered the wording.
After multiple revisions and corrections, the entries reflected the source material accurately.
A typical entry looked as follows:
```yaml
- claim_id: edmondson1999_psychological_safety_learning
  claim: >-
    Teams with higher psychological safety were more likely to engage
    in learning behaviour.
  sources:
    - citation_id: edmondson1999
      excerpt: >-
        Results of a study of 51 work teams in a manufacturing company,
        measuring antecedent, process, and outcome variables, show that
        team psychological safety is associated with learning behavior,
        but team efficacy is not, when controlling for team
        psychological safety.
```
Each claim in the final article was verified against primary sources.
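One practical benefit of keeping the claims in a structured format is that completeness can be checked mechanically. The sketch below illustrates this, assuming the YAML has already been parsed (for example with PyYAML) into a list of dictionaries matching the entry shown above; the `check_claims` helper is a hypothetical illustration, not part of any tool mentioned in this post.

```python
# Sanity-check parsed claim entries for missing fields.
# The field names (claim_id, claim, sources, citation_id, excerpt)
# follow the YAML entry shown above. The entries here are assumed
# to come from yaml.safe_load() on the claims file.

REQUIRED_CLAIM_KEYS = {"claim_id", "claim", "sources"}
REQUIRED_SOURCE_KEYS = {"citation_id", "excerpt"}

def check_claims(entries):
    """Return (claim_id, missing_keys) pairs for incomplete entries."""
    problems = []
    for entry in entries:
        missing = REQUIRED_CLAIM_KEYS - entry.keys()
        if missing:
            problems.append((entry.get("claim_id", "?"), missing))
            continue
        for source in entry["sources"]:
            missing = REQUIRED_SOURCE_KEYS - source.keys()
            if missing:
                problems.append((entry["claim_id"], missing))
    return problems

claims = [
    {
        "claim_id": "edmondson1999_psychological_safety_learning",
        "claim": (
            "Teams with higher psychological safety were more "
            "likely to engage in learning behaviour."
        ),
        "sources": [
            {
                "citation_id": "edmondson1999",
                "excerpt": (
                    "team psychological safety is associated "
                    "with learning behavior"
                ),
            },
        ],
    },
]

print(check_claims(claims))  # an empty list means every entry is complete
```

A check like this cannot verify an excerpt against the paper, of course; it only catches structural gaps, such as a claim recorded without any source.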
Access as a Constraint
Verification depended on access to full texts. Abstracts were sometimes sufficient, but often they did not provide enough detail to confirm what a study actually reported. Several potentially relevant papers were behind paywalls. Where I could not verify a claim directly from the full publication, I removed or replaced it.
The final article therefore reflects not only the original draft, but also what could be checked against accessible primary sources.
Making the Structure Visible
The verified claims are stored in my validated-writing GitHub repository. The published post now also includes an interface that, once the validation feature is enabled via the “Verify claims” button, lets readers view individual claims and inspect the associated source excerpts. The interface reads directly from the same claim structure used during verification.
The intention is not to present the article as definitive, but to make the evidence transparent.
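To illustrate how such a structure can feed a reader-facing interface, here is a minimal sketch that turns parsed claim entries into a lookup keyed by `claim_id`, which a front-end widget could fetch as JSON. The keyed-by-`claim_id` shape is an illustrative assumption, not a description of the repository's actual format.

```python
# Transform parsed claim entries into a claim_id-keyed lookup,
# serialised as JSON for a hypothetical front-end widget.
import json

# Parsed claim entries (the same shape as the YAML shown earlier).
claims = [
    {
        "claim_id": "edmondson1999_psychological_safety_learning",
        "claim": (
            "Teams with higher psychological safety were more "
            "likely to engage in learning behaviour."
        ),
        "sources": [
            {
                "citation_id": "edmondson1999",
                "excerpt": (
                    "team psychological safety is associated "
                    "with learning behavior"
                ),
            },
        ],
    },
]

# Key the entries by claim_id so a claim marker in the article can be
# resolved to its source excerpts in a single lookup.
lookup = {entry["claim_id"]: entry for entry in claims}

print(json.dumps(lookup, indent=2))
```

Because the verification record and the display layer share one structure, a correction made during verification propagates to the published article without a separate editing step.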
Conclusion
Drafting with AI assistance was relatively straightforward. Ensuring that each claim accurately reflected the cited research required substantially more work. Plausible citations and confident summaries were not reliable indicators of correctness.
Confidence in the article increased only once each substantive statement had been checked against the original publication and recorded at the level of individual claims. The YAML format then made it straightforward to transform the claims and visualise them in the article itself.
The workflow may become more efficient over time. However, this experiment helped me understand what grounding an article in research requires in practice: not only citing studies, but checking each substantive claim against the original publication and making that structure visible.