We see AI as a legitimate and increasingly common tool in the writing process. Many authors now use it in practical ways—testing clarity, improving structure, tightening prose, or working through early drafts. That, in itself, is not a problem.
What matters to us is not whether AI has been used, but how it has been used.
A book manuscript must reflect the author’s own thinking, judgment, and responsibility for the arguments it makes. AI cannot assess the validity of claims, the sufficiency of evidence, or the ethical and contextual implications of what is being written—particularly in high-stakes, policy-relevant, or research-driven fields. That accountability always rests with the author.
What AI Is Good For
AI can be a useful support tool at various stages of writing, particularly for clarity, structure, and refinement. Appropriate uses include:
- Clarifying language and simplifying dense or technical passages (e.g. “rephrase in plain English...”)
- Improving flow, structure, and transitions between sections
- Identifying repetition, inconsistencies, or unclear phrasing
- Testing whether an argument remains coherent when restated in different terms
- Helping organise ideas in early or exploratory drafts
- Assisting with outlines, section ordering, or chapter structure
- Acting as a sounding board when refining emphasis or tone
Used in these ways, AI supports the author’s thinking rather than replacing it.
Where AI Has Real Limitations
At its present level of maturity, AI has clear limitations and should not be relied upon without careful review. In particular, AI should not be used for:
- Generating original arguments, insights, or conclusions
- Assessing the accuracy, validity, or sufficiency of evidence
- Understanding domain-specific nuance, context, or professional judgment
- Evaluating ethical, legal, or policy implications
- Distinguishing between what sounds plausible and what is actually correct
- Taking responsibility for errors, omissions, or misleading claims
- Producing work that reflects lived experience, expertise, or accountability
- Suggesting or inserting citations or references (all references must be independently verified for accuracy and existence)
AI systems can still produce confident-sounding information that is incorrect, incomplete, or entirely fabricated—particularly when generating facts, examples, or citations. These fabrications, often called hallucinations, are not always obvious and may read as plausible to non-specialist readers. For this reason, any material produced or assisted by AI must be treated as a draft input, not as an authoritative source.
This is why we welcome thoughtful, transparent use of AI as part of the writing process—but expect the final work to remain clearly, demonstrably, and responsibly the author’s own.
Trust Between Author and Publisher Is Built Quickly but Lost Just as Fast
Lazy or careless use of AI undermines that trust almost immediately. This includes submitting work that contains obvious factual errors, invented or inaccurate citations, over-generalised claims, or passages that have not been properly reviewed by the author. When these issues appear, they signal not just a technical problem, but a breakdown in authorial responsibility.
From a publisher’s perspective, fake references or unchecked AI-generated material raise immediate concerns about reliability, diligence, and accountability. Even a small number of such errors can cast doubt over the rest of the manuscript, slow the editorial process, and damage confidence in the work as a whole.
We are comfortable working with authors who use AI thoughtfully and transparently. What we cannot work with is material that appears unverified, careless, or effectively outsourced. Once trust is eroded, it is difficult to rebuild—and that affects whether a project can proceed at all.
Careful use of AI protects not only the integrity of the work, but the author–publisher relationship itself.