Honest critique from LLM personas, despite the training.
A Python pipeline for alpha-reader feedback on a fiction manuscript, built to push past RLHF sycophancy. Anti-sycophancy bindings, two-pass design, multi-model dispatch, reflexive Opus self-critique. ~$8-25 per full novel.
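To make the first mechanism concrete, here is a minimal sketch of how universal anti-sycophancy bindings might be composed into a persona's system prompt. The binding text and persona fields are illustrative placeholders, not the cohort shipped with the toolkit.

```python
# Minimal sketch (not the repo's actual code): assembling a persona prompt from
# universal anti-sycophancy bindings plus a persona definition.
# All strings below are illustrative placeholders.

BINDINGS = [
    "You are not here to encourage the author; you are here to report your reading experience.",
    "If you would stop reading, say where and why.",
    "Praise only what you can quote and defend.",
]

PERSONA = {
    "name": "impatient genre reader",
    "stance": "Buys two novels a month, abandons half of them by chapter three.",
}

def build_system_prompt(persona: dict, bindings: list[str]) -> str:
    """Compose the system prompt: bindings first, persona framing second."""
    binding_block = "\n".join(f"- {b}" for b in bindings)
    return (
        f"You are an alpha reader: {persona['name']}.\n"
        f"{persona['stance']}\n\n"
        f"Non-negotiable ground rules:\n{binding_block}"
    )

if __name__ == "__main__":
    print(build_system_prompt(PERSONA, BINDINGS))
```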
Read the methodology first.
"v1 produced 7/7 keep-reading verdicts on a manuscript that needed serious rework. v0.4 rebuilds the cohort to push past that."
Read METHODOLOGY.md for the four mechanisms (anti-sycophancy bindings, two-pass design, reflexive self-critique, multi-model dispatch). Then the personas snapshot for the 9-persona cohort and the 10 universal anti-sycophancy bindings. Then run new-splinter.py on a test manuscript.
The Pipeline — scaffolder + orchestrator + analyzer
Three Python scripts plus a scaffolder that turns "I have a manuscript" into "ready to fire" in ~2 minutes.
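For a sense of what that scaffolding step does, here is a hedged sketch of a scaffolder laying down a per-project layout. The file and directory names are assumptions, not necessarily what new-splinter.py writes.

```python
# Sketch of the scaffolding step's shape (layout and file names are assumptions).
from pathlib import Path

SCAFFOLD_FILES = {
    "project.yaml": "title: Untitled\nmodels: []\n",
    "personas.md": "# Persona cohort\n",
    "manuscript/README.md": "Drop chapter files here.\n",
    "output/.gitkeep": "",
}

def scaffold(project_dir: str) -> None:
    """Create a per-project layout the orchestrator and analyzer can run against."""
    root = Path(project_dir)
    for rel_path, contents in SCAFFOLD_FILES.items():
        target = root / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        if not target.exists():  # never clobber an existing project
            target.write_text(contents, encoding="utf-8")

if __name__ == "__main__":
    scaffold("my-novel-feedback")
```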
Methodology — how the pipeline pushes past sycophancy
Four overlapping mechanisms. Each is partial alone; together they produce noticeably more honest output.
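As a rough illustration of two of them, here is a hedged sketch of a two-pass call followed by a reflexive self-critique pass, written against the Anthropic Python SDK. The prompts, model id, and wiring are assumptions; METHODOLOGY.md describes how the pipeline actually sequences these.

```python
# Hedged sketch of the two-pass design and reflexive self-critique.
# Prompts, model id, and token limit are assumptions; only the
# messages.create call itself is the real SDK interface.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-opus-4-20250514"  # placeholder: use whichever Opus model you run

def ask(system: str, user: str) -> str:
    """Single call; returns the text of the model's reply."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=2000,
        system=system,
        messages=[{"role": "user", "content": user}],
    )
    return reply.content[0].text

def two_pass_with_self_critique(persona_prompt: str, manuscript: str) -> dict:
    # Pass 1: unstructured reading reaction, before any evaluative framing.
    first = ask(persona_prompt,
                "Read this and report your honest, moment-by-moment reaction:\n\n" + manuscript)
    # Pass 2: structured critique, pinned to what the persona already said.
    second = ask(persona_prompt,
                 "Earlier you reacted:\n" + first +
                 "\n\nNow give a structured critique. Do not soften anything you said above.")
    # Reflexive self-critique: the model audits its own feedback for sycophancy.
    audit = ask("You audit reader feedback for sycophancy and vagueness.",
                "Flag every unearned compliment or vague hedge in this critique:\n\n" + second)
    return {"first_pass": first, "critique": second, "self_critique": audit}
```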
Templates — per-project scaffolds
The scaffolder generates these for new projects. Reference copies are provided for existing projects or manual setup.
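If you are wiring templates in by hand, a substitution step along these lines is all that is involved. The template path and fields here are assumptions, not the shipped file names.

```python
# Sketch of applying a reference template manually (template name and fields
# are assumptions; the scaffolder automates the same substitution for new projects).
from pathlib import Path
from string import Template

def apply_template(template_path: str, output_path: str, **fields: str) -> None:
    """Fill a $placeholder-style template and write it into an existing project."""
    text = Template(Path(template_path).read_text(encoding="utf-8"))
    Path(output_path).write_text(text.safe_substitute(fields), encoding="utf-8")

# Example: adapt a reference template for an existing manuscript directory.
# apply_template("templates/project.yaml", "my-novel/project.yaml", title="My Novel")
```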
Send a quick note.
Used the toolkit on your own manuscript? Hit a problem with the pipeline? Have findings on calibration vs. human alpha readers? This form goes straight to the maintainer.
If you have a GitHub account, opening an issue is preferred. This form is the path for everyone else.