The newest form of “AI cheating” does not look like copy and paste. In fact, most assessment applications already block pasting to prevent the use of AI.
But here is what users do: a student reads a ChatGPT answer on a phone or second screen, then retypes it into the exam box. No paste. No clipboard. No obvious plagiarism trail.
The words may be new, but the user’s typing behavior gives the cheating away.
We built a new signal that focuses on that gap. Using typing biometrics, it estimates whether text was composed naturally or retyped from another source.
How does it work?
It is simple in concept: we look at typing behavior, not the words. When someone writes naturally, their rhythm changes as they think, revise, and correct themselves. When someone retypes from another screen, the flow often looks more like transcription. Our model turns those behavioral patterns into a score and a label (Copy, Unsure, Not copy), so you can use it as an additional integrity signal without analyzing the content.
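As a minimal sketch of the idea (not the production model), a score and label could be derived from keystroke-timing features such as rhythm variability and correction rate. The feature set, weights, and thresholds below are all hypothetical:

```python
from statistics import mean, pstdev

def score_session(intervals_ms, backspace_count, total_keys):
    """Toy scoring sketch: a steady rhythm with few corrections looks
    more like transcription. All weights and cutoffs are illustrative."""
    rhythm_cv = pstdev(intervals_ms) / mean(intervals_ms)  # variability of inter-key timing
    correction_rate = backspace_count / total_keys
    # Low variability and few corrections push the score toward "Copy"
    score = max(0.0, min(1.0, 1.0 - rhythm_cv - 2 * correction_rate))
    if score > 0.7:
        label = "Copy"
    elif score > 0.4:
        label = "Unsure"
    else:
        label = "Not copy"
    return score, label
```

A real model would learn such weights from labeled sessions rather than hard-coding them.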
We trained and evaluated the model on thousands of sessions that included both natural writing and retyping; it distinguishes the two with over 90% accuracy in its best configuration.
Why this matters beyond paste blocking and content analysis
Copy and paste can be blocked. But if a student reads an answer on a phone and retypes it into the exam box, paste blocking does nothing. Content analysis can still help, but it is probabilistic, often misclassifies, and can be evaded by paraphrasing, short answers, or careful edits.
Retype Detection sits on top of those layers. It does not judge what the text says. It looks at how the text was produced, and adds a behavioral signal that can bring more clarity when the words alone are not enough.
Retyping is different from writing
When people write genuinely, they are composing. They pause to think. They revise. They change pace mid-sentence. They correct themselves.
When people retype, they are transcribing. They are not generating the content, they are copying it through the keyboard. That shift can show up in patterns like rhythm consistency, speed changes, pauses, and corrections.
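To make the patterns concrete, signals like these could be pulled from raw keystroke events roughly as follows; the event format (parallel lists of timestamps and key names) and the pause cutoff are assumptions for the sketch:

```python
def extract_features(timestamps_ms, keys):
    """Derive the behavioral signals mentioned above from raw keystrokes.
    The 2-second "thinking pause" cutoff is illustrative."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return {
        "long_pauses": sum(1 for i in intervals if i > 2000),   # pauses to think
        "corrections": sum(1 for k in keys if k == "Backspace"), # self-corrections
        "avg_interval_ms": sum(intervals) / len(intervals),      # overall pace
    }
```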
This is not about judging the text. It is about how the text was produced.
Why this helps with AI assisted cheating
This is not a generic AI content detector. It does not try to guess whether text “sounds like AI.”
Instead, it detects a common behavior pattern that often happens when AI is involved: retyping an answer from another screen to avoid paste detection.
That makes it useful in:
- course exams and quizzes with typed responses
- online assignments with short answer fields
- certification and training tests where integrity matters
It can also help outside education, anywhere typed input has value and incentives exist to transcribe rather than compose.
Accuracy
Accuracy improves with more typed text, and improves again when a short baseline is available. To collect a baseline, ask the student or user to type a short, fixed text shown in a box. Because they can only transcribe what they see, this yields a clean sample of their transcription behavior.
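One hypothetical way to use such a baseline: compare the rhythm variability of the free-writing session against the user’s own pure-transcription sample. The ratio and cutoff here are assumptions for illustration only:

```python
def compare_to_baseline(session_cv, baseline_cv):
    """If free writing is barely more variable than the user's own
    transcription baseline, that is consistent with retyping.
    The 1.3 cutoff is illustrative, not the real model's."""
    ratio = session_cv / baseline_cv
    return "retype-like" if ratio < 1.3 else "compose-like"
```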
As the table below shows, the solution exceeds 90% accuracy on essay-length text when a baseline is available.
| Setup | Accuracy |
|---|---|
| Short text (200 chars) | 79% |
| Short + baseline | 82% |
| Long text (600 chars) | 86% |
| Long + baseline | 91% |
How to use it responsibly
- Use this as an integrity signal, not a punishment engine. Treat a predicted “copy” as a trigger for a review, a second prompt, or a step-up check.
- Require enough text to make the signal meaningful (400+ characters works best).
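The guidelines above can be sketched as a small handler that maps the model’s label to a workflow action rather than a penalty; the function and action names are hypothetical:

```python
MIN_CHARS = 400  # below this, the signal is too weak to act on

def handle_result(label, text_length):
    """Route the model's label to a follow-up step, never a direct sanction."""
    if text_length < MIN_CHARS:
        return "insufficient-text"
    if label == "Copy":
        return "flag-for-human-review"   # step-up check, not punishment
    if label == "Unsure":
        return "request-additional-sample"
    return "no-action"
```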
Private preview
Retype Detection is currently in private preview for select organizations.
If you’re interested, please contact us.