Jonathan Haas

Forget Perfect Data: Building a Usable Voice Profile Extractor

June 26, 2025 · 3 min read


#ai #voice-ai #personality-replication #data-science

I shipped a voice profile extractor at 60% accuracy. It now runs at 80% and powers all AI-generated content on this blog. The entire system took a weekend to build because I ignored the obvious approach.

Pattern matching outperformed machine learning for this problem. The reason is structural, not accidental.

Why ML Fails Here

Voice replication is not a classification problem. You do not need to predict whether text belongs to a category. You need to extract measurable style dimensions and reproduce them.

ML models generalize. Voice replication needs the opposite: capturing idiosyncratic patterns specific to one person. An embedding that places my writing near "informal technical blog" in latent space tells you nothing useful. A regex that catches "Here's the thing most people miss" tells you everything.

The system tracks: contraction frequency, sentence length distribution, rhetorical question density, active voice ratio, and signature phrases. No BERT. No transformers.
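A minimal sketch of what that counting looks like. The function name, regexes, and the choice of mean-only sentence length are illustrative assumptions, not the actual extractor:

```typescript
// Sketch of regex-and-counting style metrics (illustrative, not the
// post's real implementation).
const CONTRACTIONS = /\b\w+'(?:s|t|re|ve|ll|d|m)\b/g;

function styleMetrics(text: string) {
  const sentences = text.split(/(?<=[.!?])\s+/).filter(s => s.length > 0);
  const words = text.split(/\s+/).filter(Boolean);
  const contractions = text.match(CONTRACTIONS) ?? [];
  const questions = sentences.filter(s => s.trim().endsWith("?"));
  const lengths = sentences.map(s => s.split(/\s+/).length);
  const meanLength = lengths.reduce((a, b) => a + b, 0) / lengths.length;

  return {
    contractionRate: contractions.length / words.length,  // contraction frequency
    meanSentenceLength: meanLength,                       // sentence length (mean only here)
    questionDensity: questions.length / sentences.length, // rhetorical question density
  };
}
```

Active voice ratio and signature-phrase matching follow the same shape: a pattern, a count, a ratio against the corpus.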

Domain Markers Over Generic Features

Generic NLP voice analysis extracts "formality level" and "sentiment." These are useless for replication. They describe where text sits on universal axes. Replication requires the specific axes that make one voice different from another.

My extractor looks for: contrarian indicators, specific framework references, industry context markers, and experience-based framing patterns. These dimensions capture what makes a voice distinctive, not what makes it classifiable.
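In code, a domain marker is just a named bucket of patterns. The specific regexes below are invented examples; the real lists are corpus-specific:

```typescript
// Illustrative domain-marker patterns; the actual lists come from the corpus.
const MARKERS: Record<string, RegExp[]> = {
  contrarian: [/\bmost people (miss|get wrong)\b/i, /\bthis is backwards\b/i],
  frameworks: [/\bfirst[- ]principles\b/i, /\bbuild vs\.? buy\b/i],
  experience: [/\bI shipped\b/i, /\bin production\b/i],
};

// Count how many marker patterns fire across the corpus, per bucket.
function markerCounts(posts: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const [name, patterns] of Object.entries(MARKERS)) {
    counts[name] = posts.reduce(
      (n, post) => n + patterns.filter(p => p.test(post)).length,
      0,
    );
  }
  return counts;
}
```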

The Architecture

// Sketch: the VoiceProfile type and the extract* method bodies are elided.
class VoiceProfileExtractor {
  extract(posts: string[]): VoiceProfile {
    return {
      tone: this.extractToneMarkers(posts),
      style: this.extractStylePatterns(posts),
      perspectives: this.extractBeliefs(posts),
      frameworks: this.extractFrameworks(posts),
      phrases: this.extractSignaturePhrases(posts),
    }
  }
}

Each method uses regex patterns and statistical counting against the corpus. The voice profile becomes a prompt engineering artifact injected at generation time -- not a model artifact requiring training.

Context injection at generation time is orders of magnitude faster than model training. Recent examples, framework references, domain knowledge, and signature phrases go into the prompt. The model handles the rest.
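Injection itself is string assembly. A minimal sketch, assuming a few profile fields; the template wording is invented, not the post's actual prompt:

```typescript
// Slice of the extracted profile relevant to prompting (illustrative shape).
type ProfileSlice = { tone: string[]; frameworks: string[]; phrases: string[] };

// Assemble a generation prompt from the profile -- no training involved,
// just text injected at generation time.
function buildPrompt(profile: ProfileSlice, topic: string): string {
  return [
    `Write a blog post about: ${topic}`,
    `Tone markers to match: ${profile.tone.join(", ")}`,
    `Frameworks to reference where relevant: ${profile.frameworks.join(", ")}`,
    `Signature phrases (use sparingly): ${profile.phrases.join("; ")}`,
  ].join("\n");
}
```

Updating the voice means re-running extraction and rebuilding this string, which is why iteration takes minutes instead of training runs.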

The 60% to 80% Gap

The 20-point improvement came entirely from production usage. User feedback identified which dimensions mattered most. Contractions and paragraph length had outsized impact -- roughly 3x the weight of other features. Weight tuning was iterative, driven by whether output "sounded right" to readers, not by any offline metric.

You cannot get these improvements in development. The feedback loop requires real output being read by real people.
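The weighting can be sketched as a weighted similarity across dimensions. The 3x weight on contractions and paragraph length mirrors the ratio above; everything else here (dimension names, normalization) is an assumption:

```typescript
// Illustrative weights: contractions and paragraph length at roughly 3x
// the other dimensions, per the tuning described above.
const WEIGHTS: Record<string, number> = {
  contractionRate: 3,
  paragraphLength: 3,
  questionDensity: 1,
  activeVoiceRatio: 1,
};

// Weighted similarity between a draft's metrics and the target profile.
// Assumes each dimension is normalized to [0, 1] before comparison.
function voiceScore(
  draft: Record<string, number>,
  target: Record<string, number>,
): number {
  let score = 0;
  let total = 0;
  for (const [dim, w] of Object.entries(WEIGHTS)) {
    const diff = Math.abs((draft[dim] ?? 0) - (target[dim] ?? 0));
    score += w * Math.max(0, 1 - diff); // 1 = perfect match on this dimension
    total += w;
  }
  return score / total; // 0..1
}
```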

Build Order

Most people build extraction first, then figure out validation. This is backwards. Build the validation system first: define what "sounds right" means quantitatively, create scoring rubrics per dimension, test manually on 20 examples. Then build extraction to hit those targets.

The scoring system penalizes academic language, hedge words, missing contractions, and generic advice without specific framing. These penalties matter more than matching exact phrases.
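The penalty side of the rubric can be sketched as pattern deductions from a perfect score. The word lists and penalty sizes are examples, not the actual rubric:

```typescript
// Example penalty rubric: each matching pattern deducts from a perfect 1.0.
// Word lists and weights are illustrative.
const PENALTIES: Array<[RegExp, number]> = [
  [/\b(utilize|leverage|facilitate)\b/i, 0.1],  // academic language
  [/\b(perhaps|arguably|somewhat)\b/i, 0.05],   // hedge words
  [/\b(it is|do not|cannot)\b/, 0.05],          // uncontracted forms
];

function penaltyScore(text: string): number {
  let score = 1;
  for (const [pattern, penalty] of PENALTIES) {
    if (pattern.test(text)) score -= penalty;
  }
  return Math.max(0, score);
}
```

Checking for generic advice without specific framing is harder to pattern-match and ends up in the manual review pass.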

The Honest Limitation

The system writes like me on a good day -- focused and articulate. That is more valuable than perfect replication for structured content. But it cannot capture tangents, evolving thinking, or the messiness that makes a real voice fully authentic. For content where the thinking process matters more than the conclusion, pattern matching is insufficient.
