My first attempt: the inference engine
Before I embraced skills, I tried something else: an inference engine.
It was basic but functional, the Ford Fiesta of reasoning systems. The idea was simple:
- User asks: “Who is Bart’s mom?”
- The system abstracts it to: “Who is *’s mom?”
- It recognises the pattern and applies the right rules.
This abstraction lets you give the LLM a tiny, laser‑focused prompt instead of a sprawling manifesto. Less ambiguity, better answers.
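To make that concrete, here's a minimal sketch of the idea in Python. The rule names and patterns are invented for illustration; the real engine had many more of them:

```python
import re

# Hypothetical pattern table: an abstracted question maps to the rule
# (and therefore the tiny, focused prompt) that handles it.
PATTERNS = [
    (re.compile(r"^who is (?P<person>.+?)['’]s (mother|mom|mum)\??$", re.IGNORECASE), "find_mother"),
    (re.compile(r"^who is (?P<person>.+?)['’]s (father|dad|pa)\??$", re.IGNORECASE), "find_father"),
]

def abstract(question: str):
    """Match a concrete question against the abstract patterns.

    Returns the rule name plus the captured entity, e.g.
    ('find_mother', 'Bart') for "Who is Bart's mom?".
    """
    for pattern, rule in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            return rule, match.group("person")
    return None, None

print(abstract("Who is Bart's mom?"))   # ('find_mother', 'Bart')
```

Once the question is reduced to a known pattern, the LLM only ever sees the narrow prompt belonging to that rule.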
Of course, this is where language gets in the way. Americans say “mom.” Brits say “mum.” Australians say “Mum” but with an accent that sounds like they’re apologising for it.
To avoid mismatches, I normalised everything: mom → mother, pa → father, and so on, so the engine could match patterns reliably. I’ll talk more about normalisation in the next post.
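As a taste, here's roughly what that looks like in code. The mapping is just a toy subset, not my full synonym table:

```python
# Toy normalisation table: collapse regional variants onto one canonical term
# before pattern matching, so "mom", "mum" and "mother" all hit the same rule.
SYNONYMS = {
    "mom": "mother",
    "mum": "mother",
    "ma": "mother",
    "dad": "father",
    "pa": "father",
}

def normalise(question: str) -> str:
    """Lower-case the question and swap known variants for their canonical form."""
    words = question.lower().rstrip("?").split()
    return " ".join(SYNONYMS.get(word, word) for word in words) + "?"

print(normalise("Who is Bart's mum?"))  # "who is bart's mother?"
```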
The inference engine worked, but it wasn’t the most elegant solution. Skills, on the other hand, gave me a cleaner, more scalable structure.
From inference to skills: the evolution
In genealogy, questions tend to fall into natural categories:
- Age‑related
- DNA‑related
- Relationship‑related
- Timeline‑related
- And so on…
So I split my prompts into semantically coherent chunks – “skills”.
My prototype currently has 23 skills, each one a tiny domain expert. The number will grow, but for now it’s a manageable set.
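Concretely, each skill is little more than a focused prompt plus some metadata about when it applies. A rough sketch of the shape, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """One tiny domain expert: a focused prompt plus hints about when to use it."""
    name: str
    description: str      # used later when deciding which skill fits a question
    prompt_template: str  # the small, laser-focused prompt

SKILLS = [
    Skill(
        name="age_at_event",
        description="Questions about how old a person was at a given event.",
        prompt_template="Given these birth and event dates, work out the person's age: {facts}",
    ),
    Skill(
        name="relationship_path",
        description="Questions about how two people in the tree are related.",
        prompt_template="Given these parent-child links, describe how {a} and {b} are related: {facts}",
    ),
    # ...and so on for the rest of the set
]
```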
The real challenge wasn’t writing the skills. It was selecting the right one.
