Generative AI for Genealogy – Part VII

The Three Actors in Every LLM Conversation

Every prompt sent to an LLM involves three voices:

  • USER – the human asking the question
  • ASSISTANT – the LLM’s reply
  • SYSTEM – the rules, instructions, and tool results we feed into the model

Observability lets you see all three clearly. Without it, you’re left guessing which part of the conversation the model misunderstood. And trust me, it will misunderstand something.
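
To make the three voices concrete, here is a minimal sketch of one conversation turn written as an OpenAI-style message list in Python. The "role"/"content" field names follow that common convention, and the genealogy question is made up for illustration; the app may well structure its prompts differently.

    # One conversation turn, with all three roles visible.
    # (OpenAI-style message list; field names and the example
    # question are illustrative, not taken from the app itself.)
    messages = [
        {"role": "system",
         "content": "You are a genealogy research assistant. Use the tool results below."},
        {"role": "user",
         "content": "Who were the parents of Jan de Vries, born 1832?"},
        {"role": "assistant",
         "content": "According to the baptism record, his parents were ..."},
    ]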

Despite being rough around the edges, this observability layer ticks a crucial box: it tells me what actually happened.

And that alone has saved me hours of debugging, confusion, and existential dread.
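
Concretely, the core of such a layer can be as small as logging every message that went to the model and the reply that came back. The sketch below is my own illustration in Python; the function name log_exchange and the JSON layout are assumptions, not the app's actual observability code.

    import datetime
    import json
    import logging

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("llm-observability")

    def log_exchange(messages, reply):
        """Record one full round-trip: everything sent, everything received."""
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "request": messages,   # system rules, user question, tool results (as sent)
            "response": reply,     # the assistant's answer, verbatim
        }
        log.info(json.dumps(record, ensure_ascii=False, indent=2))

Called after every round-trip, something like this leaves a plain JSON trail you can read back when the model inevitably misunderstands something.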

Fun Fact: LLMs Don’t Care About Your Needy Human Line Breaks

It took me far too long to accept this. I had this naïve belief that formatting mattered, that line breaks were sacred, that whitespace conveyed meaning.

LLMs disagree.

They treat your lovingly crafted formatting like a toddler treats a sandcastle: with curiosity, enthusiasm, and absolutely no respect.

But that’s a story for another part.

Coming Up Next

In the next chapter, we’ll walk through the full app flow – how everything fits together from the moment the user starts the app to the moment it decides to update.

It’s messy. It’s fascinating. And it’s the heart of the whole project.

See: Generative AI for Genealogy – Part VIII
