To Bring It All Together
For an LLM to behave, it needs rules — firm, clear, unambiguous rules — because without them it will happily wander off like a toddler in a supermarket. One moment it’s answering your genealogy question, the next it’s writing a haiku about turnips.
A simple example: We want it to always create an answer() method. If it didn’t, we’d be rummaging around in its output trying to guess which function to execute, like archaeologists deciphering ancient runes.
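That fixed entry point is the whole trick: the harness never has to guess. A minimal Python analogue of the idea (the post's actual harness compiles C#, so this is an illustrative sketch, not the real implementation) executes whatever the model produced and then calls the one name the contract guarantees:

```python
def run_generated(code: str) -> str:
    # Execute the model's code in a scratch namespace, then call the
    # agreed-upon entry point. If answer() is missing, the contract
    # was broken, so we fail loudly instead of guessing which
    # function to run.
    ns = {}
    exec(code, ns)
    if "answer" not in ns:
        raise ValueError("generated code did not define answer()")
    result = ns["answer"]()
    if not isinstance(result, str):
        raise TypeError("answer() must return a string")
    return result
```

The same shape works for the C# case: compile the snippet, reflect for a method named `answer`, invoke it, and reject anything that doesn't return a string.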
So we give it freedom within structure:
- It can call tools
- It can run code
- It can loop back and call more tools
- It can think, reason, and chain steps
…but when it’s done, it must present a clean:
FINAL ANSWER: {answer}
To enforce this, we give it a prompt like:
If a tool requires code execution, write a C# function named "answer()" that returns the answer as a string. All generated code must be C#. Some fields may be nullable; in particular, for dates such as date of death or birth, use "DateTime?".
You must only do this for tools that permit code. If you choose to generate code, it will be executed and output provided.
Use that output to either answer the user's question or to run additional code if needed.
When you have finished, answer with [FINAL ANSWER: {answer}]. DO NOT REPEAT OR INCLUDE CODE WITH YOUR FINAL ANSWER.
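The loop that sits around that prompt can be sketched in a few lines. This is a hedged Python outline of the tool-call/execute/repeat cycle described above, not the post's actual harness: `call_llm` and `run_csharp` are placeholder callables standing in for whatever model client and C# runner you have.

```python
import re

# Matches the terminator the prompt demands, with or without brackets.
FINAL = re.compile(r"\[?FINAL ANSWER:\s*(.*?)\]?\s*$", re.DOTALL)

def agent_loop(question, call_llm, run_csharp, max_turns=5):
    transcript = [question]
    for _ in range(max_turns):
        reply = call_llm("\n".join(transcript))
        match = FINAL.search(reply)
        if match:                      # the model signalled it is done
            return match.group(1).strip()
        # Otherwise assume the reply contains C# with an answer() method;
        # execute it and feed the output back for another turn.
        transcript.append(reply)
        transcript.append("TOOL OUTPUT: " + run_csharp(reply))
    raise RuntimeError("model never produced a FINAL ANSWER")
```

Capping the turns matters: a model that never emits the marker would otherwise loop forever, happily running code about turnips.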
If you read my LLMings post, you’ll remember the joy—and occasional despair—of getting an LLM to follow rules, execute code, and not immediately go rogue.
Sometimes it behaves beautifully. Sometimes it behaves like a cat: “I heard your instructions. I’m choosing not to follow them.”
Prompts matter. Tone matters. The phase of the moon probably matters too.
