Reference
Glossary
Key AI terms used throughout the playbook, defined in plain English.
- AI Agent
- Software that can take actions autonomously based on goals you set. Unlike a chatbot that only responds to prompts, an agent can plan steps, use tools, and work through a task without hand-holding at each stage.
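The plan-act-repeat loop described above can be sketched in a few lines. This is a toy illustration only: the tool functions and the hard-coded plan are stand-ins for what a real agent would do, which is ask an LLM to choose each step.

```python
# Toy sketch of an agent loop: work through a plan, calling a tool at
# each step and feeding each result into the next step.
# In a real agent, an LLM would pick the tool and argument dynamically.
def search(query):
    return f"results for {query}"

def summarise(text):
    return text[:20]  # stand-in for an LLM summarisation call

tools = {"search": search, "summarise": summarise}
plan = [("search", "AI adoption stats"), ("summarise", None)]

result = None
for tool_name, arg in plan:
    # None means "use the previous step's output as input"
    result = tools[tool_name](arg if arg is not None else result)
print(result)
```

The key difference from a chatbot is that loop: the agent carries its own state from step to step rather than waiting for you to prompt it each time.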
- Fine-tuning
- Training an existing AI model on your specific data to make it better at your particular tasks. Think of it as teaching a general-purpose tool to become a specialist. Useful when prompt engineering alone doesn't give you consistent enough results.
- Hallucination
- When AI generates plausible-sounding but incorrect information. The model produces text that reads as confident and coherent but is factually wrong. This is why verification matters — Chapter 5 covers how to build it in.
- Large Language Model (LLM)
- The type of AI behind tools like ChatGPT and Claude. Trained on vast amounts of text, LLMs learn patterns in language that let them generate, summarise, translate, and reason about text. They're the engine under the hood of most AI tools you'll use.
- PAST Framework
- Purpose, Audience, Scope, Tone — four elements that make or break AI projects. Defining all four up front dramatically improves your results. Covered in full in Chapter 2.
- Prompt Engineering
- The skill of crafting effective instructions for AI tools. A well-structured prompt gives the model context, a clear task, constraints, and the format you want back. Poor prompts produce poor results regardless of how capable the underlying model is.
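The four ingredients named above — context, task, constraints, format — can be laid out as a reusable template. The section labels and wording here are illustrative, not a required structure.

```python
# A minimal structured-prompt template covering context, task,
# constraints, and output format. The labels are one common convention,
# not a rule.
prompt_template = """Context: You are reviewing customer feedback for a SaaS product.

Task: Summarise the three most common complaints in the feedback below.

Constraints: Use only the feedback provided; do not speculate.

Format: A numbered list, one sentence per complaint.

Feedback:
{feedback}"""

prompt = prompt_template.format(
    feedback="The app is slow. Login keeps failing. Exports time out."
)
print(prompt)
```

Even this small amount of structure tends to produce more consistent output than a one-line request, because the model is told what to do, what not to do, and what shape the answer should take.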
- RAG (Retrieval-Augmented Generation)
- Connecting AI to your own data sources for more accurate results. Instead of relying only on what the model learned during training, RAG lets it pull in relevant documents, databases, or content at query time — reducing hallucination and keeping answers current.
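The retrieve-then-generate pattern can be sketched without any AI at all. In this toy version, word overlap stands in for the vector search a real RAG system would use, and the final prompt is what would be sent to the model.

```python
# Toy sketch of the RAG pattern: find the most relevant document for a
# query, then include it in the prompt. Word overlap here is a stand-in
# for real semantic (vector) search.
documents = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday, 9am to 5pm.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

query = "How long do refunds take?"
context = retrieve(query, documents)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Because the answer is grounded in a retrieved document rather than the model's training data, the model has less room to hallucinate and can reflect information added after it was trained.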
- SHAPE Framework
- Situation, Hypothesis, Action, Process, Evaluation — the implementation methodology at the core of this playbook. Where PAST helps you define what you're building, SHAPE guides how you build and test it. Covered in Chapter 3.
- Token
- The unit AI models use to process text — roughly three-quarters of a word. Models have a context window measured in tokens, which limits how much text they can consider at once. When you see pricing listed per thousand tokens, that's what's being counted.
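The "three-quarters of a word" rule of thumb above is enough for back-of-envelope cost estimates. The price used here is a placeholder, not a real rate for any model.

```python
# Rough cost estimate using the ~0.75 words-per-token rule of thumb.
# The per-thousand-token price below is hypothetical.
def estimate_tokens(word_count):
    """Approximate token count: one token is about 3/4 of a word."""
    return round(word_count / 0.75)

price_per_1k_tokens = 0.01  # placeholder rate in dollars

words = 1500                 # e.g. a six-page document
tokens = estimate_tokens(words)
cost = tokens / 1000 * price_per_1k_tokens
print(tokens, cost)
```

A 1,500-word document comes out at roughly 2,000 tokens, so at a per-thousand-token price you can estimate cost (and whether the text fits in a model's context window) before you ever send it.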