Module 3: AI Nodes in Practice
n8n has a dedicated AI section. You’ll find it under the “Advanced AI” category when you add a node. The main pieces are the AI Agent node, the Basic LLM Chain node, and a set of memory and tool nodes that plug into them.
This module focuses on three things AI is actually good at inside a workflow: classification, summarisation, and extraction. First, though, it covers the question you should ask before reaching for any of them.
Before You Add an AI Node
The question is: does this step require judgement, or does it require transformation?
Transformation — renaming fields, combining strings, filtering by a value, formatting a date, routing based on a known condition — doesn’t need AI. The Set node, the IF node, the Switch node, and the Code node all do this reliably, cheaply, and without latency. Adding AI to a step that could be a simple condition is slower, more expensive, and introduces a layer of unpredictability.
Judgement — understanding the intent behind a message, classifying something into a category that isn’t determined by a keyword, summarising a variable-length document, pulling structured data from unstructured text — is where AI earns its place.
Ask the question honestly before you wire in an AI node. If you can describe the rule precisely enough that you could write it as an IF condition, write the IF condition.
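To make that concrete: a rule you can state precisely belongs in ordinary code, not a prompt. A minimal sketch of what the IF/Switch equivalent looks like (the keyword patterns are invented for the example):

```javascript
// The moral equivalent of an IF or Switch node: a rule this precise
// needs no model call. The keyword patterns are illustrative only.
function routeByRule(message) {
  if (/unsubscribe|lottery|crypto giveaway/i.test(message)) return "spam";
  if (/pricing|quote|invoice/i.test(message)) return "sales";
  return "support"; // default branch for everything else
}
```

If the rule can't be written this way without losing accuracy, that's the signal you're dealing with judgement, not transformation.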
Classification
The most common AI task in a workflow is classification: given this piece of text, which category does it belong to?
Examples:
- Is this contact form submission a sales enquiry, a support request, or spam?
- Is this RSS item relevant to my project or not?
- Is this log message an error, a warning, or informational?
The Basic LLM Chain node handles this well. Connect it after the node that produces the text you want to classify. Set up a system prompt like:
You are a classifier. Given the following message, return exactly one of these labels: sales, support, spam. Return only the label. No explanation.
Pass the text as the user message using an expression:
{{ $json.body.message }}
The key instruction is “return only the label”. LLMs will explain themselves if you let them. For classification inside a workflow you need a clean output you can route on, not a paragraph about what the model decided.
After the LLM Chain node, add a Switch node. Route based on the output: if the result is sales, go one way. If it’s support, go another. If it’s spam, end the workflow or log it quietly.
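Even with "return only the label", models occasionally add punctuation, casing, or a stray word. A small Code node between the LLM Chain and the Switch can normalise the output before routing. A sketch, assuming the label arrives in the `text` field of the incoming item:

```javascript
// Normalise the model's label so the Switch node matches it exactly.
// Anything off-list falls back to the quiet spam branch.
const ALLOWED = ["sales", "support", "spam"];

function normaliseLabel(raw) {
  const label = String(raw).trim().toLowerCase().replace(/[."'!]/g, "");
  return ALLOWED.includes(label) ? label : "spam";
}

// In the n8n Code node:
// return [{ json: { label: normaliseLabel($input.first().json.text) } }];
```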
Connecting to Claude
To use Claude, create an Anthropic credential in n8n settings. You’ll need an API key from console.anthropic.com. In the LLM Chain node, set the model to “Anthropic Chat Model” and select the Claude model you want. Haiku is fast and cheap for classification; use Sonnet for tasks that need more reasoning.
Connecting to GPT
Create an OpenAI credential. Same pattern — set the model to “OpenAI Chat Model” in the node configuration.
Connecting to a local model
If you’re running Ollama locally, n8n has an Ollama integration. Create an Ollama credential pointing to http://localhost:11434. Set the model to whatever you’ve pulled — llama3, mistral, qwen2.5-coder. Local models are slower than hosted APIs but cost nothing per call and keep your data local.
I use local classification for content routing where the data is sensitive. The classification request never leaves the machine.
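Under the hood this is just an HTTP call to the local server. If you ever want to bypass the node and call Ollama directly, a sketch using Ollama’s /api/generate endpoint (the model name is an example; adjust to whatever you’ve pulled):

```javascript
// Build the request body for Ollama's /api/generate endpoint.
// stream: false asks for one JSON response instead of a token stream.
const OLLAMA_URL = "http://localhost:11434/api/generate";

function buildClassifyRequest(text) {
  return {
    model: "llama3",
    prompt:
      "You are a classifier. Given the following message, return exactly one of these labels: sales, support, spam. Return only the label. No explanation.\n\n" +
      text,
    stream: false,
  };
}

// With a running Ollama server:
// const res = await fetch(OLLAMA_URL, {
//   method: "POST",
//   body: JSON.stringify(buildClassifyRequest(message)),
// });
// const label = (await res.json()).response.trim();
```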
Summarisation
Summarisation inside a workflow usually looks like: I have a long document, I want a short version before I route or store it.
Same setup as classification. Basic LLM Chain node. System prompt:
Summarise the following in three sentences. Be specific. No filler.
The user message is the document content. The output is the summary, which you can store, send as a notification, or use as input to the next step.
Where this is useful: I pull RSS items from several feeds as part of a daily newsletter aggregation workflow. Before routing to the “relevant” or “skip” branch, I have a summarisation step that condenses each item to two sentences. The classification then runs on the summary, not the full article. This is faster and cheaper, and the summary captures enough signal for the classifier to work accurately.
Extraction
Extraction is: given unstructured text, pull out specific fields.
Example: a contact form allows free-text input. You want to extract the sender’s company name, the product they’re asking about, and whether they mentioned a deadline.
System prompt approach:
Extract the following fields from the message and return them as JSON:
- company_name (string or null)
- product_mentioned (string or null)
- has_deadline (boolean)
Return only the JSON object. No explanation.
Tell the model to return JSON and nothing else. After the LLM Chain node, add a Code node with a JSON.parse call to convert the output string to an object:
// Parse the model's output string into an object for downstream nodes.
// JSON.parse throws if the model wrapped the JSON in explanation text.
const text = $input.first().json.text;
return [{ json: JSON.parse(text) }];
Then use the extracted fields in downstream nodes — write them to a database, route based on has_deadline, include them in a notification.
Extraction is less reliable than classification because the output structure depends on the model following your format instructions. Test with representative examples. If the model keeps including explanation text, add “Return ONLY the JSON object with no surrounding text” to the prompt. If it keeps using slightly different field names, enumerate the exact keys you expect.
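One way to harden this further, sketched as a hypothetical helper run after the JSON.parse step, is to coerce each expected key to its declared type so a malformed response degrades to safe defaults instead of breaking downstream nodes:

```javascript
// Coerce the parsed object to the exact schema the prompt asked for.
// Missing or mistyped fields fall back to null / false.
function validateExtraction(obj) {
  return {
    company_name: typeof obj.company_name === "string" ? obj.company_name : null,
    product_mentioned: typeof obj.product_mentioned === "string" ? obj.product_mentioned : null,
    has_deadline: obj.has_deadline === true,
  };
}
```

With this in place, a route on has_deadline can never receive anything but a real boolean.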
Putting It Together
A realistic AI-augmented workflow:
- Webhook trigger receives a contact form submission
- Basic LLM Chain classifies the intent (sales / support / spam)
- Switch routes based on classification
- For sales: LLM Chain extracts company name and urgency signals, then Telegram node sends a formatted notification with those fields
- For support: LLM Chain summarises the issue, then it gets written to a database with the summary attached
- For spam: workflow ends silently
That’s six nodes. The two AI steps add judgement at the points where judgement is actually needed. Everything else — routing, formatting, storing, notifying — is handled by standard n8n nodes that are fast, cheap, and predictable.
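Stripped of node wiring, the whole flow reduces to a few lines of control flow. The helper names here (classify, extract, summarise, notify, store) are illustrative stand-ins for the LLM Chain, Telegram, and database nodes:

```javascript
// The six-node workflow as plain control flow. Each helper stands in
// for a node; all names are illustrative.
function handleSubmission(form, { classify, extract, summarise, notify, store }) {
  const intent = classify(form.message);
  if (intent === "sales") {
    notify(extract(form.message)); // company name, urgency signals
  } else if (intent === "support") {
    store({ ...form, summary: summarise(form.message) });
  }
  // spam: no branch — the workflow ends silently
  return intent;
}
```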
The next module shows the complete workflows in production.
Check Your Understanding
Answer all questions correctly to complete this module.
1. What question should you ask before adding an AI node?
2. Why is 'return only the label' critical for AI classification?
3. When should you use a local Ollama model in n8n?