⬅️ Previous - Agentic Authoring Assistant
➡️ Next - Designing Right Agents
In this lesson, we move from architecture to implementation by building the tag extraction component of our Agentic Authoring Assistant. You’ll explore a real-world LangGraph system that combines three different extraction methods and uses an LLM to select the best tags. Along the way, we unpack the design choices, patterns, and tradeoffs that shaped the system — and reflect on where agents might fit in.
In the last lesson, we introduced our Week 6 project — an Agentic Authoring Assistant to help generate titles, tags, summaries, and references for AI publications. As a learning exercise, we scoped down to one essential subsystem: tag extraction.
We described the design challenge, outlined the system’s requirements, and encouraged you to think through your own solution.
If you haven’t seen that lesson yet — or want a refresher on the architectural context — we strongly recommend you check it out first. It sets the stage for everything we’re about to build.
Let’s start with a quick overview of the system we actually built.
The tag extractor is implemented as a LangGraph, with five key nodes:
A Start node that fans out to three parallel extraction methods.
An Aggregation node that collects the results and uses an LLM to select the top n final tags.
An End node to close out the workflow.
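To make the shape concrete, here is a minimal, dependency-free sketch of the same fan-out/aggregate flow. The three extractor stubs and all return values are placeholders invented for illustration (the repo defines the real methods and expresses the flow as LangGraph nodes and edges, and the real aggregation step calls an LLM rather than deduplicating):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the three parallel extraction nodes.
def extract_ner(text):       # e.g. a spaCy NER pass
    return ["LangGraph"]

def extract_keywords(text):  # placeholder for another method
    return ["tag extraction"]

def extract_llm(text):       # e.g. an LLM prompt
    return ["agentic systems", "LangGraph"]

def aggregate(candidates, n=3):
    # The real system asks an LLM to select the top-n tags;
    # here we simply deduplicate while preserving order.
    seen, final = set(), []
    for tag in candidates:
        if tag not in seen:
            seen.add(tag)
            final.append(tag)
    return final[:n]

def run_pipeline(text):
    extractors = [extract_ner, extract_keywords, extract_llm]
    with ThreadPoolExecutor() as pool:   # Start node: fan out in parallel
        results = pool.map(lambda f: f(text), extractors)
    candidates = [tag for r in results for tag in r]
    return aggregate(candidates)         # Aggregation node, then End

print(run_pipeline("some publication text"))
# → ['LangGraph', 'tag extraction', 'agentic systems']
```

The key design point survives the simplification: the extractors are independent, so they can run in parallel and be swapped out without touching the aggregation logic.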
It’s a small graph — but it surfaces a surprising number of interesting design decisions.
The full code repository is linked with this lesson — and if you'd like a guided walkthrough, you’ll find a video explanation near the bottom of the page.
It covers the key components, node logic, and design decisions behind the system. Whether you prefer reading code or watching it in action, we’ve got you covered.
Now that you've seen it from above, let’s zoom in — not with code, but with answers to the kinds of questions a thoughtful system designer would ask.
We’ve grouped these questions into four themes.
Each one explores a different aspect of how (and why) we built the system the way we did. If you're building your own version or just curious how agentic thinking shows up in practice — this is where it gets interesting.
Let’s dive in.
How many agents does this system use? Zero. That’s right: no agents here. And that’s okay. This particular task didn’t require multi-turn decision-making, autonomy, or dynamic behavior. Everything was deterministic and direct.
That said, we do have two LLM-powered steps — one for extraction and one for aggregation — both of which involve reasoning. But these are simple function calls, not autonomous agents.
Still, our results do show some performance gaps — especially in tag quality and classification accuracy. So while we didn’t use any agents in this version, it might make sense to introduce a reflection or verification agent in the future — one that reviews extracted tags, revises weak ones, or flags possible misses.
This isn’t a failure of agentic thinking — it’s an example of it. We start simple, test assumptions, and introduce agents only when the problem demands it.
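A future verification step might look something like the sketch below. The blocklist, thresholds, and function name are all invented for illustration; a real reflection agent would use an LLM to judge tag quality rather than hard-coded rules:

```python
# Illustrative blocklist of tags too generic to be useful.
GENERIC_TAGS = {"ai", "technology", "data"}

def review_tags(tags, max_words=4):
    """Hypothetical verification pass: keep plausible tags, flag weak ones.

    Flagged tags would be handed to a reflection agent for revision
    or dropped; kept tags pass through unchanged.
    """
    kept, flagged = [], []
    for tag in tags:
        if tag.lower() in GENERIC_TAGS or len(tag.split()) > max_words:
            flagged.append(tag)   # candidate for revision
        else:
            kept.append(tag)
    return kept, flagged

kept, flagged = review_tags(["LangGraph", "AI", "a very long multi word tag phrase"])
# kept → ['LangGraph']; flagged → ['AI', 'a very long multi word tag phrase']
```

Because the system is a graph, this would slot in as one more node between aggregation and the end, without changing anything upstream.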
How many LLM calls does it make? Two: one for extraction and one for aggregation. That’s it.
Is there reasoning involved? It’s subtle. The LLM-based extractor does need to understand what makes a good tag, and that involves some reasoning. The aggregation step also balances clarity, coverage, and specificity.
But we’re not using explicit chain-of-thought or reflection techniques yet.
Is the system easy to extend? Yes, and that’s one of its strengths. Want to try a different NER model? Just update the spaCy node. Want to tweak the prompt? Change the config. Want to add a verification step? Add a node.
This is one of the advantages of graph-based design: it’s composable.
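One way that composability plays out in code: node behavior can be driven by configuration, so swapping a model or a prompt is a one-line change. The registry, config keys, and return values below are illustrative, not the repo's actual config schema:

```python
# Illustrative extractor registry; real nodes would wrap spaCy, an LLM, etc.
def spacy_extractor(text, model="en_core_web_sm"):
    # Placeholder for: nlp = spacy.load(model); return [e.text for e in nlp(text).ents]
    return [f"entities via {model}"]

def llm_extractor(text, prompt="Extract 5 tags:"):
    # Placeholder for an actual LLM call using `prompt`.
    return [f"tags via prompt: {prompt!r}"]

REGISTRY = {"ner": spacy_extractor, "llm": llm_extractor}

def run_node(text, config):
    """Dispatch to an extractor based on config, not hard-coded logic."""
    extractor = REGISTRY[config["method"]]
    return extractor(text, **config["kwargs"])

# Swapping the NER model is just a config change:
config = {"method": "ner", "kwargs": {"model": "en_core_web_lg"}}
print(run_node("some text", config))
# → ['entities via en_core_web_lg']
```

The graph topology stays fixed while the behavior of each node varies with configuration, which is exactly what makes adding or replacing nodes cheap.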
What would it take to run this in production? A lot more. This is a learning scaffold, not a production stack, and that’s by design.
Want to take it further? This lesson is a sandbox: experiment with your own extensions.
Prefer to see it in action?
Check out the video walkthrough below — it covers the full implementation and key design decisions behind the system.
This lesson showed how we translated an abstract architecture into a working system — combining traditional logic, ML models, and LLM-powered reasoning inside a modular LangGraph.
No agents this time. But still very much agentic.
It’s a reminder that good system design means using the right tools for the job — and leaving space to evolve when the job gets more complex.
Next, we’ll explore how more sophisticated agentic systems handle collaboration, delegation, and decision-making — going beyond standalone components into dynamic, multi-agent workflows.