In this lesson, you’ll upgrade your LangGraph bot with a Writer–Critic loop powered by an LLM. One node writes a joke, another evaluates it, and only approved jokes are shown to the user — giving you your first taste of dynamic, agentic behavior.
A joke-writing loop with Writer and Critic agents—revealing what LLMs get right (and wrong) about being funny.
You’ll find the full code in the linked repo—but we strongly encourage you to first complete the lesson and try implementing it on your own. The graph structure is simple, and you’ll learn a lot more by building than by copying.
This lesson picks up where the last one left off — where we built a fun little joke bot using LangGraph and the pyjokes library. No LLMs, no agents — just clean graph logic, a stateful menu loop, and a few good laughs.
If you haven’t already, check it out — then come back here to make your bot smarter, funnier, and a little more agentic.
In this lesson, you’ll upgrade the joke bot you built previously into an agentic system using LLMs.
Specifically, you’ll create a writer–critic loop.
In your system, a Writer node will draft a joke with an LLM, a Critic node will judge it, and only approved jokes will reach the user.
You’ll implement this logic using LangGraph, building on the same menu structure, nodes, and routing patterns you already used.
This is your first step into building systems that reason, reflect, and revise — not just respond.
Let’s dive in.
Your new graph builds on the structure from the previous lesson — but this time, we’ve added two key agentic nodes: a Writer that generates a joke using an LLM, and a Critic that decides whether it’s good enough to show the user.
Here’s how the flow works:
💡 If you haven’t already, take a quick look at the previous lesson’s graph
It was a simple, no-LLM loop powered by pyjokes. Comparing the two will help you appreciate how LangGraph supports more dynamic, adaptive behavior with just a few structural tweaks.
The fetch_joke path now routes to the Writer, not directly to a joke function.
The Writer generates a joke and passes it to the Critic.
The Critic either approves the joke and routes it to show_final_joke, or rejects it and sends the flow back to the Writer for another attempt.
After a successful joke is shown, we loop back to the menu — just like before.
This new graph introduces the idea of internal feedback loops, where agentic components assess and refine their own output before surfacing it to the user.
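In code, the Critic's decision is just a small routing function over the shared graph state. Here is a minimal sketch (the field and node names are illustrative, and the retry cap comes later):

```python
def route_after_critic(state: dict) -> str:
    """Decide which node runs next based on the Critic's verdict.
    (Field and node names are illustrative; adjust to your graph.)"""
    if state["approved"]:
        return "show_final_joke"   # good enough: surface it to the user
    return "writer"                # rejected: draft a new joke
```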
Let’s break down how you’ll build this next.
Here's what you'll need to implement to complete the writer–critic loop:
Extend the state class from the previous lesson to track things like the latest joke, approval status, and retry count.
Create two new nodes: a Writer that generates a joke with the LLM, and a Critic that decides whether to approve or reject it.
Route the logic so rejected jokes go back to the Writer, and only approved ones reach the user.
Limit retries to avoid infinite loops — think 5 tries max.
Reset the evaluation state after each successful joke (or when the user changes the category).
Use the prompt builder utility you built earlier to generate both Writer and Critic prompts from YAML config.
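To make the checklist concrete, here is a minimal, self-contained sketch with stub nodes standing in for the real LLM calls. All names here (JokeState, the field names, the retry cap of 5) are assumptions for illustration, not the reference implementation; in your bot the menu routes into the Writer and the final node loops back to the menu.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

MAX_RETRIES = 5  # assumed cap; tune as you like

class JokeState(TypedDict):
    category: str   # carried over from the previous lesson
    joke: str       # latest draft from the Writer
    approved: bool  # Critic's verdict on that draft
    retries: int    # attempts made for the current request

# Stub nodes so the sketch runs on its own; in your bot the Writer and
# Critic call an LLM using prompts from your prompt_builder / YAML config.
def writer(state: JokeState) -> dict:
    joke = f"A placeholder {state['category']} joke, attempt {state['retries'] + 1}."
    return {"joke": joke, "retries": state["retries"] + 1}

def critic(state: JokeState) -> dict:
    return {"approved": state["retries"] >= 2}  # placeholder: approve the second try

def show_final_joke(state: JokeState) -> dict:
    print(state["joke"])
    # Reset the evaluation fields so the next request starts clean.
    return {"joke": "", "approved": False, "retries": 0}

def route_after_critic(state: JokeState) -> str:
    if state["approved"] or state["retries"] >= MAX_RETRIES:
        return "show_final_joke"   # good enough, or out of patience
    return "writer"                # rejected: write another one

builder = StateGraph(JokeState)
builder.add_node("writer", writer)
builder.add_node("critic", critic)
builder.add_node("show_final_joke", show_final_joke)

builder.add_edge(START, "writer")          # in your bot, the menu routes here instead
builder.add_edge("writer", "critic")
builder.add_conditional_edges("critic", route_after_critic)
builder.add_edge("show_final_joke", END)   # in your bot, loop back to the menu

graph = builder.compile()
graph.invoke({"category": "programming", "joke": "", "approved": False, "retries": 0})
```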
This is your first real agentic loop — dynamic, self-evaluating, and designed to run until something “good enough” emerges.
You’ll be surprised how much power you get by adding just two nodes.
Ready to build? Go for it!
A reference implementation is available for this lesson, complete with working code.
But don’t jump into the repo just yet.
You’ll gain much more by following the instructions above and building the agentic loop yourself.
Once you've got it working (or hit a wall), come back and compare it with the official version.
Reuse your building blocks: You already have a working LangGraph with menu routing, category switching, and state management — don’t start from scratch. Extend what you’ve built.
Don’t overthink the Critic: It doesn’t need to give feedback — it just needs to say “yes” or “no.” Keep it simple. A short prompt with clear evaluation criteria is enough.
Limit retries: Add a retry counter in your state. If the joke gets rejected too many times (say, 5), just show the latest attempt and move on. Don’t let the bot spiral into infinite rewriting.
Modular prompts help: Use your prompt_builder function with the YAML config. This keeps your Writer and Critic prompts clean, flexible, and easy to tweak.
This isn’t about perfect jokes: It’s about agentic behavior — generate, evaluate, revise. If your bot makes you groan instead of laugh, that’s still a win.
Since jokes are short (1–2 lines), we don’t try to revise or improve them. If the Critic rejects a joke, we simply generate a new one. This makes the loop simpler — it’s just a pass/fail check, not a feedback-driven revision cycle.
📌 In a future lesson, you’ll explore the Reflection Agent pattern, where the critic does provide feedback — enabling agents to revise, reflect, and improve over time.
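As one hedged example of a pass/fail Critic, here is a sketch using langchain_openai's ChatOpenAI; the model name and prompt wording are placeholders, and in your bot the prompt text would come from your YAML config via prompt_builder:

```python
from langchain_openai import ChatOpenAI  # assumption: any chat model client works here

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model choice

def critic(state: dict) -> dict:
    """Ask for a bare yes/no verdict on the current joke: no feedback, just pass/fail."""
    prompt = (
        "You are a strict comedy critic. Answer with exactly 'yes' if the joke "
        "below is coherent, inoffensive, and at least mildly funny; otherwise 'no'.\n\n"
        f"Joke: {state['joke']}"
    )
    verdict = llm.invoke(prompt).content.strip().lower()
    # The retry counter is incremented wherever your Writer runs; here we only record the verdict.
    return {"approved": verdict.startswith("yes")}
```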
Ready to go beyond the basics? Here are some fun challenges that push your implementation a bit further:
You may notice jokes occasionally repeat. That’s because the Writer (LLM) is stateless — it doesn’t remember past jokes.
Try implementing a simple mechanism to avoid repetition; one possible approach is sketched after the note below.
⚠️ Don’t rely on exact string matches — you’ll need semantic similarity (think embeddings or fuzzy matching).
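One lightweight starting point (an illustrative approach, not the reference solution) is to keep recent jokes in state and run a fuzzy comparison, upgrading to embedding similarity if you want it to be truly semantic:

```python
from difflib import SequenceMatcher

def is_repeat(new_joke: str, past_jokes: list[str], threshold: float = 0.85) -> bool:
    """Cheap fuzzy check: near-identical wording counts as a repeat.
    For real semantic similarity, compare embeddings instead."""
    return any(
        SequenceMatcher(None, new_joke.lower(), old.lower()).ratio() >= threshold
        for old in past_jokes
    )

# Usage idea: store past_jokes in your graph state and have the router send
# the flow back to the Writer whenever is_repeat(...) returns True.
```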
Instead of having the Critic decide, why not let a human vote?
This is a great way to explore human-in-the-loop supervision — a powerful pattern in real-world agentic systems.
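A minimal sketch of swapping the LLM Critic for a human vote, assuming the same joke and approved state fields as above:

```python
def human_critic(state: dict) -> dict:
    """Let the user, not the LLM, decide whether the joke gets shown."""
    print(f"\nCandidate joke:\n{state['joke']}\n")
    answer = input("Show this one? (y/n) ").strip().lower()
    return {"approved": answer.startswith("y")}
```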
Why let good jokes disappear? Try saving the jokes that make the cut so you can revisit your greatest hits later.
This adds a light persistence layer to your system — another step toward real-world utility.
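For example (the filename and fields here are hypothetical), you could append each approved joke to a JSON-lines file from your show_final_joke node:

```python
import json
from pathlib import Path

JOKE_LOG = Path("approved_jokes.jsonl")  # hypothetical location

def save_joke(joke: str, category: str) -> None:
    """Append an approved joke so it survives restarts."""
    with JOKE_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"category": category, "joke": joke}) + "\n")
```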
In this lesson, you leveled up your LangGraph bot by adding a writer–critic loop — your first taste of agentic behavior powered by LLMs.
You now know how to extend your graph state, add LLM-powered Writer and Critic nodes, route rejected jokes back for another attempt, and cap retries so the loop always ends.
And you did all of it inside a clean, inspectable LangGraph structure — no tangled if-else logic, no giant monolithic prompt.
You’ve just crossed the threshold into agentic system design: where outputs are generated, tested, and filtered internally before ever reaching the user.
Up next: you’ll explore LangSmith — the observability layer that lets you debug, trace, and improve flows like this with confidence.
Let’s keep going.
⬅️ Previous - First LangGraph Project
➡️ Next - Intro to LangSmith