A comprehensive learning path covering both LangChain and LangGraph, featuring 20+ practical implementations progressing from basic concepts to advanced agent architectures.
This repository provides a structured approach to learning modern LLM application development using both LangChain and LangGraph frameworks. The collection begins with fundamental concepts and gradually progresses to complex, specialized agent implementations and graph-based workflows. It serves as both an educational resource and a reference for implementing AI agents in real-world scenarios.
This work addresses several recurring challenges in AI and LLM integration, with contributions relevant to both learning the frameworks and applying them in practice.
- **Enterprise AI Integration**
- **Development Efficiency**
- **Cost Optimization**
- **Architecture Evolution**
- **Best Practices Development**
- **Innovation in Integration**
- **Enterprise Solutions**
- **Development Tools**
- **Data Processing**
- **Scalability**
- **Innovation**
- **Standards**
This work's significance extends beyond immediate technical implementation, influencing how organizations approach AI integration and contributing to the broader evolution of LLM-based systems. The patterns and practices established here serve as a foundation for future development in the field.
- **Modular Tool Integration:** Demonstrated scalable patterns for integrating multiple tools with LLMs
- **State Management Solutions:** Established effective approaches for managing agent state
- **Error Recovery Mechanisms:** Designed robust error handling patterns
- **Performance Optimization**
- **Security Considerations**
- **Development Best Practices**
- **Scalability Solutions**
- **Monitoring and Observability**
- **Maintenance Strategies**
- **LangChain Optimizations**
- **LangGraph Advancements**
These findings and contributions provide practical solutions for building robust, production-ready AI applications using LangChain and LangGraph. The implementations demonstrate how to address common challenges while maintaining code quality and system reliability.
Clone this repository:
```bash
git clone https://github.com/timeless-residents/handson-langchain.git
cd langchain-tutorial
```
This command creates a local copy of the repository. We use HTTPS cloning for broader compatibility and easier setup compared to SSH, especially for users behind corporate firewalls.
Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Virtual environments are crucial for project isolation: they prevent dependency conflicts between projects and ensure reproducible environments. The `venv` module is chosen over alternatives like `virtualenv` because it has been included in Python's standard library since Python 3.3.
Install common dependencies:
```bash
pip install langchain langchain-openai langchain-community langgraph python-dotenv
```
We install these specific packages because:

- `langchain`: Core framework for building LLM applications
- `langchain-openai`: OpenAI-specific implementations
- `langchain-community`: Community-contributed components
- `langgraph`: Graph-based workflow management
- `python-dotenv`: Secure environment variable management

Set up your OpenAI API key:
Create a `.env` file in the root directory with the following content:

```
OPENAI_API_KEY=your_api_key_here
```
We use environment variables instead of hardcoding API keys as a security best practice. The `.env` file is included in `.gitignore` to prevent accidental exposure of sensitive credentials.
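As an illustration of what `load_dotenv()` does under the hood, here is a minimal stdlib-only sketch that reads `KEY=VALUE` lines into `os.environ` (the parsing rules are simplified relative to the real `python-dotenv` package, and `load_env_file` is a hypothetical name used only for this example):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments ignored."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Never overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())
```

The real library additionally handles quoting, variable interpolation, and export prefixes, which is why we install it rather than roll our own.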
The repository follows a progressive learning path, with each directory serving a specific educational purpose:
```
langchain-tutorial/
├── step1.py         # Basic LLM usage with LangChain
├── step2.py         # Multi-tool agent implementation
├── steps/           # Additional introductory steps (optional)
│   ├── step3.py
│   └── ...
├── usecase-001/     # Basic Calculator Agent (LangChain)
│   ├── main.py
│   ├── README.md
│   └── requirements.txt
└── ...
```
This structure is designed for incremental learning, with each subsequent directory building upon concepts introduced in previous sections.
Basic LLM usage (`step1.py`):

```python
from langchain_openai import OpenAI
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Create OpenAI LLM instance
llm = OpenAI()

# Query the LLM
prompt = "What's the weather like today?"
response = llm.invoke(prompt)
print("LLM Response:")
print(response)
```
This code demonstrates several key concepts:
- `load_dotenv()` loads environment variables securely, a crucial practice for managing API keys and sensitive data.
- `OpenAI()` creates an LLM instance with default parameters. We use the default settings initially for simplicity, but these can be customized for temperature, max tokens, etc.
- `llm.invoke(prompt)` sends a synchronous request to the LLM. We use synchronous calls here for clarity, though asynchronous operations are available for production scenarios.

Multi-tool agent (`step2.py`):
```python
from langchain.agents import initialize_agent, Tool
from langchain.tools import DuckDuckGoSearchRun
from langchain_openai import OpenAI
from datetime import datetime

# Initialize tools
search = DuckDuckGoSearchRun()
calculator = Tool(
    name="Calculator",
    func=lambda x: eval(x),  # Note: use a safer evaluator in production
    description="Useful for mathematical calculations",
)
time_tool = Tool(
    name="Time",
    func=lambda _: datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
    description="Returns the current time",
)

# Create and initialize the agent
llm = OpenAI(temperature=0)
agent = initialize_agent(
    tools=[search, calculator, time_tool],
    llm=llm,
    agent="zero-shot-react-description",
    verbose=True,
)
```
This implementation showcases several advanced concepts:
- **Tool Integration:** Each tool is encapsulated with a clear name and description, helping the agent understand when to use each tool. The calculator uses `eval()` for simple calculations (Note: in production, use safer evaluation methods).
- **Agent Configuration:**
  - `temperature=0`: Set to 0 for deterministic responses, crucial for tool-using agents
  - `zero-shot-react-description`: This agent type is chosen because it:
    - Requires no task-specific examples (zero-shot)
    - Follows the ReAct pattern of interleaved reasoning and action steps
    - Selects tools based solely on their names and descriptions
- **Verbose Mode:** Enabled for learning purposes, allowing observation of the agent's decision-making process.
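To build intuition for how descriptions drive tool choice, here is a hypothetical, stdlib-only sketch of description-based routing. A real ReAct agent lets the LLM do this matching via the prompt; the keyword-overlap scoring below is only a stand-in, and `ToolSpec`/`pick_tool` are illustrative names:

```python
from dataclasses import dataclass

@dataclass
class ToolSpec:
    name: str
    description: str

def pick_tool(query: str, tools: list) -> str:
    """Score each tool by word overlap between the query and its description."""
    words = set(query.lower().split())
    def score(t: ToolSpec) -> int:
        return len(words & set(t.description.lower().split()))
    return max(tools, key=score).name

tools = [
    ToolSpec("Calculator", "useful for mathematical calculations"),
    ToolSpec("Time", "returns the current time"),
]
print(pick_tool("what is the current time", tools))  # → Time
```

This is why clear, distinctive tool descriptions matter: vague or overlapping descriptions make tool selection ambiguous for the agent in the same way they would for this toy scorer.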
Each use case is carefully structured to demonstrate specific patterns, techniques, and capabilities:
001: Basic Calculator Agent
```python
import ast

def safe_eval(expression: str) -> float:
    """
    Safely evaluate mathematical expressions.

    Args:
        expression (str): Mathematical expression to evaluate

    Returns:
        float: Result of the evaluation

    Safety:
        - Validates AST node types before evaluation instead of calling eval() directly
        - Rejects anything other than numeric literals and arithmetic operators
        - Handles division by zero
    """
    try:
        # Parse the string into an abstract syntax tree
        tree = ast.parse(expression, mode="eval")
        # Validate node types: only numeric literals and basic arithmetic allowed
        allowed = (ast.Expression, ast.Constant, ast.BinOp, ast.UnaryOp,
                   ast.Add, ast.Sub, ast.Mult, ast.Div, ast.UAdd, ast.USub)
        for node in ast.walk(tree):
            if not isinstance(node, allowed):
                raise ValueError("Invalid expression")
        # Evaluate only after the tree has been validated
        return float(eval(compile(tree, "<string>", "eval")))
    except ZeroDivisionError:
        raise ValueError("Division by zero")
    except ValueError:
        raise
    except Exception as e:
        raise ValueError(f"Invalid expression: {str(e)}")
This implementation demonstrates:

- AST-based validation as a safer alternative to raw `eval()`
- Explicit handling of division by zero
- Consistent, typed error reporting via `ValueError`
002: Weather Information Agent
```python
from typing import Dict, Any
import requests

class WeatherAPIError(Exception):
    """Raised when a weather API request fails."""

class WeatherAPI:
    """
    Weather API integration with comprehensive error handling and type safety.

    Implementation Details:
        - Uses environment variables for API configuration
        - Implements retry logic for resilience
        - Provides detailed error messages for debugging
    """

    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://api.weatherapi.com/v1"

    def get_weather(self, location: str) -> Dict[str, Any]:
        """
        Fetch weather data with error handling and validation.

        Args:
            location: City name or coordinates

        Returns:
            Dictionary containing weather data

        Raises:
            WeatherAPIError: For API-related failures
        """
        try:
            response = requests.get(
                f"{self.base_url}/current.json",
                params={"key": self.api_key, "q": location, "aqi": "no"},
                timeout=10,
            )
            response.raise_for_status()
            return response.json()
        except requests.RequestException as e:
            raise WeatherAPIError(f"API request failed: {str(e)}")
```
Key features demonstrated:

- Typed API integration with `requests` and explicit timeouts
- Descriptive error messages for debugging
- Wrapping transport errors in a domain-specific exception (`WeatherAPIError`)
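The retry logic mentioned in the docstring can be sketched as a small decorator with exponential backoff. This is a generic, stdlib-only illustration (the `with_retries` name and parameters are assumptions for this example), not tied to any particular HTTP library:

```python
import time
from functools import wraps

def with_retries(max_attempts: int = 3, base_delay: float = 0.1):
    """Retry a function with exponential backoff on any exception."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # exhausted: surface the last error
                    time.sleep(delay)
                    delay *= 2  # exponential backoff
        return wrapper
    return decorator
```

Applied to `get_weather`, this would absorb transient network failures while still raising `WeatherAPIError` once the attempts are exhausted.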
003: Web Search Agent
```python
from langchain.tools import DuckDuckGoSearchRun
from typing import List, Optional

class EnhancedSearchTool:
    """
    Advanced search tool with result filtering and validation.

    Features:
        - Content filtering
        - Result limiting
        - Cache management
    """

    def __init__(self, max_results: int = 5):
        self.search = DuckDuckGoSearchRun()
        self.max_results = max_results
        self._cache = {}

    def search_with_filter(
        self,
        query: str,
        filters: Optional[List[str]] = None,
    ) -> List[str]:
        """
        Perform filtered search with caching.

        Args:
            query: Search query string
            filters: Optional list of filter terms

        Returns:
            List of filtered search results

        Implementation:
            - Results are cached for performance
            - Filters are applied post-search, case-insensitively
        """
        cache_key = f"{query}:{','.join(filters or [])}"
        if cache_key in self._cache:
            return self._cache[cache_key]

        # DuckDuckGoSearchRun.run returns a single string; split into lines
        results = self.search.run(query).splitlines()
        if filters:
            results = [
                r for r in results
                if any(f.lower() in r.lower() for f in filters)
            ]

        results = results[: self.max_results]
        self._cache[cache_key] = results
        return results
```
Implementation highlights:

- Simple in-memory caching keyed by query and filters
- Post-search filtering with case-insensitive matching
- Bounded result lists via `max_results`
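The hand-rolled cache above can also be expressed with the standard library's `functools.lru_cache`, which adds an eviction bound for free. A sketch under stated assumptions: `fake_search` is a stand-in for a real search backend, and filters must be passed as a hashable tuple rather than a list:

```python
from functools import lru_cache

def fake_search(query: str) -> list:
    """Stand-in search backend for illustration only."""
    return [f"{query} result {i}" for i in range(3)]

@lru_cache(maxsize=128)
def cached_search(query: str, filters: tuple = ()) -> tuple:
    """Cached search; arguments and return value must be hashable."""
    results = fake_search(query)
    if filters:
        results = [r for r in results
                   if any(f.lower() in r.lower() for f in filters)]
    return tuple(results)
```

The trade-off versus a dict cache: `lru_cache` handles eviction automatically, but it forces hashable arguments and offers no per-entry expiry, which matters for search results that go stale.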
004: Multi-Tool Agent
```python
from langchain.agents import initialize_agent
from langchain.memory import ConversationBufferMemory
from langchain.tools import Tool
from typing import List, Dict, Any

class MultiToolAgent:
    """
    Advanced agent implementation with multiple tool integration.

    Architecture:
        - Tool registry for dynamic tool management
        - Memory management for context retention
        - Error recovery mechanisms
    """

    def __init__(self, llm, tools: List[Tool], memory_config: Dict[str, Any]):
        self.llm = llm
        self.tools = tools
        self.memory = ConversationBufferMemory(
            memory_key="chat_history",
            return_messages=True,
            **memory_config,
        )

    def execute(self, query: str) -> Dict[str, Any]:
        """
        Execute query using appropriate tools.

        Implementation:
            - Tool selection delegated to the agent
            - Memory carried across calls for conversational context
            - Error recovery with fallback suggestions
        """
        agent = initialize_agent(
            tools=self.tools,
            llm=self.llm,
            agent="chat-conversational-react-description",
            memory=self.memory,
            verbose=True,
        )
        try:
            result = agent.run(query)
            return {"status": "success", "result": result}
        except Exception as e:
            return {
                "status": "error",
                "error": str(e),
                "recovery_suggestion": self._get_recovery_action(e),
            }

    def _get_recovery_action(self, error: Exception) -> str:
        """Map an exception to a human-readable recovery suggestion."""
        return f"Retry the query or check tool availability ({type(error).__name__})"
```
Advanced features demonstrated:

- Conversation memory shared across tool invocations
- Structured success/error result envelopes
- Recovery suggestions attached to failures
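The success/error envelope pattern is framework-independent, so it is worth isolating. A minimal stdlib sketch (the `run_with_envelope` name is an assumption for this example):

```python
from typing import Any, Callable, Dict

def run_with_envelope(func: Callable[[], Any]) -> Dict[str, Any]:
    """Wrap a call so callers always receive a structured result dict."""
    try:
        return {"status": "success", "result": func()}
    except Exception as e:
        return {
            "status": "error",
            "error": str(e),
            "recovery_suggestion": f"Retry or inspect {type(e).__name__}",
        }
```

Callers then branch on `status` instead of wrapping every agent call in their own try/except, which keeps error handling uniform across tools.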
```python
from typing import Any, List, Dict, Optional

def process_data(
    input_data: List[Dict[str, Any]],
    config: Optional[Dict[str, str]] = None,
) -> Dict[str, Any]:
    """
    Process input data according to optional configuration.

    Args:
        input_data: List of dictionaries containing data to process
        config: Optional configuration parameters

    Returns:
        Processed data as a dictionary
    """
    ...  # Implementation
```
Type hints are used throughout the codebase because they:

- Document expected inputs and outputs directly in the signature
- Enable static analysis with tools such as `mypy`
- Improve editor autocompletion and refactoring support
```python
from typing import Any, Dict
import json
import requests

class CustomError(Exception):
    """Base class for custom exceptions"""

def handle_api_request(url: str) -> Dict[str, Any]:
    """
    Handle external API requests with comprehensive error handling.

    Args:
        url: API endpoint URL

    Returns:
        API response data

    Raises:
        CustomError: When the API request fails or returns invalid JSON
    """
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        raise CustomError(f"API request failed: {str(e)}")
    except json.JSONDecodeError as e:
        raise CustomError(f"Invalid JSON response: {str(e)}")
```
This pattern demonstrates:

- A custom exception base class for domain-specific failures
- Translating low-level library errors into one consistent error type
- Error messages that preserve the underlying cause
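Extending the base class gives callers a single type to catch while still allowing fine-grained handling where needed. The class and function names below are illustrative, not part of the repository:

```python
class AppError(Exception):
    """Base class for all application errors."""

class NetworkError(AppError):
    """Transport-level failures."""

class ParseError(AppError):
    """Malformed response payloads."""

def fetch(simulate: str):
    """Simulated fetch that fails in controlled ways for demonstration."""
    if simulate == "network":
        raise NetworkError("connection refused")
    if simulate == "parse":
        raise ParseError("invalid JSON")
    return {"ok": True}

# One except clause handles every domain error:
try:
    fetch("network")
except AppError as e:
    print(f"handled: {e}")  # → handled: connection refused
```

Code that cares about the distinction can catch `NetworkError` specifically (e.g. to retry) while letting `ParseError` propagate to the base handler.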
- **Memory Management**
- **Tool Integration**
- **Error Handling Challenges**
- **Synchronous vs. Asynchronous**
- **Security Considerations**
- **Development Complexity**
- **LangChain**
- **LangGraph**
- **Scalability**
- **Monitoring**
- **Maintenance**
These limitations and trade-offs should be carefully considered when implementing these patterns in production environments. Mitigation strategies should be developed based on specific use case requirements.
This project is licensed under the MIT License.