From LangChain Frustration to PydanticAI Discovery
After spending considerable time trying to learn LangChain for building AI agent systems, I found myself increasingly frustrated with the framework's rapid and often breaking API changes. Any book, class, or tutorial you work through has some chance of being outdated relative to the current version of a library, but moving classes and constructs from one place to another every few months, or renaming them outright, is just awful. It gave me flashbacks to trying to learn JavaScript!
Discovering PydanticAI
That frustration led me to investigate alternatives, and I discovered PydanticAI - a framework built by the team behind Pydantic. I've been using Pydantic for type checking and messaging contracts for REST services for years now, so it was intriguing to see how they would approach an AI framework. Like its namesake library, the agent framework seemed straightforward but robust.
Learning Approach
I decided to take a methodical approach to understanding PydanticAI's capabilities:
Phase 1: Capability Exploration
I started by systematically researching and implementing examples of each core PydanticAI capability, building a comprehensive research table to track my progress. This methodical approach covered:
- Context Management: Multi-turn conversations using `message_history` and `result.new_messages()`, with debugging patterns for context window visibility
- Error Handling: Structured error responses with Pydantic models, retry patterns, and graceful agent degradation using try/except in tools
- Streaming Responses: Real-time output using `run_stream` for chat-like experiences and long-form content generation
- Multi-Agent Pipelines: Sequential agent handoff with structured outputs at each step, demonstrating agent orchestration patterns
- Local RAG Implementation: Complete retrieval-augmented generation using ChromaDB and sentence-transformers with gaming-themed documentation
- Memory Systems: Building a reusable MemoryAgent with abstract storage interfaces and JSON file backends for persistent state
- Testing & Validation: Comprehensive testing using TestModel (deterministic, API-free) and FunctionModel (custom control) for agent logic validation
- Monitoring & Observability: Logfire integration and OpenTelemetry for tracing agent runs, tool executions, and performance metrics
Each capability was implemented in its own example file with .env model configuration, Logfire logging, and extensive documentation referencing the official PydanticAI docs. This systematic exploration helped me understand not just what PydanticAI could do, but how to implement production-ready patterns with proper error handling, testing, and observability.
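The memory-systems entry above is the most framework-agnostic of the bunch, so here is a rough sketch of the shape it took: an abstract storage interface with a JSON file backend. The class and method names here are illustrative, not the actual MemoryAgent code.

```python
# Hypothetical sketch of the "abstract storage interface + JSON backend"
# memory pattern; names are illustrative, not the project's real API.
import json
from abc import ABC, abstractmethod
from pathlib import Path


class MemoryStore(ABC):
    """Minimal contract any memory backend must satisfy."""

    @abstractmethod
    def load(self) -> dict:
        """Return the persisted state, or an empty dict if none exists."""

    @abstractmethod
    def save(self, state: dict) -> None:
        """Persist the given state."""


class JsonFileStore(MemoryStore):
    """Persists agent state as a JSON document on disk."""

    def __init__(self, path: str) -> None:
        self.path = Path(path)

    def load(self) -> dict:
        if not self.path.exists():
            return {}
        return json.loads(self.path.read_text())

    def save(self, state: dict) -> None:
        self.path.write_text(json.dumps(state, indent=2))


if __name__ == '__main__':
    store = JsonFileStore('memory.json')
    state = store.load()
    state['turns'] = state.get('turns', 0) + 1  # survive across runs
    store.save(state)
```

Because the agent only ever sees the `MemoryStore` interface, swapping the JSON file for Redis or Postgres later is a one-class change.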
Phase 2: Integration Project
Once I was comfortable with the individual capabilities, I wanted to pull them together into an example application that I could use to demonstrate it all. Rather than building another chatbot, I chose to create a text-based adventure game that would showcase multiple agentic patterns working in harmony.
The Multi-Agent Text Adventure
The result was a multi-agent AI text adventure that demonstrates sophisticated agentic AI architecture patterns. The project includes:
- Multiple specialized agents working together (intent parsing, room description, inventory management)
- RAG integration with ChromaDB for rich environmental storytelling
- Persistent game state using Redis and PostgreSQL
- Comprehensive testing strategy with both unit and integration tests
- Full-stack implementation with FastAPI backend and Next.js frontend
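The "structured outputs at each step" handoff between those specialized agents can be sketched with plain Pydantic models. The schemas below are illustrative stand-ins, not the game's actual types; in recent PydanticAI versions an agent can be bound to such a model via its output type so each stage's result is validated before the next agent runs.

```python
# Hedged sketch of structured agent handoff: each stage emits a Pydantic
# model the next stage consumes. Model names and fields are hypothetical.
from typing import Literal, Optional

from pydantic import BaseModel


class ParsedIntent(BaseModel):
    """What the intent-parsing agent hands to downstream agents."""
    action: Literal['move', 'look', 'take', 'use']
    target: Optional[str] = None


class RoomDescription(BaseModel):
    """What the room-description agent produces for the frontend."""
    room_id: str
    narration: str


# A downstream stage works with validated data, never raw LLM text:
intent = ParsedIntent(action='move', target='north door')
room = RoomDescription(
    room_id='crypt-entrance',
    narration=f'You {intent.action} toward the {intent.target}.',
)
print(room.narration)
```

Validation failures surface at the boundary between agents rather than as garbled behavior three steps later, which is exactly the property that made Pydantic useful for REST contracts.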
The gaming theme made the complex AI patterns accessible and engaging while showcasing their practical capabilities in a way that's easy to understand and demonstrate.
Why This Approach Worked
This methodical approach - exploring individual capabilities before integration - proved invaluable for several reasons:
- Solid Foundation: Understanding each pattern in isolation made debugging and optimization much easier
- Confidence Building: Successfully implementing each capability built confidence in the framework
- Real-World Application: The final project demonstrates how these patterns work together in practice
- Portfolio Value: The end result serves as both a learning exercise and a portfolio piece
Reflections on PydanticAI vs LangChain
LangChain is obviously still in wide use, and it seems to keep growing. But I've also read that it is so popular because it was first to market, not because it is necessarily the best. It might be worth revisiting if they change their approach to ongoing development and lock down at least most of the library structure. For now, though, I'd rather use a different library.
PydanticAI feels lightweight in that it doesn't lock you into odd conventions or specific LLM providers. The world of AI is changing so fast that I would rather have a lightweight helper covering the stable best practices than tie myself to one specific provider or architecture.