...or LangGraph, or LlamaIndex, or RAG, or whatever new AI-hype framework is trending this week in order to build an AI-powered app.

More often than not, these frameworks are just wrappers around basic functionality—in this case, calling an API. And the layers of abstraction they introduce can make even simple things (“prompt an LLM”) feel unnecessarily complex.

Take RAG, for example. All it really does is frontload your prompt with additional context. That’s it. In practice, it boils down to concatenating a few strings—something you can do in five lines of code. But LangChain adds layer upon layer of custom methods, config objects, routing logic, etc., that often just get in the way.
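To make that concrete, here's a minimal sketch of the idea. The retrieval step is a naive keyword-overlap score (real systems use embeddings), and all names here are illustrative—but notice that the "augmentation" itself is nothing more than string concatenation:

```python
def build_prompt(question: str, documents: list[str]) -> str:
    # "Retrieve": pick the document sharing the most words with the question.
    # (A stand-in for embedding similarity; the point is what comes next.)
    words = set(question.lower().split())
    best = max(documents, key=lambda d: len(words & set(d.lower().split())))
    # "Augment": frontload the prompt with the retrieved context. That's RAG.
    return f"Context:\n{best}\n\nQuestion: {question}"
```

Swap in a vector store for the `max(...)` line and you have most production RAG pipelines—the prompt assembly doesn't change.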

Sure, these frameworks have their use cases. If your “context” is too large for your LLM’s token limit, using retrieval to send only the most relevant chunk does make sense. But with 64k+ token limits becoming standard, even that’s increasingly rare.

For poketto.me, I still use LangChain—but only as a thin abstraction layer over the LLM APIs. It makes vendor switching (Claude ↔︎ GPT ↔︎ DeepSeek, etc.) quick and painless. That’s about the only real benefit I’ve found so far.
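As a sketch of what that thin layer buys you: with LangChain's `init_chat_model`, switching vendors is a one-string change. Model names below are illustrative, and running this requires the relevant provider packages and API keys—so the helper only defines the call, it doesn't make it:

```python
def ask(model_name: str, prompt: str) -> str:
    # Lazy import so the sketch can be read (and defined) without
    # LangChain installed; `pip install langchain` plus a provider
    # package (langchain-openai, langchain-anthropic, ...) to run it.
    from langchain.chat_models import init_chat_model

    # init_chat_model infers the provider from the model name
    # ("gpt-..." -> OpenAI, "claude-..." -> Anthropic, etc.).
    llm = init_chat_model(model_name)
    return llm.invoke(prompt).content

# Vendor switch = different string, same code path:
# ask("gpt-4o-mini", "Summarize this note...")
# ask("claude-3-5-haiku-latest", "Summarize this note...")
```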