AI is not at the core of what poketto.me does, but it helps a lot: I’m using LLMs to translate saved content and to smooth out formatting issues (especially with PDF content). Any old LLM can do these things quite well, but when it comes to pricing, none beats DeepSeek.
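For context, DeepSeek exposes an OpenAI-compatible API, so a translation call is a few lines with the standard OpenAI client. The system prompt and the `translate` helper below are my own illustrative sketch, not poketto.me's actual code:

```python
# Sketch of a translation call against DeepSeek's OpenAI-compatible API.
# The prompt wording and function names are illustrative assumptions,
# not poketto.me's actual implementation.

def build_messages(text: str, target_lang: str) -> list[dict]:
    """Build the chat messages for a translation request."""
    return [
        {"role": "system",
         "content": f"Translate the user's text into {target_lang}. "
                    "Preserve the original formatting."},
        {"role": "user", "content": text},
    ]

def translate(text: str, target_lang: str, api_key: str) -> str:
    # Requires `pip install openai`; imported lazily so the rest of the
    # module works without the dependency.
    from openai import OpenAI

    client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=build_messages(text, target_lang),
    )
    return resp.choices[0].message.content
```

Because the API is OpenAI-compatible, swapping providers later is mostly a matter of changing `base_url` and the model name.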

When using their API, processing a million input tokens can be as cheap as $0.035, and a million output tokens will cost you at most $1.10. For context: a typical 1,500-word essay works out to roughly 2,000 tokens (input and output combined).
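Back-of-the-envelope, the per-essay cost looks like this (prices hardcoded from the figures above; check DeepSeek's pricing page for current numbers):

```python
# Rough cost estimate using the per-million-token prices quoted above:
# $0.035/1M input (the cheapest case) and $1.10/1M output (the most
# expensive case).
INPUT_PRICE_PER_M = 0.035
OUTPUT_PRICE_PER_M = 1.10

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

# A 1,500-word essay at ~2,000 tokens total: even in the worst split
# (everything billed as output), it stays well under a cent.
print(f"${cost_usd(0, 2_000):.6f}")   # upper bound: all tokens billed as output
print(f"${cost_usd(2_000, 0):.6f}")   # lower bound: all tokens billed as input
```

Even the pessimistic upper bound is about a fifth of a cent per essay, which is why the pricing comparison below isn't close.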

Compare that to the pricing of OpenAI and Anthropic, and I’ve got a clear winner.

However, this comes with one big downside (besides the fact that I’m sending content over to China for processing): latency. It can take several minutes for DeepSeek to finish processing a request, especially during Chinese business hours.

But for poketto.me, that’s not a big usability concern: users save content to “read it later” anyway, so whether processing takes a minute or two longer isn’t a big deal. Plus, DeepSeek offers discounted pricing during “off-peak” hours, when the price per 1M tokens drops by 50%.
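At the time of writing, that off-peak window runs from 16:30 to 00:30 UTC. Since saved articles aren’t time-critical, one could defer processing into that window; a minimal check might look like this (treat the window boundaries as an assumption and verify them against DeepSeek’s pricing page):

```python
from datetime import datetime, time, timezone

# Off-peak window as documented at the time of writing: 16:30-00:30 UTC.
# Verify against DeepSeek's pricing page before relying on this.
OFF_PEAK_START = time(16, 30)
OFF_PEAK_END = time(0, 30)

def is_off_peak(dt: datetime) -> bool:
    """True if dt (timezone-aware) falls in DeepSeek's discounted window."""
    t = dt.astimezone(timezone.utc).time()
    # The window wraps past midnight UTC, so it's a disjunction, not a range.
    return t >= OFF_PEAK_START or t < OFF_PEAK_END

print(is_off_peak(datetime(2025, 1, 1, 17, 0, tzinfo=timezone.utc)))  # True
print(is_off_peak(datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)))  # False
```

A background queue could simply hold non-urgent jobs until `is_off_peak` returns true, halving the (already tiny) bill.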