I love GCS (Google Cloud Storage). It’s a simple, robust, and powerful solution for storing files online and accessing them either programmatically or via HTTP. Storage is dirt cheap—especially if you don’t need global replication or sophisticated backups. And you can even turn a GCS bucket into an HTTPS-secured, internet-facing web server for static websites. https://poketto.me, for example, runs on that architecture.

Another good use case: the #podcast feature in poketto.me. Naturally, the generated MP3 files need to live somewhere, and storing them in a database or serving them through my Python web server would be… silly. So I push the generated files to a GCS bucket, and all is well: HTTPS-secured, fast, and compatible with any podcast client in the world.

But here’s the catch: I also stored the podcast feed XML (i.e. the "inventory" that tells podcast tools which episodes are available, where the MP3s live, how long each one runs, and other metadata) in GCS. Every time a new episode is added, the app overwrites that XML file, and your podcast tool should pick up the change.

Should.

Enter: GCS caching.

GCS caches aggressively by default: publicly readable objects are served with a `Cache-Control: public, max-age=3600` header unless you override it. Great for performance, terrible for dynamic content. Fetch the updated podcast feed and you’ll often get a stale copy from some cache along the way, so your shiny new episode can take an hour or more to show up. Frustrating, especially when you just saved an article and expect it to appear in your podcast tool right away.

Technically, you can work around this. A common trick is to append a cache-busting query string like ?version=42 to the feed URL; caches treat every new value as a distinct URL, so the request falls through to GCS and you get fresh content. But realistically, you can’t ask users to delete and re-add the podcast feed every time a new episode appears, right?
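For illustration, a cache-buster is nothing more than an extra query parameter bolted onto the URL. A minimal sketch using only the standard library (the function name is made up):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_cachebuster(url: str, version: int) -> str:
    """Append (or replace) a `version` query parameter so caches see a new URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["version"] = str(version)
    return urlunparse(parts._replace(query=urlencode(query)))

# with_cachebuster("https://storage.googleapis.com/bucket/feed.xml", 42)
# → "https://storage.googleapis.com/bucket/feed.xml?version=42"
```

This works fine for URLs you control programmatically; it just doesn’t help when the URL is pasted once into someone’s podcast app and never touched again.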

My solution (for now) is twofold:

1️⃣ Server-side: I set every "no cache" directive I can when pushing the XML file to the bucket. It doesn’t fix the problem, but it improves the situation a bit.
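A sketch of that upload step, assuming the official google-cloud-storage client library; the bucket and object names are placeholders. The key detail is setting `cache_control` on the blob *before* uploading, so the directive is stored as object metadata and sent with every subsequent GET:

```python
# Cache-Control value stored on the object; sent back on every GET.
NO_CACHE = "no-cache, no-store, max-age=0, must-revalidate"

def upload_feed(bucket_name: str, feed_xml: str) -> None:
    """Upload the feed XML with caching disabled as far as GCS allows."""
    # Deferred import so the sketch can be read (and imported) without the package.
    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket(bucket_name).blob("feed.xml")
    blob.cache_control = NO_CACHE  # must be set before upload
    blob.upload_from_string(feed_xml, content_type="application/rss+xml")
```

Even with this, intermediate caches that already hold a copy (or that ignore the header) can still serve stale content, which is why step 2 exists.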

2️⃣ Medium-term: I’ll stop serving the podcast feed directly from GCS. Instead, I’ll route it through the Python backend. That way, an HTTP endpoint under my control can fetch the latest file from GCS and enforce cache-busting behind the scenes, while keeping the actual feed URL stable for users.

GCS is still a fantastic tool—but just because you can serve dynamic content from it doesn’t always mean you should.