In order to enable seamless interactions between @hypercatcher and @Castopod , I wrote a draft for the podcast:chapters API Specification.

All comments are welcome.

We hope this will open a path to more collaborations between platforms which use the PodcastIndex namespace.

Thank you David for your support. 🙏

Poke @adam @dave @jamescridland


@benjaminbellamy @hypercatcher @Castopod @adam @dave @jamescridland

I'd love to join the discussion, since I have also thought a lot about how to distribute chapters.

On first read this seems super centralized, which is okay if chapters are handled by the podcaster alone.

If we include "community chapters", which I think we should, then it will need more work.

Moving further ahead without thinking about how Podcast 2.0 clients can share and update chapters across clients would be a big mistake, imo.

@benjaminbellamy @hypercatcher @Castopod @adam @dave @jamescridland

That said, unless we want to move to something completely distributed, this API could work, and then it would be up to the individual podcast clients to make agreements to share chapters with each other through another API.

That might be the only way to make sure everyone in the network is trusted. :)

@martin @benjaminbellamy @Castopod @adam @dave @jamescridland Yeah, I agree, I'd like to do something in a more distributed manner. I have to admit I really like the idea, but I've barely looked into the many solutions out there, so I'm nowhere near having an actual solution. One thing I've just grazed the surface of is Filecoin. On the surface it seems like exactly what we need:

@martin @benjaminbellamy @Castopod @adam @dave @jamescridland @brianoflondon is also working on something that might work for us too. I'm meeting with him tomorrow morning to talk more about it.

I think this is a pretty simple integration that @benjaminbellamy has come up with, so I'm going to go ahead with it, but I'm definitely open to expanding to distributed chapters and opening up the API in different ways to facilitate a more distributed approach.

@hypercatcher @martin @benjaminbellamy @Castopod @adam @dave @jamescridland
I'm thinking it could alternatively be done with IPFS pinning services and IPFS gateways for http support.

An IPFS pinning service is an authenticated API that pins content on your behalf; it's generally paid, but I think there's a free tier someone could test with.

Not suggesting it's better than what's been proposed, but also don't know if anyone has tried it.
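For anyone who wants to try this, there is a vendor-neutral IPFS Pinning Service API spec (POST /pins) that several services implement. The sketch below only builds the JSON request body for such a call; the CID and name are placeholders, and any real call would also need the service's endpoint URL and a bearer token.

```python
import json

# Sketch of a request body for the vendor-neutral IPFS Pinning Service API
# (POST /pins). The CID and name below are placeholder values; a real call
# would send this body with an Authorization: Bearer <token> header.
def build_pin_request(cid: str, name: str) -> str:
    """Return the JSON body asking the service to pin `cid` under `name`."""
    body = {
        "cid": cid,    # content identifier of the chapters file
        "name": name,  # human-readable label, e.g. "ep42-chapters"
    }
    return json.dumps(body)

payload = build_pin_request(
    "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi",
    "ep42-chapters",
)
print(payload)
```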

@agates @martin @benjaminbellamy @Castopod @adam @dave @jamescridland I wonder if we could all just pin files to the same directory. To write an update, you'd read the most recently created file and post a new file with the change. Then everyone would just read the most recently created file in the directory?

@agates @martin @benjaminbellamy @Castopod @adam @dave @jamescridland that would be a super naive and wasteful implementation, but it's a start, lol. I'll see if there's a way to fetch a file from a directory by creation date.
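The "everyone reads the newest file" idea can be sketched locally like this; the file names and JSON contents are invented for the example, and a plain local directory stands in for whatever shared storage would actually be used.

```python
import tempfile
import time
from pathlib import Path

# Illustrative sketch of the "read the newest file in a shared directory"
# idea: every writer drops a new file, every reader picks the most
# recently modified one.
def latest_chapter_file(directory: Path) -> Path:
    """Return the most recently modified regular file in `directory`."""
    files = [p for p in directory.iterdir() if p.is_file()]
    return max(files, key=lambda p: p.stat().st_mtime)

with tempfile.TemporaryDirectory() as d:
    d = Path(d)
    (d / "chapters-v1.json").write_text('{"version": 1}')
    time.sleep(0.01)  # ensure distinct modification times
    (d / "chapters-v2.json").write_text('{"version": 2}')
    print(latest_chapter_file(d).name)  # chapters-v2.json
```

As noted above, this is naive: concurrent writers can race, and nothing garbage-collects old versions.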

@agates @martin @benjaminbellamy @Castopod @adam @dave Am I just waaaay overthinking this, though? What if we just create a new Podcast Index repo and we all push, pull, and read chapter files from there?

@hypercatcher @agates @martin @benjaminbellamy @Castopod @adam @dave once an episode drops out of a feed, are the chapters lost? Where else does the link live?

I think the Hive system I envisage will solve this.

@brianoflondon @hypercatcher @agates @martin @Castopod @adam @dave
The idea behind this drafted spec is not to define “HOW” to manage the chapter service but “WHO” manages it, so that ANY podcaster is able to choose ANY provider among those able to provide the chapter service.
That provider is then free to use IPFS, centralised HTTPS, or whatever makes sense to them and to their users.

(This spec is NOT an extension of the podcast:chapter tag, it sits next to it.)
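Whoever ends up hosting the service, the payload it serves could stay the same JSON Chapters format that the existing podcast:chapter(s) tag already points at. A minimal example, with invented titles and times:

```python
import json

# A minimal chapters document in the PodcastIndex "JSON Chapters" shape
# that the podcast:chapters tag references today. The titles, times, and
# URL below are made up for illustration.
chapters = {
    "version": "1.2.0",
    "chapters": [
        {"startTime": 0, "title": "Intro"},
        {"startTime": 120, "title": "Interview",
         "url": "https://example.com/guest"},
        {"startTime": 1800, "title": "Outro"},
    ],
}
print(json.dumps(chapters, indent=2))
```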

@hypercatcher @agates @martin @benjaminbellamy @Castopod @adam @dave @jamescridland

Pinning doesn't work like that.

If you pin a directory, you can never change the contents of the directory. If you change or add a file, it changes the CID value (hash) and in IPFS land this is a brand new directory.

This is a bit of a stumbling block for me. I am trying to see if one can host a static website on IPFS. You can set it up once, but updating it gets weird.

Unless I am missing something?
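The reason edits produce a "brand new" directory is that IPFS addresses content by its hash (the CID). A quick illustration of the principle, using sha256 as a stand-in for IPFS's real multihash:

```python
import hashlib

# IPFS names content by hash (the CID), so changing even one byte of a
# file, or of a directory listing, yields an entirely different address.
# sha256 stands in here for IPFS's actual multihash; the principle holds.
original = hashlib.sha256(b'{"title": "Intro", "startTime": 0}').hexdigest()
edited = hashlib.sha256(b'{"title": "Intro!", "startTime": 0}').hexdigest()

print(original == edited)  # False: one changed byte, a different address
```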

@davekeeshan @hypercatcher @martin @benjaminbellamy @Castopod @adam @dave @jamescridland

@davekeeshan, you're correct, and that's where IPNS/DNSLink come in. That's how @dave does the daily SQLite dump of feeds.
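A DNSLink is just a DNS TXT record under a `_dnslink.` prefix that names the current CID, so a gateway can resolve a stable domain to whatever you most recently pinned. A hypothetical zone-file entry (the domain and CID here are placeholders):

```
; Hypothetical DNSLink record. IPFS gateways and IPFS-aware clients
; resolve the domain to whatever CID the TXT record currently names,
; so publishing an update means re-pinning and editing this one record.
_dnslink.example.com.  300  IN  TXT  "dnslink=/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"
```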

@agates @dave

Ok I went the DNSLink route today.

That is my whole site, hosted on ipfs, including the webpages, the rss feeds and all the mp3s:

I just happened to already be up and running with Cloudflare, which helped.

It is almost too good: you can barely tell that the information is sourced from IPFS. Here is the source of the real link to the RSS.


@davekeeshan @agates Very cool. It's really quick to load. You can't tell at all that it's not normal HTTP delivery.

@dave @agates I suspect that is due to some level of caching that Cloudflare is doing in between.

I am curious to see how heavy a podcast-hosting load could be sustained this way.

PodcastIndex Social

Intended for all stakeholders of podcasting who are interested in improving the ecosystem