Waste of Bandwidth

Over dinner with Andrew Devigal last night, we got talking about the massive amounts of bandwidth it takes to run a successful podcast. This Week in Tech, for example, reportedly chews through a terabyte a week. The only reason they can afford to do it is that AOL donates the bandwidth.

Started thinking about how badly RSS polling skews traffic logs. I’m subscribed to maybe 100 sites, and my aggregator is pulling feeds once an hour, yet I end up actually viewing those feeds maybe twice a month. The ratio of bandwidth consumed to media digested is just silly. Map that same pattern onto podcasting and it gets worse: I subscribe to around 20 podcasts but only listen to three or four of them regularly. Multiply me by the few million podcast listeners out there, and massive amounts of bandwidth are being wasted downloading serialized media that never actually gets listened to.
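A quick back-of-the-envelope calculation makes the ratio concrete. The per-fetch feed size below is a guess, not a measurement:

```python
# Rough estimate of polled-vs-consumed bandwidth for one RSS user (all figures are guesses).
feeds = 100            # subscribed feeds
polls_per_day = 24     # aggregator pulls once an hour
days_per_month = 30
feed_size_kb = 30      # assumed average size of one full feed fetch
reads_per_month = 2    # how often the feeds actually get looked at

downloaded_kb = feeds * polls_per_day * days_per_month * feed_size_kb
useful_kb = feeds * reads_per_month * feed_size_kb   # generously assume every feed gets read both times

print(f"downloaded: {downloaded_kb / 1024:,.0f} MB/month")
print(f"actually read: {useful_kb / 1024:.1f} MB/month")
print(f"waste ratio: {downloaded_kb // useful_kb}:1")
```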

There’s got to be a fix for this dilemma, or podcasting will be pulled underwater by its own anchor. First of all, RSS aggregators, and podcast aggregators in particular, need to grow some AI, and should politely recommend that untouched feeds be unsubscribed, or at least put into some kind of stasis. But that’s a voluntary solution, which could only mitigate, rather than solve, the problem.
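The heuristic doesn’t need much intelligence. A minimal sketch, assuming the aggregator keeps a last-opened timestamp per feed (the field names and the 60-day threshold are made up):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=60)   # assumed threshold for "untouched"

# Hypothetical per-feed records an aggregator might keep.
feeds = [
    {"title": "This Week in Tech", "last_opened": datetime(2005, 9, 20)},
    {"title": "Some Dormant Show", "last_opened": datetime(2005, 3, 12)},
]

def suggest_stasis(feeds, now):
    """Return feeds the aggregator should offer to pause or unsubscribe."""
    return [f for f in feeds if now - f["last_opened"] > STALE_AFTER]

for feed in suggest_stasis(feeds, now=datetime(2005, 10, 17)):
    print(f"You haven't touched '{feed['title']}' in two months. Pause it?")
```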

Another approach would be to take the load off single connections through seamless integration of BitTorrent (or similar technology) into podcast aggregators. The trick there will be not so much download/format recognition as discovery. Here’s a tutorial on setting up a .torrent podcast… but until the discovery/consumption side of .torrent podcasting is solved, we’re still where we are right now — if you’re not listed in iTunes or similar, you’re not on the grid.
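On the format-recognition side, one common torrentcasting convention is simply a standard RSS 2.0 enclosure whose url points at a .torrent file, with the MIME type set to application/x-bittorrent so the aggregator knows to hand it to a BitTorrent client. A sketch of what one such item might look like, with placeholder URLs, sizes, and titles:

```python
import xml.etree.ElementTree as ET

# Build one torrentcast <item>; URL, length, title, and date are placeholders.
item = ET.Element("item")
ET.SubElement(item, "title").text = "Episode 12"
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/podcast/episode12.mp3.torrent",  # points at the .torrent, not the MP3
    "length": "24576",                                          # size of the .torrent file in bytes
    "type": "application/x-bittorrent",                         # tells the aggregator to use a BT client
})
ET.SubElement(item, "pubDate").text = "Mon, 17 Oct 2005 08:00:00 GMT"

print(ET.tostring(item, encoding="unicode"))
```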

And ultimately, .torrent casting would only distribute the bandwidth wastage evenly across the network, rather than solve it.

11 Replies to “Waste of Bandwidth”

  1. Check out Liberated Syndication. https://www.libsyn.com/

    They offer free blog hosting and they push out terabytes and terabytes of data. :) There was an Inside the Net podcast about them that was very interesting.

  2. Think about all the radio waves flowing through us every minute. Perhaps 99.99% of the waves never go into anything (antenna). And of those that do go into something, perhaps 99.99% are never actually observed (ears/eyes).

    Broadcasting seems wasteful, if only because of the wasted energy.

    I think the model of bloglines.com is a good one. Tens, hundreds, or thousands of us may subscribe to an RSS feed, but bloglines only updates it once, sharing the results with the many when (if) we come looking for summaries. Perhaps that model can work for fat media like Podcasts.
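    A minimal sketch of that fetch-once, serve-many idea, assuming a central service that polls each feed on its own schedule and hands every subscriber the cached copy (the interval, cache layout, and function names are made up):

    ```python
    import time
    import urllib.request

    CACHE = {}            # feed URL -> (fetched_at, body)
    POLL_INTERVAL = 3600  # the service hits each upstream feed at most once an hour

    def get_feed(url):
        """Serve subscribers from cache; fetch upstream at most once per interval."""
        fetched_at, body = CACHE.get(url, (0.0, None))
        if body is None or time.time() - fetched_at > POLL_INTERVAL:
            with urllib.request.urlopen(url) as resp:
                body = resp.read()
            CACHE[url] = (time.time(), body)
        return body

    # A thousand subscribers asking for the same feed now cost one upstream fetch per hour.
    ```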

  3. So, could we get a tracker (not sure if BT still needs trackers) at birdhouse to distribute .torrent podcasts? (This may be in my capabilities already and I don’t know it!)

  4. Sean – I’ve never noticed that. Is there any UI for that behavior, or does it just silently fail in the background (away from my machine, can’t check this now)? If it fails silently, I think that’s very bad design — I’ve wondered why some podcasts seem to stop production, never get updated again. Maybe that’s what’s going on….

    Oliver – Sounds like a great business, offering free bandwidth for massive waste! (Mostly kidding).

    Jeb – Interesting analogy between wasted broadcast signal and wasted signal over copper. I guess the significant difference is that copper bandwidth has to be paid for by the byte, unlike radio waves, and it also has the potential to overwhelm or saturate equipment (I guess this problem is dealt with in radio by limiting wattage to keep stations from stepping on each other).

    Joe – I’ve never looked into running a tracker for B’house generated .torrents, but would potentially be willing if I had a way to be reasonably sure that copyrighted material was not being distributed. Send me some pointers and I’ll look into it.

  5. I would not be surprised if somebody comes up with a scheme where podcasts/rss feeds are distributed by a decentralized set of servers – you upload it to one, the servers take care of the propagation, and everybody does the downloads from a server nearby.

    … oh, look! Usenet!

  6. RSS polling is the problem here. Seems like there should be a way to make it interrupt-driven instead. Perhaps your RSS aggregator keeps every feed you subscribe to up to date with your current IP address, the publishers send you a message when they update, and only THEN does your aggregator download the feed (a rough sketch of that idea follows this reply).

    This is one of those problems that IPv6 will do a lot to help with – when every machine on the net can have a fixed and permanent IP address, things will make a lot more sense. Internet telephony, chat, etc. become a lot easier too. How many of these services really distill down to using DNS to facilitate connections between dynamically assigned IP addresses?

    -Jim
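    A rough sketch of that interrupt-driven idea: the aggregator runs a tiny HTTP endpoint and fetches a feed only when the publisher pings it. The port, path handling, and the registration step are all hypothetical here:

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PingHandler(BaseHTTPRequestHandler):
        """Tiny endpoint a publisher hits when one of its feeds changes."""

        def do_POST(self):
            feed_url = self.path.lstrip("/")   # hypothetical convention: publisher POSTs to /<feed-url>
            self.send_response(200)
            self.end_headers()
            print(f"publisher says {feed_url} changed; fetch it once, now")
            # ...the aggregator would download that one feed here, instead of polling hourly...

    if __name__ == "__main__":
        # The aggregator would have registered its own address with each publisher beforehand.
        HTTPServer(("", 8080), PingHandler).serve_forever()
    ```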

  7. It seems now that NAT routers have become so ubiquitous, even for large organizations, that the need for IPv6 has largely gone away, or at least receded far into the distance. Even now that IPv6 is implemented in major OSes and in routers, there’s very little push to make the jump. NAT gives us a great security layer too, so many admins would be very reluctant to go back to static addressing.
