I think my list of tasks is finally approaching a manageable level, hopefully giving me a bit more time to post what should be at least weekly content. Up this week is our recent position paper on using the size of HD content as a natural form of DRM. While the paper unfortunately did not make it into NSPW, the topic is interesting enough that we have converted it into a technical report and wiki-fied it. If anyone has a good LaTeX-to-wiki converter, please let me know.
The thrust of the paper is relatively simple: is the size of high-definition content (30+ GB) sufficient to act as a natural DRM and deterrent to file sharing? Taking a cue from music sharing, WAV file sharing certainly existed, but the files were fairly large and bandwidth at the time was exceptionally bad. With the emergence of the MP3, all of that changed, making sharing much easier. In some sense, HD content follows a similar trajectory in that it is an order of magnitude larger than DVD content and two orders of magnitude larger than CD-focused content.
If one looks at the general dynamics of seeding a movie, say via BitTorrent, a fully symmetric 100 Mb/s link would take roughly 40 minutes to grab a 30 GB Blu-ray disc (12.5 MB/s net speed, ignoring headers, assuming TCP kept the pipe full, and ignoring the ramp-up in congestion avoidance). Now, while I would love for 100 Mb/s symmetric bandwidth to rapidly spread across the US, I simply do not see that happening any time in the near term. A typical DSL link cuts that bandwidth by roughly a factor of 10 on the downstream and 100 on the upstream, turning the 40-minute download into hours and the initial seed into days. While P2P helps spread that load, the file still has to get distributed out past that initial seeder. Given that one probably does not want to hang out too long on a torrent with pirated content (*AA actions) nor pay the potential bandwidth costs (size caps via Time Warner), is anyone going to be all that keen to share pirated HD content?
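For those who want to check the back-of-the-envelope numbers, here is a small Python sketch; the DSL speeds (10 Mb/s down, 1 Mb/s up) are just illustrative assumptions, not measurements from the paper:

```python
def transfer_time_hours(size_gb, link_mbps):
    """Hours to move size_gb gigabytes over a link_mbps link.

    Assumes 1 GB = 1000 MB and 8 bits per byte, and ignores protocol
    overhead and congestion-avoidance ramp-up, as in the post.
    """
    size_megabits = size_gb * 1000 * 8
    return size_megabits / link_mbps / 3600.0

disc_gb = 30  # rough size of a Blu-ray movie

# Symmetric 100 Mb/s link: about 0.7 hours (roughly 40 minutes).
print("100 Mb/s symmetric: %.1f hours" % transfer_time_hours(disc_gb, 100))

# Hypothetical DSL: ~10 Mb/s down, ~1 Mb/s up.
print("DSL downstream:     %.1f hours" % transfer_time_hours(disc_gb, 10))
print("DSL upstream seed:  %.1f hours (%.1f days)" % (
    transfer_time_hours(disc_gb, 1),
    transfer_time_hours(disc_gb, 1) / 24.0))
```

With those assumed speeds the download stretches to roughly 7 hours and the initial upstream seed to nearly 3 days, which is the asymmetry the argument rests on.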
Ah, but what about smaller content, as several of the reviewers raised? Sure, feel free to DRM away, but the premise of the paper is that for HD content, it doesn't add all that much. With the analog hole (for any content) and nearly every DRM mechanism being cracked shortly after release, why bother? Can disc copying still occur? Most definitely, but it is more like copying the analog tapes of old rather than the easy, quick file sharing of today. Is it easier to copy a disc or just to loan it? My thinking is that the economics of sharing (size caps on the upstream) will be a far more effective deterrent to sharing than DRM will ever be. Moreover, as the paper mentions, it is in the interest of the ISPs to try to swat down the heavy tail, or at least convert that heavy tail into a net economic gain. We've already seen this with the recent brouhaha over P2P throttling, and as much as one would like the all-you-can-upload buffet to continue, those days are likely numbered, at least for the near term. It is quite easy to pick out the heavy tail on the upload and keep the "good" netizens from being penalized (good as in profitable subscribers to an ISP).
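To make the cap-economics point a bit more concrete, here is a rough sketch; the 40 GB monthly cap and the per-subscriber upload figures are purely hypothetical numbers I picked for illustration:

```python
from statistics import median

# Hypothetical monthly upload cap in GB; actual ISP caps vary widely.
monthly_cap_gb = 40
disc_gb = 30

# A single seeded Blu-ray eats most of the cap; DVD-sized rips barely register.
print("HD discs per month under cap:", monthly_cap_gb // disc_gb)    # 1
print("4.7 GB DVD rips under cap:   ", int(monthly_cap_gb // 4.7))   # 8

# Spotting the heavy tail: hypothetical per-subscriber monthly uploads (GB).
uploads_gb = [0.5, 1.2, 0.8, 2.0, 95.0, 1.1, 120.0, 0.3]
typical = median(uploads_gb)  # 1.15 GB with these numbers

# Crude rule of thumb: flag anyone uploading an order of magnitude more
# than the typical subscriber.
heavy_tail = [u for u in uploads_gb if u > 10 * typical]
print("Heavy-tail uploaders (GB):", heavy_tail)  # [95.0, 120.0]
```

Even this crude cut shows how cheaply an ISP could separate the handful of heavy uploaders from the profitable bulk of its subscribers.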
Long story short, it is an interesting topic of discussion in general with regard to design. Interesting follow-on questions include quantifying the total energy cost of HD DRM, or comparing it to the computation spent on Folding@home or SETI@home. Alternatively, could one embed multiple signatures from the source side (à la steganography) to assist with tracing the origin of shared content (40 GB is a lot of space to hide stuff, but can it be done fast on the production side)? And at what point do link speeds become fast enough that DRM might actually need to be imposed, i.e., what is the cutoff (if any) where DRM becomes necessary?
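One rough way to frame that last question is to invert the earlier arithmetic: pick a download time at which a 30 GB disc stops feeling bulky and solve for the link speed. The target times below are illustrative assumptions, not thresholds from the paper:

```python
def required_mbps(size_gb, target_hours):
    """Link speed (Mb/s) needed to move size_gb gigabytes in target_hours."""
    return size_gb * 1000 * 8 / (target_hours * 3600)

disc_gb = 30
for hours in (24, 8, 1, 0.25):
    print("%5.2f h download -> %6.1f Mb/s" % (hours, required_mbps(disc_gb, hours)))
# With these numbers: ~2.8, ~8.3, ~66.7, and ~266.7 Mb/s respectively.
```

In other words, once symmetric links comfortably exceed the ~67 Mb/s needed for a one-hour transfer, the "natural DRM" of sheer size starts to evaporate.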
Monday, June 16, 2008