
Saturday, July 9, 2011

Apple's latest patent foe: the World Wide Web Consortium (W3C)

It's not like Apple needs more hassle on the patent front. Think of its "global war" with Samsung (eight venues, six countries, three continents), its disputes with HTC and Motorola, the unresolved Kodak problem, Lodsys's lawsuit and assertions against app developers, and a few dozen trolls, one of whom was just handed a jury verdict worth $8 million for the infringement of two playlist patents.

And now the vaunted W3C (World Wide Web Consortium), which has hundreds of corporate members, has launched a frontal assault on one of Apple's patents and one of its patent applications with a public call for prior art.

Hat tip to Anant Puranik, who pointed me (via Twitter) to this informative post on his PatentsInd blog, which covers patent matters in India.

The related announcement by the W3C

Let me quickly copy the text of the W3C announcement. Don't worry if some of it is not immediately clear to you because this is related to how standardization bodies in general and the W3C in particular operate. I'll explain further below what this is all about. Here's the text:

This is a public call for prior art on patent Application No. 11/432,295 and on Patent 7,743,336. The W3C seeks information about access control systems available before October 2005 and content distribution systems before April 2006 that offer a viable solution that may apply to the use of access requests policy in Widgets. People who wish to provide feedback should refer to the call for prior art for more information. On 13 November 2009, pursuant to its rights under W3C's Patent Policy, Apple, Inc. disclosed US Published Patent Application No. 11/432,295 and US Published Patent Application 11/409,276 and claimed that it applies to the Web Applications WG's Widget Access Request Policy specification. Apple excluded all claims from the W3C Royalty-Free License commitment of the W3C Patent Policy given by Participants of the Web Applications Working Group. In accordance with the exception procedures of the Patent Policy, W3C launched a Patent Advisory Group (PAG) to determine possible solutions. The PAG has advised W3C to issue this call for prior art.

The W3C has a rigid "royalty-free" licensing policy. If patents are essential to its standards (meaning you can't implement the given standard in any reasonable way without infringing such a patent), they must be made available "to all, worldwide, whether or not they are W3C Members" on a royalty-free basis. The W3C allows patent holders to impose certain conditions, but generally, if you make a patent available to a W3C standard, you have to give up most of your rights. By contrast, most standard-setting organizations operate under a so-called FRAND (fair, reasonable and non-discriminatory) -- or just RAND, without explicitly stating "fair" -- regime. FRAND commitments ensure the availability of patents but patent holders still reserve far more rights than under the W3C's policy.

Apple wants to be able to sue over those patents

Apple is a member of the W3C and, in that role, disclosed the fact that it holds one U.S. patent and one U.S. patent application that Apple believes read on the W3C's "Widget Access Request Policy" specification. At the same time, Apple exercised its right to withhold those intellectual property rights. In other words, Apple refuses to make those rights available on the W3C's liberal terms. Simply put, Apple doesn't want to be restricted in any way and may want to assert those patents in its various lawsuits.

This means the W3C can't formally adopt the "infringing" specification because its rules require patent-free or at least royalty-free standards. For now the Widget Access Request Policy is just a candidate recommendation, not a final specification yet.

If a patent holder refuses to accept the W3C's terms, the W3C may try to have that patent invalidated (or a patent application rejected). If that effort succeeds, the specification is, again, patent-unencumbered. If not, the W3C can still evaluate possible workarounds or, if there's no workaround, give up on a standard.

In this case, the W3C hopes to do away with Apple's relevant patent and patent application. It's an unpleasant situation for the W3C to have to confront one of its members, especially such a large and powerful one, but sometimes this can't be avoided.

Another W3C patent fight: video codecs

This isn't the only issue on which Apple favors the rights of patent holders over unencumbered standards: the W3C's rigid "royalty-free" policy is also a big problem in the debate over video codecs -- MPEG LA vs. the Google-led WebM. MPEG LA is a licensing agency for many patent holders, not to be confused with the MPEG (Moving Picture Experts Group) standard-setting organization. MPEG has plans for a royalty-free video codec specification. The market-leading AVC/H.264 standard comes with royalty obligations except for limited fields of use. MPEG LA collects royalties on behalf of the holders of many (possibly all) patents essential to AVC/H.264. Google and its allies would like to turn WebM into a W3C standard (in fact, into the default video codec under HTML 5), claiming that it's "royalty-free".

However, Apple and other major players oppose Google's proposal. There's serious doubt that it's truly unencumbered by third-party patents. Since there are now 47 Android-related patent infringement lawsuits going on by my count, and frequent reports of patent holders demanding (such as Oracle) or actually collecting (such as Microsoft) royalties from Android device makers, Google has lost its credibility whenever it claims that its "open source" technologies are truly unencumbered.

With the strategic importance that patents have now (also evidenced by the $4.5 billion paid for Nortel's patent portfolio), I guess the W3C is going to find it increasingly difficult to develop standards under its policy. There's still going to be some interest among industry players in the W3C's ability to develop its standards, but a company like Apple is certainly not the most generous contributor of patents to "free" standards, to put it mildly. And when you're embroiled in so much litigation, it's probably a bad time to ask you to give up rights that you may want to assert against the likes of Samsung and Motorola. That's the problem.

If you'd like to be updated on the smartphone patent disputes and other intellectual property matters I cover, please subscribe to my RSS feed (in the right-hand column) and/or follow me on Twitter @FOSSpatents.


Monday, May 10, 2010

{Video codecs} Food for thought

This post is the third (and last one) in a three-part series on video codecs. Click here for the first post in the sequence, "The HTML 5 dimension", or here for the second post in the sequence, "Accusations flying in the aftermath of Steve Jobs' email".

The patent thicket problem

A couple of weeks ago I stated my conviction that there's no such thing as an open video codec that can be guaranteed to be unencumbered by patents. I regret to say this, but as I wrote then, the field of multimedia formats is a true patent thicket. Ed Bott, a ZDNet blogger, counted 1,135 patents from 26 companies just in the H.264 pool. That is only one of the multimedia standards MPEG LA commercializes, and there are patent holders who don't work with MPEG LA but who may also have rights that are relevant to Theora (or VP8, for that matter).

Those 1,135 patents refer to registrations in a total of 44 different countries (with different numbers in each country). So there are certainly many duplicates in terms of the scope of the patent claims. Nevertheless, even just a fraction of 1,135 is still a huge number considering that it's about a single codec.

Contrary to a popular misconception, patent law doesn't stipulate a 1-to-1 relationship between patents and products. In the pharma sector there is sometimes only one new patent on a given product, or maybe two or three. In software, every little step of the way is, at least potentially, patentable. That's why even a codec like Theora or VP8, despite its different background, might infringe on some MPEG LA patents.

The Xiph.Org Foundation's president, Christopher 'Monty' Montgomery, wrote that if Steve Jobs' email was real, it would "strengthen the pushback against software patents". I'm afraid the existing pushback isn't nearly strong enough to bring about political change in that regard (because small and medium-sized IT companies aren't truly committed to the cause), but now that more and more people look into the threat that patents pose to FOSS (and other software), there will be greater awareness of the patent thicket problem and of the fact that patent law creates huge numbers of little monopolies rather than protecting complete, functional products or technologies.

The question of relative safety in patent terms

I agree with the FSFE's president, Karsten Gerloff, that "[j]ust because a standard calls for licensing fees does not mean that the users are safe from legal risk". It's true that there might even be patents that could be asserted against H.264 but aren't under the control of MPEG LA, for reasons such as the ones I outlined recently.

However, given the extremely widespread commercial use of H.264, including by some of the prime targets of patent trolls, the fact that no such patents have been asserted against H.264 licensees so far certainly makes a number of people reasonably comfortable. While there is also significant use of Theora (and related technologies), its adopters aren't nearly as attractive targets for patent trolls as the users of H.264. Besides patent trolls, there are those large commercial patent holders, and as I explained before, the MPEG LA pool is so big that problems for Theora are in my opinion not outside the realm of plausibility.

The question isn't how attractive a target Theora has been so far. If it were elevated from its current status to a part of the HTML 5 standard, we'd be talking about a commercial relevance that is easily 100 times greater.

The need for consensus in the HTML 5 standard-setting discussion

HTML 5 is an extremely important leap forward, and the W3C certainly wants to achieve a consensus that results in consistent support by the major browser makers. This is also in the interest of web developers and website operators. If the W3C imposed Theora as the standard video format over the concerns voiced by the leading proprietary browser makers, this could result in inconsistent implementations of what should become a common basis for all browsers. That, in turn, would mean a lot of potential hassle for the web community.

The market relevance of Apple and Microsoft is significant enough that even without the patent uncertainty argument the preferences and positions of those vendors must be taken into account by the W3C. Their support for H.264 doesn't mean they leverage their relevance as platform companies in order to push another product of their own. H.264 is a multi-vendor patent pool, and of the 1,135 patents in the H.264 pool, Apple contributed only one and Microsoft only 65 (less than 6% of the total), according to Ed Bott's count. Both companies are also H.264 licensees and, quite plausibly, net payers (getting charged more license fees for their own use than the share of MPEG LA income that they receive). They may very well have strategic reasons for which they favor H.264, but that would be another story.

The burden of proof in the HTML 5 standard-setting discussion

I can understand the frustration of FOSS advocates and, especially, the Xiph.Org Foundation that some companies make references to uncertainty surrounding patents Theora may infringe without telling the public which those patents are. At the same time, I don't think anyone could have expected Steve Jobs to include a list of patents in that email about open-source codecs, which was just a high-level explanation of his views.

Unfortunately for Theora's developers and other supporters, there is no such thing as a burden of proof on browser makers saying they're uncomfortable with Theora because of patent-related uncertainties.

If the proponents of Theora want to disprove the "uncertainty" argument, they can't just refer to the fact that nothing has happened yet. If Theora were elevated to a part of the HTML 5 standard, the resulting adoption would represent a fundamental change of the situation.

Unfortunately, it's easier to make the case for than against a possible infringement of patents by a given piece of software. If a patent holder wants to document an infringement, there are different formats for doing so, the most popular one being a so-called claim chart. If the Xiph.Org Foundation and its allies now wanted to show that Theora doesn't infringe on any patents, they would theoretically have to look at every patent out there.

That wouldn't be possible, but how much of an effort would be reasonable?

Under normal circumstances I believe one couldn't expect an open-source project to undertake any patent clearance of this scale. However, if companies such as Google and Opera and a formal non-profit with very deep pockets such as the Mozilla Foundation push for a standards decision with far-reaching implications for the whole industry, then I don't think it would be unreasonable to expect that they should at least look at the patents in the MPEG LA pool and perform patent clearance for Theora with respect to those.

As long as they don't make that kind of reasonable best effort, their argument about Theora being patent-safe amounts to "trust us". I said that I agree with the FSFE that the availability of a patent pool doesn't guarantee that the pool is complete. Nor does the opposite situation (developers electing not to take out patents) guarantee anything.

I don't know what Theora's proponents and opponents laid on the table in internal discussions at the W3C level. What I just wrote is based on the public debate. Also, what I wrote about Theora would equally apply to VP8 if Google proposed its inclusion in HTML 5 (after possibly open-sourcing it).

Is H.264 licensing a practical alternative for FOSS?

I asked MPEG LA, the patent pool firm that manages H.264 and other codecs, whether it would -- hypothetically speaking -- be possible for Mozilla (the maker of Firefox) to license H.264 and then make it available to everyone on Free and Open Source Software terms including the right for users to include the code in derived works. This is the answer MPEG LA gave me:
MPEG LA’s purpose is to provide voluntary licenses of convenience to users enabling them to have coverage under the essential patents of many different patent holders as an alternative to negotiating separate licenses with each. The licenses are nonexclusive and limited to coverage in connection with the applicable standard (e.g., AVC/H.264) being licensed. Therefore, although MPEG LA does not regulate this space directly, as you point out, users are not authorized to use the licensed technology beyond these limitations without payment of applicable royalties or other licenses from patent holders permitting such use.

Under our AVC License, the Licensee is the party providing the AVC end product in hardware or software. Therefore, for products where Mozilla is the Licensee, it would be responsible for paying the royalties and notifying users of the License coverage, and where other parties are Licensees, those responsibilities will fall upon them. In normal usage such as personal use, no additional License or royalty is necessary because applicable royalties are paid by the end product supplier, but additional License rights may be required where the codec is used for other purposes such as subscription or title-by-title sale of AVC video.
That answer doesn't mention Free Software or open source, but it clearly reaffirms that "users are not authorized to use the licensed technology beyond [certain] limitations without payment of applicable royalties or other licenses [...]", and such limitations aren't compatible with FOSS licenses. They clearly go against both the Free Software Definition and the Open Source Definition, with their respective prohibitions of discrimination against certain types of use and the requirement to allow such use free of charge.

While it's clear that code made available under a FOSS license couldn't practically implement H.264, the alternative approach would be for a FOSS browser maker such as Mozilla to include a proprietary plug-in in a distribution to end users. The proprietary plug-in would be installed automatically, but the license terms would make it clear that, unlike the FOSS code that is part of the same distribution, that part can't be incorporated into derived works without obtaining a license to the H.264 patent pool from MPEG LA.

Canonical (Ubuntu) and OpenOffice are comfortable with proprietary extensions to free software

Ubuntu maker Canonical has chosen that mixed free/unfree software approach. This caused some outrage among parts of the community (since it gave the impression of a FOSS company supporting H.264 against Theora), and Canonical had to justify its approach. I interpret Canonical's "clarifications" as a recognition of the fact that H.264 is commercially extremely relevant, while the company tries to maintain its FOSS image as much as it can.

There's a similar debate now concerning OpenOffice, for which there are free as well as unfree plug-ins and certain FOSS advocates would like unfree ones to be excluded from the project's official list of extensions. Bradley Kuhn, a Free Software Foundation board member, expressed his personal views in a blog post, "Beware of proprietary drift". It seems the Free Software Foundation lost this argument and the OpenOffice project will continue to welcome extensions that aren't Free Software.

While patents aren't explicitly discussed in the OpenOffice context, this is clearly an example of where things may be heading, contrary to the FOSS purism some people advocate. Proprietary extensions to OpenOffice could also contain patented elements.

Will the W3C at some point have to depart from its royalty-free standards policy?

My prediction is that there won't be a solution for an HTML 5 video codec that proprietary and FOSS-oriented vendors can reach consensus on. The current diversity of codecs and plug-ins is suboptimal but acceptable: it certainly hasn't prevented web video from becoming extremely popular. So there isn't really a pressing need to converge on a single standard for now.

In the long run it remains to be seen whether the W3C can maintain its royalty-free standards policy. That approach has been key to the success that web technologies have had so far, but it could, as the situation concerning codecs demonstrates, increasingly impede progress.

In the early days of web technology development, there wasn't much attention from big industry, nor from patent trolls. Hence it was possible to create patent-free standards.

The kind of technology created at that time was also far simpler than today's advances in online media. The field has become very sophisticated, which has many implications including the consequence that patent thickets related to new web technologies will reach previously unseen heights in terms of size and density.

In this earlier blog post I wrote, under the subhead "The FOSS way of innovation exposes all FOSS to patent attacks", that patents reward the first to patent a new idea, while FOSS innovation is usually of a different kind (with a few exceptions). That's also an important kind of innovation, but it's not favored by the patent system and may therefore not be a sufficient basis for future web innovation.

HTML may become like GSM, at some point requiring licenses to large numbers of patents

As the web advances in technological terms, and given that software patents are extremely unlikely to be abolished in the largest markets anytime soon, the W3C may in a matter of only a few years feel forced to revisit its standards policy.

It takes licenses to thousands of patents to build a GSM phone, and at some point it may take licenses to large numbers of patents to build a fully functional HTML web browser. I'm afraid it's only a question of when, not if, it will happen.

If you'd like to be updated on patent issues affecting free software and open source, please subscribe to my RSS feed (in the right-hand column) and/or follow me on Twitter @FOSSpatents.

{Video codecs} Accusations flying in the aftermath of Steve Jobs' email

This post is the second in a three-part series on video codecs. Click here for the previous post in the sequence, "The HTML 5 dimension", or here for the next and final post in the sequence, "Food for thought".

After Steve Jobs made a thinly-veiled threat of patent enforcement against Theora and other open-source codecs, two key players from the Xiph.Org Foundation (the organization behind Theora) responded publicly. Its founder, Christopher 'Monty' Montgomery, sent his quick comments to the media (I also received them from him directly when emailing him after seeing Steve Jobs' email). His colleague Gregory Maxwell, the Theora project leader, sent his reaction to a public mailing list. A few days later, Karsten Gerloff, the president of the FSFE, stated his opinion on his blog.

The two Xiph leaders and the FSFE president took different angles but all of them doubted that Steve Jobs' threat had any substance. They used different terminology ranging from "blackmail" to (in a semi-hypothetical context) "jackbooted thugs". Those are hard words, but are they backed up by hard facts? Let's look at them one by one.

Are those patent holders dogs that bark but don't bite?

The official Xiph.Org statement starts by mentioning a long history of veiled patent threats against Ogg multimedia formats, ten years ago with respect to Ogg Vorbis (the audio format) and in recent years against Theora (the video format from the same family). Monty then concedes that this time it might "actually come to something", but he won't worry until "the lawyers" tell him to.

If the veiled threats Monty refers to appeared vain in the past (since no legal action against those open-source codecs was actually undertaken), I can understand the Xiph.Org Foundation's wait-and-see approach. However, a famous Spanish proverb says (in a literal translation) that "the pitcher goes to the well so often that it ultimately breaks."

For whatever reasons, one of which may be the fact that suing open source over patents hurts a company's popularity among software developers, certain patent holders may have refrained from legal action in the past, but we may now have reached (or be nearing) a point where at least some of the relevant patent holders are indeed prepared to strike. A reluctance to do so need not be an impediment forever. When weighing the pros and cons of legal action, patent holders may come down on the "no" side in one year and on the "yes" side a few years later, under different market circumstances.

One field that is very litigious -- and for which HTML 5 and video are going to be fairly relevant -- is the mobile communications sector. Apple and Nokia are suing each other in different courts in parallel. Apple is suing HTC. Those actions are real and give cause for concern that the concept of the mobile web may also bring mobile sector-like litigiousness with it.

FSFE president Karsten Gerloff, in an effort to question the substance of Steve Jobs' infringement assertion, suggested that patent holders -- especially some of those who have contributed to MPEG LA's H.264 pool -- only make unspecified threats and are too afraid to actually take their patents to court (which could result in patents being invalidated over prior art or in a court interpreting a patent claim more narrowly than its owner does). I understand his motives and they are good, but I have a different impression of how far Apple is willing to go. Just in its litigation with HTC, which is not the only dispute to which Apple is a party as we speak, Apple is asserting 20 patents.

Karsten makes a similar claim about Microsoft and the possible infringement of some of its patents by the Linux kernel. But it's not hard for me to imagine that there may be (easily) hundreds of Microsoft patents that have the potential to read on the Linux kernel. The ones that are most frequently heard of, the FAT patents, have survived various patent-busting attempts due to the way patent law unfortunately works, a fact on which I reported recently.

I strongly doubt that companies of the nature and stature of an Amazon or HTC would pay Microsoft patent royalties without substance, merely on the basis Karsten speculates about. Those companies have nothing to gain from a press release confirming royalty payments to one major patent holder (even without specifying details, which simply isn't usually done). Such an announcement can actually encourage others who believe they have patents reading on GNU/Linux to try to collect royalties from the same licensee.

All rights holders prefer to achieve their objectives without suing, which is always just a last resort, but that doesn't make it safe to assume they aren't prepared to sue, especially if they have already proven that they are, or, like Apple, are proving it right now.

Is there an antitrust problem?

Monty and Gregory (both of the Xiph.Org Foundation) allude to antitrust issues in their statements, but I can't see any problems in that regard.

Monty says about MPEG LA that "they assert they have a monopoly on all digital video compression technology, period, and it is illegal to even attempt to compete with them." Monty notes they don't say exactly that, but it appears to be how he interprets their past statements on these kinds of issues.

Assuming -- just for the sake of the argument -- that MPEG LA's patent pool indeed does cover so many codec-related techniques that no one can build a competitive codec at this stage without infringing on at least some of those patents, that would (in case it's true) constitute a monopoly. However, in that case the only obligation that regulatory authorities could impose on MPEG LA under competition rules would be to make its IP available on a RAND (reasonable and non-discriminatory) basis. In other words, they can charge something (there's no way that competition law could justify an expropriation without compensation), but they aren't allowed to overcharge.

When Steve Jobs wrote that a patent pool was being assembled to "go after Theora" and other open-source codecs, he didn't say that the objective would be to shut everyone else down. This could also simply mean collecting royalties from those using that technology. As long as those royalties are RAND, there wouldn't be any anticompetitive behavior, but Theora would lose its royalty-free status. It could still compete, but the playing field would look different from the way Theora's proponents describe it as of now.

Gregory's email statement quotes a US Department of Justice statement according to which licensing schemes premised on invalid or expired intellectual property rights cannot withstand antitrust scrutiny. I can't see how that reduces in any way the legal risk for Theora and its proponents. I assume that there are, unfortunately, large quantities of valid and non-expired patents related to codecs.

I also can't think of any legal theory under which patent holders forming a pool to assert rights against Theora would have to contact the Xiph.Org Foundation beforehand. Not only is there no legal obligation, but I also think that if there are patent holders who (unfortunately) own patents reading on Theora, they are free to coordinate their efforts and present a united front to Theora's supporters.

The term "anti-competitive collusion", which appears in Gregory's email as one of the possible explanations for what's going on, is unclear to me. While my sympathy lies with the open-source project, the question here is simply what would or would not be legal if undertaken, and on that question I reach, to my own dismay, a somewhat different conclusion.

Is there a risk of H.264 becoming too expensive?

Karsten (FSFE) is afraid of a future H.264 "lock-in" and the cost increases this could result in:
It hardly takes economic genius to determine that when enough people and works are locked into H.264, the MPEG-LA will have every incentive to start charging any fee they please. (Oh, and don’t you dare use that expensive camera for professional purposes. Your H.264 license is purely for non-commercial use.)
Lock-ins can indeed come with a hefty and ever-increasing price. The mainframe hardware market, in which IBM has a monopoly, is a good example: for a given amount of RAM, the price is 60 times what it is for an Intel-based PC.

However, in the specific case of H.264 and the license fees charged by MPEG LA now and in the future, there are assurances that a scenario of "charging any fee they please" (as Karsten wrote) won't happen.

As I explained further above, if MPEG LA had a monopoly because any video codec (at least any codec that would be competitive in today's market) needs at least some of their patents, then antitrust rules would require RAND pricing. Otherwise, if those patents don't cover the entire field, there could and would be competition, which would gain traction in the market especially in the event of price hikes.

One must also consider that MPEG LA's current pricing is very far from "any fee they please" (even though in a perfect, software-patent-free world the price would be zero), and they have promised to keep future price increases within certain limits. To those who are interested in those pricing questions, I can strongly recommend Ed Bott's ZDNet blog post, "H.264 patents: how much do they really cost?" His analysis contains a number of good points that are consistent with my own analysis of the information available on MPEG LA's website. While controversial (starting with its headline), his blog post "Ogg versus the world: don't fall for open-source FUD" is also quite interesting.

Having analyzed in this post some of what's been said in the debate, I will outline some of my own thoughts in the following post, including what I believe the W3C may have to consider at some point.

{Video codecs} The HTML 5 dimension

This post is the first in a three-part sequence on video codecs. Click here for the next post in the sequence, "Accusations flying in the aftermath of Steve Jobs' email".

All of the recent activity surrounding video codecs is undoubtedly related to the question of which codec(s) will become part of the HTML 5 web standard or, in the absence of an official standard, will evolve into a de facto standard in connection with HTML 5.

HTML 5 will be the first version of the web markup language to have <audio> and <video> tags, along with scriptable (DOM) interfaces for controlling media players.

W3C requirement for patent-free (or at least royalty-free) standards

The W3C has so far only allowed standards definitions that are, to the best of the W3C's knowledge, unencumbered by potential obligations to pay patent royalties. The W3C recognizes the important role that Free and Open Source Software has played in the field of Internet infrastructure and would like as much Internet software as possible to be available free of charge.

The W3C's current set of requirements leaves only two kinds of options for W3C standards:
  • patent-free standards (which may be the case if a standard was published before anyone might have filed patents on it, in which case the standard could be used as prior art to invalidate patents filed by others later)

  • standards for which licenses to all relevant patents are available on a royalty-free basis
Licenses to the undisputed market leader in web video, the H.264 standard, are available from the MPEG LA patent pool firm on commercial terms. The need to pay royalties makes H.264, despite being a de facto standard, a non-option for the W3C under its current set of rules.

Browser makers divided into two (if not three) camps

Some browser makers, especially Mozilla, Google and Opera, would like an open-source codec such as Theora to become part of HTML 5. While the availability of those codecs on open-source terms seemingly ensures compliance with the W3C's requirements, proprietary vendors such as Apple and Microsoft consider the patent situation surrounding such formats unclear. They are, however, comfortable that those who obtain a license to H.264 from MPEG LA are reasonably safe from patent hold-up.

Microsoft's Dean Hachamovitch stated the following in a recent post to the official Internet Explorer blog:
The biggest obstacle to supporting more than H.264 today is the uncertainty. When there’s industry consensus and confidence that the uncertainties are resolved, we’ll be open to considering other codecs. Until then, we’ll continue with our current plans to deliver great HTML5 video in IE9 with certainty for consumers and developers.
The proponents of Theora and similar formats believe the major proprietary vendors are just spreading FUD (fear, uncertainty, doubt) against the "open" approach. Accusations of this kind have just been leveled again following Steve Jobs' email. In the subsequent part of this sequence of posts, "Accusations flying in the aftermath of Steve Jobs' email", I will discuss the more recent statements.

At this stage, a further division into three camps (H.264, Theora and VP8) is actually more likely than industry-wide consensus in favor of one format. VP8 belongs to Google, which acquired its maker, On2 Technologies, this year.

Living in a multi-codec world

A single standard video codec for HTML 5 would simplify things. An HTML 5 web page could then include a tag such as
<video src="http://www.example.org/MyVideo"></video>
and the video (specified by its Internet address) would be played. It would have to be available on the server in only one format in that scenario.

In a situation in which different browsers have support for different codecs built in, web sites that wish to display video to users of different browsers will need to keep their video content available in all of the formats required to support all of the (relevant) browsers.

In order to make sure that browser A is provided with a video encoded in format X and browser B with the same video content encoded in format Y, the server will have to make a distinction. Since browsers tell a web server their name and version number, such a distinction is possible. The server can then either provide different versions of an HTML page -- with different addresses (URIs) attached to all those <video> tags in order to offer different files to different browsers -- or can use the same URI and then provide the video file in one format or another. Presumably it would be more efficient to provide different URIs (but then there can be problems if users of different browsers share not the URI of the web page but that of the video file itself).
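To illustrate that server-side distinction, here's a minimal sketch in Python -- a hypothetical WSGI handler whose browser-to-format mapping and file names are purely illustrative assumptions, not a description of any real site:

# Minimal illustrative sketch (hypothetical): pick a video URI per browser,
# based on the User-Agent header, and emit a browser-specific <video> tag.
def pick_video_url(user_agent):
    ua = user_agent.lower()
    if "firefox" in ua or "opera" in ua:
        return "/videos/MyVideo.ogv"  # assume a Theora/Ogg file for these browsers
    return "/videos/MyVideo.mp4"      # assume an H.264 file for the rest

def application(environ, start_response):
    # Tiny WSGI app serving an HTML 5 page with the browser-specific tag
    video_url = pick_video_url(environ.get("HTTP_USER_AGENT", ""))
    body = ('<video src="%s" controls></video>' % video_url).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body]

A real site would of course need more robust browser detection, but even this toy example shows why every video has to be kept on the server in each format that any supported browser requires.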

Absent an agreement on a standard HTML 5 video codec, plug-ins will continue to be relevant

Originally the World Wide Web Consortium (W3C) believed that standardized <audio> and <video> tags and interfaces could greatly reduce or even ultimately eliminate the need for media player plug-ins. However, the W3C's authority is limited: it is a well-respected and influential organization, but it does depend on support from the major browser makers.

Those couldn't agree last year on the inclusion of a standard codec in the HTML 5 specification.

As a result, it is expected that different browser vendors will make different decisions, and multimedia plug-ins are likely to continue to be relevant for some more time.

With plug-ins, it is possible to make all video codecs available for all browsers. Such plug-ins could support only one format each (in which case one would -- and could -- install multiple plug-ins to watch different formats), but the most popular ones, such as RealPlayer, support a multitude of formats anyway.

I understand Microsoft's position as saying that if third parties wish to display Theora videos in Internet Explorer, they can provide a plug-in for it if they wish.

FOSS advocates would obviously prefer the adoption of Theora as a standard by the W3C and all major browser makers. In that case, users of open-source software wouldn't have to install closed-source plug-ins in order to be able to watch web video in the most popular format (which right now is, and will probably remain for some time, H.264), and content providers could make Theora videos available on their servers without having to require Internet Explorer and Safari users to install a plug-in. But the adoption of Theora looks like a long shot now.

After analyzing some of what's been said in the debate in the following post, I will later outline my thinking (in the last part of this sequence of posts) on video codecs, including what I believe the W3C may have to consider at some point.