Friday, November 26, 2010

Tidy5 aka the future of HTML Tidy

UPDATE 2011-11-19: The most immediate of my concerns have been addressed by Björn Höhrmann, who has contributed basic HTML5 support to a forked version of Tidy, available on GitHub.

I have been a long-time fan of Tidy, a tool that cleans up HTML and performs some basic checks on the code. However, the tool is not really being updated any more, and since I have moved to using HTML5 and ARIA on all my new projects, it has lost much of its usefulness.

I also see no momentum picking up, and thus think we should consider folding Tidy into html5lib. By that I mean using html5lib to get Tidy-like functionality.

Today I wrote a mail that I cross-posted to the discussion list for Tidy and the help list for WHATWG. This blog post is essentially a longer version of that email.

Tidy must go HTML5

Here is the deal with HTML5: pretty soon every browser will have an HTML5 parser, and except for IE, browsers do not ship multiple parsers.

This means that tokenization and DOM tree building will follow the rules defined in HTML5 – as opposed to not really following any rules at all, since HTML 4 never defined them.

Simply put, there is no opting out of HTML5. An HTML 4 or XHTML 1.x doctype is nothing more than a contract between developers. Technically, all it does is put the browser into standards compliance mode.

Thus, I do not see any future in a tool that does not rely on the HTML5 parsing algorithm. Tidy cannot grow from its current code base; it needs the same html5lib at its core that powers the HTML5 validator, which is basically the same parser being used in Firefox 4.

Additionally, Tidy suffers from:

  • Implementing WCAG 1.0 checks in a world that has moved on to WCAG 2.0.

  • Not recognizing ARIA, which is an extremely valuable technology on the script-heavy pages of today.
  • Not recognizing SVG and MathML.

I know one can set up rules to enable Tidy to recognize more elements and attributes, but for full HTML5 + ARIA + SVG + MathML (and perhaps RDFa) coverage, that is simply not doable without superhuman effort.

The merge

A basic Tidy5 implementation could look like this (a minimal html5lib sketch follows the list):

  1. Parse the tag soup into a DOM.
  2. Serialize HTML from that DOM.
  3. Compare the start and the end result.
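
To make this concrete, here is roughly what those three steps could look like on top of html5lib's Python API. This is a minimal sketch, not production code; the serializer option names are taken from html5lib 1.x and may differ in other versions.

import html5lib
from html5lib.serializer import HTMLSerializer

def tidy5(soup):
    parser = html5lib.HTMLParser()
    tree = parser.parse(soup)       # 1. tag soup in, a DOM out, per the HTML5 parsing algorithm
    walker = html5lib.getTreeWalker("etree")
    serializer = HTMLSerializer(omit_optional_tags=False,
                                quote_attr_values="always")
    clean = "".join(serializer.serialize(walker(tree)))  # 2. serialize HTML from that DOM
    return clean, parser.errors     # 3. plus the parse errors collected along the way

clean, errors = tidy5("<p>Hello <b>tag <i>soup</b></i>")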

Perhaps error reporting could even happen during the parsing process itself; Henri Sivonen could probably answer whether that is possible.

However, there is also talk about a lint-like tool for HTML that goes beyond what the validator does. So in addition to the above, there could be settings for things like the following (several of which map onto existing html5lib serializer switches, as sketched after the list):

  • Implicit close of elements. Tolerate, require or drop all closing tags?
  • Implicit elements – tolerate, require or drop (maybe require body but drop tbody...)?
  • Shortened attributes – tolerate, require or drop?
  • HTML 4 style type attributes on <script> and <style> – tolerate, require or drop?
  • Explicit closing of void elements – tolerate, require or drop?
  • Full XHTML syntax (convert both ways)
  • Indentation. Preferably with an option so that block elements with very short text content are not broken up into three rows, as Tidy does today.
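
Several of these toggles already exist as switches on html5lib's serializer (option names as in html5lib 1.x); the tolerate/require/drop policies would need lint logic layered on top:

from html5lib.serializer import HTMLSerializer

serializer = HTMLSerializer(
    omit_optional_tags=True,           # drop closing tags the parser would infer (</p>, </li>, ...)
    minimize_boolean_attributes=True,  # checked="checked" becomes plain checked
    use_trailing_solidus=False,        # no XHTML-style /> on void elements
)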

Besides purification and linting, such a tool/library could be used for the following (a sketch follows the list):

  • Security. This will require the possibility of whitelisting and/or blacklisting elements and attributes, and preferably also attribute values.
  • HTML post-processing. This will let authors work with indented, explicit code, while such "waste" is removed before gzipping. It would be akin to JS minification, and it could be performed on the fly from within PHP, Python, Java, Ruby, C#, server-side JS or whatever. It could also be done manually before uploading from the development environment to production, or be integrated into the uploading tool!
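
For the security use case, html5lib already ships a sanitizer that can be combined with the parser. And the post-processing idea could start out as little more than a re-serialization pass; a rough sketch, again with html5lib 1.x option names:

import html5lib
from html5lib.serializer import HTMLSerializer

def minify_html(pretty_html):
    tree = html5lib.parse(pretty_html)
    walker = html5lib.getTreeWalker("etree")
    return "".join(HTMLSerializer(
        strip_whitespace=True,    # collapse the whitespace that only exists for readability
        omit_optional_tags=True,  # drop tags the parser would infer anyway
    ).serialize(walker(tree)))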

Checking templates

The main feature that Tidy has today is the ability to handle templates, by preserving/ignoring PHP or other server side code. To what extent the HTML5 parser can be modified to handle that feature I do not know.

From a maintenance and bug fixing point of view, I see huge wins in having a common base for Tidy, the HTML5 validator and HTML parsing in Gecko.

In fact, a very radical idea for Firefox (or any other browser using html5lib) would be to actually integrate these tidy-inspired features directly in their development tools, re-using the existing parser! A Firebug extension that lets me validate as well as tidy up my code directly within the browser would be super awesome!

But the actual possibility thereof is beyond my technical knowledge to evaluate, so I need to hear from people who know this stuff better than I do.

Integration with accessibility checking

Although automatic testing cannot substitute for manual tests, it can give a developer a ballpark idea of the accessibility of a page and help fix the most obvious mistakes.

The fact that Tidy today does integrate WCAG 1.0 checks is better than nothing, and any implementation of Tidy5 should strive to integrate WCAG 2.0 in a similar fashion. That really is a no-brainer. Having to use only one tool, and getting all errors in the same buffer (for programmers) or the same console (for manual checks), is certainly convenient.

OK, that was my two cents. What do you think?

Monday, October 18, 2010

How to know if you are watching a bad JavaScript tutorial for beginners

Answer: You are watching it. There are no good tutorials for beginners anywhere!

People who actually know enough to teach JavaScript properly either don't have the time to teach, since they get paid to work for clients, or, when they do teach, do so at conferences or seminars in front of their peers, not to newbies. That means that what's available on-line for beginners is taught by people who do not know enough to teach.

For me as a professional teacher, that means I have to devote considerable time to warning my students about bad habits, thus wasting valuable lesson minutes, which turn into hours. And there is always someone who missed a class and tried to compensate by Googling or YouTubing – something teachers normally would encourage – but who will turn in bad work that I have to give a lower grade. Thus, the worst case scenario is that students are actually punished for taking an initiative.

Bad pedagogy

One of the things I've learned developing for the web for 15 years, and teaching it for the last 9, is that it is way more effective to learn things the right way from the start, and then be warned about some bad or outdated habits to avoid. Learning bad stuff and then un-learning it will harm students' motivation and confidence (how can I trust anything I read?) and slow down the learning process.

Also, to a newbie it is not immediately apparent why a particular way of doing things is bad, so they tend to stay with their bad practice, even though they may hear that they shouldn't. "But it works!" Of course I answer something like "Yes, but it's fragile, not scalable, hurts performance and violates good style", but it will take a very long while until I can actually show them that this is the case, and a demonstration is worth more than a thousand words.

And habits, once formed, have a tendency to become automatic, instinctive. They are hard to un-learn.

Don't believe me? Look at the praise in the comments for this super crappy JavaScript tutorial on YouTube. People are saying things like "now I know how to do it", when they should be saying that now they know how not to do it.

What's out there?

My students belong to the YouTube generation. They prefer videos to words, demonstrations to articles, images to text. Thus, nowadays the first place I look for material to use is on sites like YouTube or Vimeo, but every single beginners' tutorial I've found so far is crap. They all exhibit most of the following features:

document.write()
All by itself, document.write() is bad and should be avoided. This is way more important than keeping the "hello world" tradition alive. If you really must do that, at least use alert(), since it does not alter the DOM and thus does not give a wrong impression…
document.write() gives the impression that JavaScript has a linear execution model and that it works like echo in PHP, or cout in C++.
The faster students get into an event-loop/event-driven programming mindset, the better!
Inline event handlers
As PPK says: "Don't use it."
Hiding JavaScript from old browsers using HTML-comments
Yup, in the year 2010 some wannabe teachers think you should still be nice to Netscape 1.2 and early versions of Mosaic! Monkey see, monkey do.
Setting the language attribute on script tags
Now that we are learning that even the type attribute is not needed any longer, I still see people use the language attribute.
It is OK to omit semi-colons at the end of a statement
This is telling students that it is OK to rely on automatic semi-colon insertion.

I teach this stuff for a living, and you really do not need any of the features listed above as an intermediate step in learning. When I teach my students, I only mention these in passing, saying that:

  • If you use them, I will fail you on the course, or at least give you a low grade. (Yes, I will!)
  • If you receive a tip from someone on a forum, or by Googling, suggesting their use, do not listen to that person any more. Learn to treat these as a sign of bad advice.

Which brings us right back to our problem. Where is the good advice?

There are some good things to read, but as I've said, I want videos as well.

What I do as a teacher today

I recommend watching the way too advanced videos, but only the first couple of minutes, originally meant by the speaker as a recap only. I tell my students to watch the first 20 minutes of this video with Robert Nyman and the first 18 minutes of this one with Douglas Crockford.

That is not something they do instead of listening to me or reading, but this semester I have occasionally been called away on other business and also missed a few days due to illness (nothing serious).

If not for these, I might as well let my Swedish students look at this one. At least they won't be taught something bad!

(No, I do not speak Hebrew and I have no clue if that is a good video or not.)

And I enforce the following rules:

  • You shall validate your HTML early and often.
  • You shall validate your CSS early and often.
  • You shall learn how to use JSLint and use it early and often.

Linting is not an advanced topic. It is an essential tool to encourage good habits for newbies. Besides, validation and linting are not something you do at the end of your development process just to get a stamp of approval. They are tools to help you along the way.

In a similar fashion, I introduce PHP_CodeSniffer very early in my teaching of PHP. In fact, with every year of teaching, the introduction of these tools has come earlier and earlier. My web design students this year were introduced to the HTML validator in the very first lesson that I devoted to HTML.

Cursing darkness or lighting candles?

This blog post basically is me ranting again. However, I have put in some considerable effort (if I may say so myself) to make the JavaScript teaching world a better place, through my work on the InterACT curriculum for DOM scripting for the Web Standards Project.

If you are somewhat knowledgeable about unobtrusive JavaScript, and the good parts of the language, please take a look at the competency table and provide a video explaining one or two bullet points from it. You do not have to be Douglas Crockford, but should at least have heard a couple of his speeches and have read his book.

In so doing you will not get the reputation of being a ninja or guru. You won't get a chance to show how clever you are, but guess what? Rookies eventually grow up to become intermediate programmers, and they may very well evolve to become rather advanced one day. They will go to conferences and hear you speak on the advanced topics – one day. If you did provide them with something of real value when they took their first steps, they might just value that experience and return to hear some more in the future.

Bonus teaching tip

Use the JavaScript shell while talking about basic things like types, variables, operators, expressions, statements and blocks.

Since the students are not in the browser environment, they are less likely to transfer bad habits to real work. The fact that they are in a shell makes it very clear that what they are seeing is not definitive. Even so, I will nevertheless point that out to them, again and again. One must never take stuff like this for granted, and as a teacher one always has to repeat oneself…

Oh yes, when teaching PHP I use the PHP shell as well.

The better version of this post

Chris Williams talks about how JavaScript education must be revolutionized (and other things) in this talk from JSConf.eu 2010. (transcript)

Saturday, September 4, 2010

Why H.264 is disqualified from being a web standard

In short: H.264 can never become a standard for web video as long as the patents are not released according to W3C patent policies.

The MPEG-LA consortium has so far shown no interest whatsoever in releasing their patents in a W3C-compatible way. Thus the question is answered: H.264 is not even a candidate for becoming a web standard. It can't win that race, since it's not even in the running!

The W3C patent policy

The goal of this policy is to assure that Recommendations produced under this policy can be implemented on a Royalty-Free (RF) basis.

You see, it's not a case of Mozilla and Opera being obnoxious. In fact they are only fighting for the same thing as the W3C is: an open web. Video should be open just like HTML, CSS and the DOM are open.

Yes, the W3C mandates that all standardized web technologies should be free for all and for all types of usage, and that as far as they are affected by patents, the owners of those patents legally commit to not stopping such usage.

But H.264 is implemented natively in the browser

Yes, it is. In fact there is no law against implementing anything that is not a standard or being considered for standardization. Google implements Flash, within the browser, but that does not make Flash a web standard.

H.264 is usable and, since the drivers are mature, technically still the best option for delivery on mobile platforms (more on that later). In fact, if the MPEG-LA consortium were to release their patents in a W3C-friendly way, it would make an excellent web standard. I don't see that on the horizon, though.

The fact remains that H.264 is a proprietary, patented and closed technology. Some vendors have bought themselves the right to use that technology and others perhaps could, but that is not the kind of freedom web standards should be made of. I find it very ironic that people fighting for free and open web standards for markup and stylesheets, for scripting and for graphics (PNG, SVG, etc) and for net neutrality and universal access are so quick to sell out their ideals when it comes to video.

Since H.264 video is implemented natively in some browsers, we can do stuff with it that we otherwise perhaps could not. But there is still precious little we can do that could not be done in Flash. Really. At least when you look at the end result, not at how it's done.

Don't get me wrong, I like having native video in the browser, but native does not equal open.

An aside: The bigger issue

There is one ideal solution to this problem, of course. The USA should change its patent system, which is flawed and broken beyond usefulness. Patents are granted for user interface ideas, algorithms and all kinds of obvious stuff.

If I were an American, I'd write my congressman and ask him or her what they are doing about this. And if I were not content with that answer, I'd vote for somebody else, and I'd let everybody know I did. If the USA changed its laws, most of the world would follow.

However, since a change in US patent laws is not going to happen soon, we are stuck in this mess for the foreseeable future. So what do we really do about it?

Could the MPEG-LA consortium be persuaded to change its mind?

Here is an idea: Let's have Hixie add H.264 to the HTML5 spec and release that spec in such a way as to start the W3C patent clock. That would mean that any patent holder who feels that their patent is being infringed must protest.

There could be two outcomes. The MPEG-LA could show its true colors and protest, or it might succumb to the pressure and actually change its policies. The first alternative would perhaps silence everyone who thinks H.264 is free enough; the second alternative would really make H.264 free enough!

I doubt Hixie would include H.264 in the spec in order to float a balloon like this, though. But it's a fun thought.

The real solution: Solve problems that can be solved

The one strong argument in favor of H.264 is hardware acceleration, especially on mobile platforms like phones, netbooks and pads. But bringing VP8 to a comparable state is within our grasp. The hardware acceleration problem can be solved and it is an easier problem to solve than flawed US patent laws or changing the minds of stubborn MPEG-LA patent bureaucrats.

In order to understand this we must consider two things: What exactly is hardware acceleration and what is the expected lifespan of a web standard compared to the lifespan of current chip sets?

I'll start with the former. Video codecs will probably improve over the next couple of years, regardless of whether they are standardized. Smart people will conjure up better ways to reduce file size while increasing quality, or at least improving one of the two without hurting the other too much.

The question thus becomes, has H.264 been implemented in the layout of the transistors of modern GPUs in such a way as to make any other algorithm, or any variation of the algorithm impossible? That is, are the calculations required to encode or decode H.264 implemented in silicon in every minute detail and will electrons flow from transistor to transistor in a sequence that exactly matches H.264 encoding or decoding?

If that's the case, we have really dug ourselves into a hole. If that's the case, we've made it impossible to improve anything at all! Since new ideas can not use the GPU, they are doomed to be bad ideas!

But since it still takes a whole lot of code to actually write an H.264 encoder or decoder, the answer is of course no. Hardware acceleration of H.264 is not a magic black box.

A GPU is just a slightly different processor, optimized for some kinds of arithmetic that a normal CPU is not. There is no magic to it. It's just a layout of transistors. In the 80's and early 90's most CPUs could not do floating point arithmetic effectively. One had to buy a separate piece of silicon to get that (the 8087, the 287 and the 387). IBM recently introduced a CPU that has a core for decimal (sic!) arithmetic and does cryptography in hardware.

It's actually not about doing some stuff in hardware as opposed to other stuff in software. Last time I looked, the CPU was a piece of hardware! It's a matter of letting the right piece of hardware perform the kinds of computational stuff it does best. It's a matter of writing and compiling your programs to use the integer part of the CPU when that's appropriate, the floating point part when that's appropriate, and the GPU when that is the most effective solution.

There is no technical barrier preventing VP8 or ogg/theora, or indeed any other software, from using the GPU. In fact, Microsoft is using the GPU to speed up core JavaScript arithmetic in Chakra. That's just one example of modern programs using the power of the GPU to do calculations that are not graphics related at all. So if that's possible, what says it's impossible to move arithmetic calculations to the GPU in the case of non H.264 encoded Video?

Mozilla has gotten CPU usage when decoding ogg/theora down from 100 % on the Nokia N900 to just 20 %. And the main thing preventing that number from dropping further is the fact that the sound is decoded only on the CPU. But that's an obstacle that can be overcome as well.

Lack of so-called hardware support for ogg/theora or WebM is in fact not really a hardware problem, but a software problem. The decoders (and encoders) have not been written in such a way as to optimally harness the arithmetic power of the GPU – yet! I expect this to change rapidly, though.

But maybe current hardware has been made with H.264 in mind, making it impossible for VP8 to fully catch up? Well, if the web industry shows clear support for the VP8 codec, AMD, NVIDIA and Intel will soon make some alterations to their transistor layouts in the next generation of chip sets, evening the playing field.

In a very short time we will see WebM video implementations that move enough calculation to the GPU to make them usable on portable devices, using today's silicon. But for the sake of argument, let's suppose that watching WebM video would drain the battery of your cell phone 10-20 percent faster than H.264. How bad is that? It is still within a reasonable limit, I say. And HTML5 still lets you provide H.264 as progressive enhancement to any client. But what's being argued (at least in this article) is what we should consider as a baseline, what can become a true standard for web video.

Let me say this as emphatically as I possibly can. Even if H.264 could be considered somewhat better than VP8 from a technical point of view, that still is not a good enough reason to let go of our freedom. Anyone who values a slight short-term technological advantage over long-term freedom needs a reality check and an ethical wake-up call!

What about submarine patents?

Microsoft and Apple keep talking about submarine patents, claiming they are a hazard to everyone implementing ogg/theora or WebM, and the MPEG-LA would like everyone to believe that it will soon smack down on the VP8 codec used in WebM video. Since not everyone smells the FUD, let's argue about this for a while.

If VP8 is indeed trespassing on H.264 patents, does that mean that anyone implementing a VP8 encoder or decoder can be sued? Could Microsoft be sued? Could Apple?

The premise for such a thought is that the patents for H.264 not only stipulate algorithms, but prohibit anyone licensing those patents from making any kind of alteration, not only to individual patents, but to the exact combination of those patents.

This is thus a legal variation of the hardware argument. It stipulates a lock-in mechanism to H.264 that prevents any kind of experimentation or improvement. All by itself that would be a bullet-proof case against H.264. Who would want to lock the web into such a solution?

But of course this is not the case. One may use individual algorithms from H.264 together with new or altered algorithms. Anything else would be plain stupid!

And since Apple and Microsoft are licensees of the MPEG-LA patent pool (as well as contributors to it, although Apple has not really contributed as much as Microsoft has), they are authorized to use those patents. They have bought themselves the right to write software that uses those patents! So even if we admit – for the sake of argument – that the VP8 codec indeed does infringe on H.264, what risk does that pose to Apple or Microsoft? None whatsoever!

If Mozilla and Opera are willing to take the risk of implementing VP8 without licensing anything from MPEG-LA, what risk is that to Apple? In what way is that a threat to Microsoft? Having bought themselves the right to use all MPEG-LA patents, that risk is absolutely zero.

Bottom line: MPEG-LA will not sue Apple if they implement the VP8 codec. Nor will they sue Microsoft.

(Of course, one option for Apple would be to let anyone submit any driver they'd like to iOS. If it were a truly open platform, we would see a WebM-enabled version of Mobile Safari tomorrow, without Apple lifting a finger, without Apple programmers having to write a single line of code!)

H.264 advocates can not both have their cake and eat it too

On one hand we hear that VP8 is so similar to H.264 that it probably infringes on the patents guarding that codec. On the other hand we hear that it is so vastly different that we can not get hardware decoding. But which one is it?

If the algorithms are so similar that there is a patent infringement going on, it goes without saying that GPU-accelerated VP8-encoded video can not be hard to implement. If that's the case, the silicon has already been wired to do these exact calculations.

If, on the other hand, the algorithms are so different that decent GPU acceleration is impossible, what makes anyone think that the MPEG-LA could sue you for using them?

I wish H.264 advocates would choose which of these two dangers we are supposed to be afraid of, because they are mutually exclusive.

Another example of mutually exclusive claims is that MPEG-LA supposedly owns so many patents that it is virtually impossible to write a video codec that does not infringe on them, combined with the fear that there might be some third party that is not participating in WebM or Theora video, nor in the MPEG-LA, but holds patents in secret, waiting for someone to implement them. A Paul Allen, but with an actual case. A troll with infinite patience that will strike just when WebM has taken off.

But if VP8 is so akin to H.264 that it infringes on their patents, what space would that leave for this third party troll? Very little I'd say.

Once again, I am not saying that one of these propositions is true. In fact I believe them both to be untrue. But I wish that H.264 advocates would agree on one argument, when mere logic dictates that one being true by definition means that the other one is not.

What kind of power do Apple and Microsoft wield within MPEG-LA?

Speaking of lawsuits, the MPEG-LA is a consortium, and it must act according to the will of its members. So if Microsoft and Apple really cared about open video, I have a suggestion for them. Use your muscle within the consortium that you are part of, and convince your fellow members that truly open video is a good thing™. Convince them to release H.264 in a W3C patent policy compliant way. Show us that you are submitting such proposals to the board, show us that you are arguing the case. Only then will your opinion be worthy of consideration.

Until that happens, H.264 can not be a web standard. Until that happens, it can in fact not even be considered for standardization.

Tuesday, August 10, 2010

Chrome's auto update argument has just been disabled

A short note: When I have criticized Chrome for putting out half-baked features on the web, the usual defense is that Chrome auto-updates and that the release cycle is so short that developers will not be locked in to a version where the feature is half baked. OK, but now Chrome is available as an MSI package, and that changes everything!

I was anticipating that move, having experience not only in web development, but also in network administration. The key to getting into any organization is deployment through Active Directory group policies. Setting up and deploying an MSI package is expensive and not something an organization will do on a bi-monthly basis, unless there is some really critical security fix needed. They will not do it in order to fix a broken HTML5 feature.

What does this mean?

If corporations, universities, schools, hospitals and other organizations decide to use Chrome, we will see a significant rise in the usage of old versions.

So providing an MSI package makes it more appealing for corporations to roll out Chrome, but it is still work to apply all local modifications, like setting proxies and the home page. That's a hurdle big enough to discourage most organizations from updating on every release. Such aggressive updating will not happen. Are we clear on that?

Ergo: Rolling out half-baked features in Chrome just became a really important problem for us all, and failure to see that on the part of the Chrome team is simply irresponsible. You can't have your cake and eat it too, and that simple rule applies to Google as well as everybody else.

Update

It seems that the Chrome MSI package is half baked as well! How ironic. It's basically just a wrapper around the install exe-file. Furthermore, this means I've not yet been able to determine if the auto-update feature is disabled. But if it is not, what sysadmin in his or her right mind would allow self-updating software on the computers of the network?

Thursday, July 15, 2010

Rotating table headers now in 4 of the top 5 browsers

Update May 3, 2012. Demo link fixed. + Firefox 14+ (Aurora, Nightly) no longer skews text. Apparently the CSS Working Group has recently decided that the Webkit behavior was better, and Mozilla has fixed their code. I think this was a bad decision, but that is how things stand. Other stuff seems to be happening in the CSS WG with regards to skew, and I have unfortunately been unable to keep track of it. Note that skewX does not seem to be in jeopardy, though.

A little more than a year ago I wrote about a technique I have developed to rotate table column headers. (If you have not read that article, you had better do so to understand this one correctly.) Back in May 2009, this could be done in Firefox 3.5 and Safari 4. Now browsers have evolved and it's doable in all major browsers except Internet Explorer. So, with Firefox, Chrome, Safari and Opera all supporting this, has the time arrived to use it on production sites?

The sad answer is no; there are still a few issues that need to be sorted out, in my opinion. First of all, one really can not ignore Internet Explorer, and even if we can rotate content using -ms-filter, that is not an optimal solution. I have also seen reports that these filters will not work in IE9 in its strictest standards mode. Removing them, while not adding CSS transformations, gradients and a few more things, will make it near impossible to achieve true cross-platform effects. I hope that won't be the case, though. (More thoughts on this in my conclusion.)

Updates to my code

The full demo is at keryx.se/lab/rotating-th/rotate-th-2.html

Today I revisited my code. The first thing I did was simply to slam on -o- prefixed rules, identical to all the -moz- rules. The result looked like this. Click the image to see it in full size.

Screenshot of Opera, Firefox and Chrome, showing bad alignment

Not nice. Firefox, Opera and Chrome (the 3 browsers I could test on my Linux driven Thinkpad) all computed the horizontal position differently. Admittedly, Chrome got a slightly different rule, thanks to a Webkit CSS filter. But this worked as intended in my tests one year ago. My code was largely experimental and not really calculated anyway, so I was not surprised that it broke. It was intended as a proof of concept, not as production ready code.

Before I started to investigate the differences in earnest, I tweeted, and soon Faruk Ateş chipped in with some helpful thoughts. First we removed my line "top: 1em" for all browsers. Note that it must be removed. Manually setting it to 0 will still mess things up in Firefox. I suppose that's a bug, since the calculated value is 0 with that line removed…

The line that was removed:

th > span > span {
    …
    top: 1em;
    …
}

Stability issues

Next problem: subpixels. I had used the em unit to set heights and widths. But 1.3em is not the same in all browsers. In my code it computes to 23.4 pixels (1.3 × 18px) in Firefox, but only 23 in Opera and Chrome. The latter two do not translate ems into subpixels, at least not on Linux and Windows. So I made some changes to the code, to use pixels almost everywhere.

th > span > span {
    …
    padding: 9px;
    height: 23px;
    width: 120px;
    …
}
td {
    padding: 5px;
    text-align: right;
    width: 36px;
}

All values above were set in ems in my original version. Now I was getting close to a working version. There was one thing that bugged my designer eye – and I am really not a designer. The line between the table columns did not align perfectly with the column header lines in Firefox. Once again, this was a sub-pixel problem. So I added this rule, explained in the comments:

th > span > span {
    …
    position: absolute;
    left: -0.5px; 
        /* 
          So far only Firefox does subpixel positioning = 
            ignored by Opera, Chrome and Safari.
          But they got this right (visually) in the first place.
          This rule puts the rotated span exactly in place for Ffox
          (tested on Linux and Windows July 2010)
        */
    …
}

Should Opera and/or Webkit add support for subpixel positioning, it is my hope that it will affect their rendering just like it does in Firefox. But this is a fragile hope!

Webkit text skew (not a) bug

Update May 3, 2012. Firefox 14+ now treats text the same way as Webkit.

To make the text as legible as possible it is skewed back to being non-skewed. Let me explain. There are three spans. The outermost is simply an anchor for the middle one, where the real magic happens. That span is rotated and skewed. That leaves the text a bit… skewy(?) To remedy that, I use a third span, only used to skew the text back again.

th > span > span {
    …
    -moz-transform: rotate(-65deg) skewX(25deg);
    -o-transform: rotate(-65deg) skewX(25deg);
    -webkit-transform: rotate(-65deg) skewX(25deg);
    -moz-transform-origin: 0% 0%;
    -o-transform-origin: 0% 0%;
    -webkit-transform-origin: 0% 0%;
    …
}
th > span > span > span {
    /* Rotate the text back, so it will be easier to read */
    -moz-transform: skewX(-25deg);
    -o-transform: skewX(-25deg);
    -webkit-transform: skewX(-25deg);
    /*
      Safari and Chrome won't skew back, so the above
      line is actually redundant right now
      (checked July 2010 on Linux and Windows)
    */
}

I suppose this is a Webkit bug that needs to be filed. This image illustrates the problem. The red line shows the actual angle of the stroke in the letter "l" (small "L"). The green line shows the angle it was supposed to have.

Text is still skewed in Safari

Screenshot of my table from Safari on a Mac, graciously provided by Matthew Irish. This problem affects all Webkit based browsers, on Windows, Linux and Mac.

Opera text blurriness and zoom bug

Opera gets the skewiness right. (Being a non-native English speaker, I love the word skew and will jump at every opportunity to skew it!) However, Opera loses the smoothness of the text once it has been rotated. It will look blurry. The following image compares Opera to Firefox. The text is not perfect in Firefox either, though.

An even bigger problem with Opera is that it really will mess the text up when zooming the page, to the point where it becomes totally illegible. The image below is zoomed to 300 %, and one can not read the text at all, since the bottom (= left) margin has widened and pushed the letters on top of each other.

Messed up text in Opera when zoomed

Gaps between cells in Firefox

The image above illustrates another problem in Firefox, perhaps also caused by subpixel positioning. At some zoom levels small gaps appear between the cells. Note that we are not drawing lines on the actual table cells but a box around the edges of a span. We are just visually emulating rotated table headers.

From previous testing I know these gaps also appear when not zooming the page. One has to be really precise in the measurements to fix this. Right now I basically get the visuals right by having the line from one cell sit on top of the line from the previous cell.

If I fine-tuned my technique, I think these gaps could be avoided. Not drawing lines on both the left and right (= top and bottom in CSS), and extending the top (= right) line a bit, might do the trick.

On subpixels

A small aside: It might seem like subpixel positioning is all bad. I believe it generally is a feature, not a bug. I wish all browsers could agree on Firefox's behavior. But in this context it seems to be problematic. I will ping a few people at Mozilla and see what their take on this is.

Conclusions and some thoughts about the future

Please see the top note about things being unstable in CSS WG, when it comes to skew.

I really think there should be a CSS rule that would make this super easy. Rotating column headers is a really common technique in spreadsheet programs like Excel and LibreOffice Calc. I use it all the time. It would be a great feature for Google and Zoho Docs and similar on-line products. So far, however, the CSS Working Group and browser vendors have shown very little interest.

All browsers display an issue of some kind with my current technique:

  • Killing: Bad rendering of the text in Opera, especially when zooming.
  • Bad: Gaps in Firefox between the cells, when zooming.
  • Slightly annoying: Skewed text in Safari and Chrome (and probably Firefox 14+).

Internet Explorer 9 is a big question mark. My current idea is to do capability detection for CSS transforms and replicate this behavior in SVG, if available, or using -ms-filter as a third option. That should cover all bases = MSIE 6-9, Firefox 3.5+, Opera 10.51+, Safari 4+ and Chrome (always at the latest version, at least until it comes to the corporate environment – a subject worthy of a blog post by itself).

Having to limit oneself to pixels as a unit, and to fixed-width columns, is a major obstacle. For this reason, as well as the Internet Explorer problems, I think that the only sane way of doing this at the moment and for the foreseeable future is using JavaScript. Perhaps this could be my first official jQuery plug-in. (Please feel free to beat me to it!)

Friday, July 9, 2010

No browser supports HTML5 yet. Part 2. Technology.

In the first post I questioned the perception that a particular browser supports HTML5 whereas other browsers do not, according to some misguided fan boys. I pointed out that browser vendors are quick to claim that a particular feature is supported, when in reality that support is half-baked and far from complete. And that is not a good thing!

History repeating itself

My main gripe with browser vendors not using solid shipping criteria for stable versions, but releasing flawed and incomplete implementations of HTML5, is that it may lead to flawed and incomplete web sites. The message from Google, Apple and numerous others is not "go forth and experiment", but "go forth and use. Today!" Knowing very well that the HTML5 input range in Chrome or Safari is not accessible, they still encourage actual real world usage. Knowing very well that Chrome exposed the JavaScript form validation API only, and had no real constraints for invalid data, the implementation was shipped – and lo, it looked good in the support charts!

So what happens when the spec changes? And what happens when real world usage will make needed spec changes not an option? Being responsible is way more important than being first. But once again, that does not look as cool on the support charts…

By putting half-baked implementations into non-beta software, Chrome and Safari actually do harm to the web. Why are we cursing Internet Explorer 6? Not for lack of innovation and standards support when it first shipped, but because that standards support was buggy and incomplete – that's right, it was half baked. And now we see Chrome and Safari releasing unfinished implementations as well.

In one way, we are at a better place right now. Chrome is aggressive in its updating of itself, and Safari versions also have a relatively short shelf life. But when real world web sites appear that lock in to today's versions and break when bugs are fixed, things will not look so rosy any more.

And please consider this. Back in the early 90's Netscape released new versions at an equally frantic pace. The web year was supposed to be five months, and every web year should mean a major update to Netscape Navigator. So JavaScript shipped long before the implementation was mature, and because of this we are still stuck with some really bad APIs and bugs. They could have been squashed given a few more months of time, but can not be squashed now, because sites soon depended on them to work correctly. As long as Chrome was a niche browser it might not have mattered, but now that it has received a decent and well earned market share, it will matter. (Not to mention the mobile space, where Webkit based browsers are ubiquitous.)

Exhibit A: Web forms

Web forms are really the starting point for all things HTML5. In spite of that fact, it is not until now that work has begun in earnest to implement all the new cool form features. Except for Opera, of course, who implemented most of this a few years ago!

The work to get the HTML5 additions to web forms into Webkit is tracked in bug 19264. The work to get it into Gecko is tracked in bug 344614 and on a wiki planning page. The most important part being the shipping criteria. (I subscribe to these bugs to keep myself informed, but then perhaps I don't really have a life…)

I could go on and explain details about what is lacking in browser A, B or C, but I will not make this blog post any longer than necessary. Look at the tracking bugs and see for yourself. At the moment, neither Webkit nor Gecko (nor Opera) is anywhere near having complete HTML5 web forms support. They all lack some features and they all lack accessibility. (The Webkit bug is a bit deceptive, since it does not seem to list all features like the Mozilla bug does.)

Exhibit B: Sectioning elements

Some new HTML5 elements seem really easy to understand. They are not hooked into browser behavior, JavaScript APIs or some other esoteric behind-the-scenes weirdness. They just seem to be turbo-charged divs, divs with real semantic meaning. Furthermore, they are a part of the HTML5 spec that is reasonably mature and thus ripe for implementation in browsers.

Yes, there is one much talked-about crux. It is not super duper easy to differentiate between article and section. Perhaps the spec will need some further explanations, or perhaps that will be taken care of in the for-dummies version.

But there is one further matter, a point that seems to be universally missed, even by browser makers! And that is the fact that sectioning elements affect the document outline, and that this should in turn be exposed to sighted users through varying sizes on the headings, and to non-sighted users through their assistive technologies. Headings are used when scanning a page – both by sighted and non-sighted users. Being able to tell at what level a heading is, is therefore a critical part of any implementation.

The real deal breaker is not whether you can set display: block on <article>. The real deal breaker is whether you can easily set an <h1> within any sectioning element to resemble <h2>, or set it to resemble <h3> if it's one step further down in the document hierarchy. Etc.

Being able to style a sectioning element is actually a bullshit claim. HTML5 mandates that one should be able to style any unknown element. That is, a browser vendor could claim that they support the <foobar>, <my_cat> and <steve_jobs_is_god> elements, since according to HTML5, they should! At least in the sense that they make those elements part of the DOM and therefore styleable through CSS.

-moz-any() to the rescue (somewhat)

The only browser to have any reasonable way of styling headers depending on how deeply they are nested within sectioning content is Firefox 4, currently in early beta only. This is done through the brilliant any() selector, implemented, as it should be, with a vendor prefix until all details are agreed upon.

(The hardest part to figure out, before this can go CR, is how this selector should handle specificity. What happens if you mix type, class and id selectors inside one parenthesis?)

Nevertheless, try to write the equivalent of the following CSS selectors so that they work in any browser but Firefox 4:


h1 {}
h2, -moz-any(section, article, nav, aside) h1 {}
h3, -moz-any(section, article, nav, aside)
    -moz-any(section, article, nav, aside) h1 {}

And so on. You will get tired of typing really quickly.

Thus, only Firefox 4 can claim to support HTML5 sectioning elements in any usable fashion. The point of these elements is not that they should serve as styling hooks. We already have <div> for that. The point is that they should affect the styling of headers, and that's simply not doable in a generic fashion in any browser but Firefox – so far!

But Firefox also has a long way to go, since work has not begun on the accessibility side of this. Until support for the any() selector is universal, these elements can only be used in experiments or with fallbacks, such as always using <h2> when one level down into sections, <h3> when two levels down, and so forth. Actually, this is the current recommended best practice.

But by not using <h1> all the time, we are missing out on one of the main advantages of this new outline model, namely cut-and-paste-ability. We still have to re-calculate the heading level depending on what page a piece of content appears on. E.g. a blog post might have its heading as <h1> on its dedicated page, but it should be <h2> on the home page of the blog. Sub-headings should be <h2> on the dedicated page and <h3> on the home page, etc.
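
That recalculation is mechanical enough to script. As a toy sketch of such a scripted hack, in Python with html5lib (assuming the default etree tree, where tag names carry the XHTML namespace): demote each <h1> by the number of sectioning elements it sits inside.

import html5lib

XHTML = "{http://www.w3.org/1999/xhtml}"
SECTIONING = {XHTML + t for t in ("section", "article", "nav", "aside")}

def demote_headings(element, depth=0):
    # An <h1> nested in N sectioning elements becomes <h(1+N)>, capped at <h6>.
    if element.tag == XHTML + "h1" and depth > 0:
        element.tag = XHTML + "h%d" % min(1 + depth, 6)
    child_depth = depth + 1 if element.tag in SECTIONING else depth
    for child in element:
        demote_headings(child, child_depth)

tree = html5lib.parse("<article><h1>Post title</h1><section><h1>Sub</h1></section></article>")
demote_headings(tree)  # the article's <h1> becomes <h2>, the nested one <h3>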

Until there is universal and accessible support for headers within sections, support charts may say that the elements are implemented, and marketing may make a big noise about their presence. But right now, the only way to use these elements is through scripted hacks.

And that, my friends (and all enemies I've made by posting these two posts), is a non-negotiable fact.

Exhibit C: <hgroup> and <nav>

This is a no-brainer. The point of <hgroup> is to hide the subtitle from the outlining algorithm. Thus there are only two requirements to call this feature supported. Make it available in the DOM; as explained above, that's the easy part. The hard part is the accessibility issue. When a blind user scans through a page by jumping between headers, an <hgroup><h1 /><h2 /></hgroup> should be presented as exactly one heading, not as two. And it is a reasonable expectation that the subtitle is read out along with the main heading. In a perfect world it would perhaps even be prefixed with the word subtitle to indicate that relationship. At the very least, when jumping to the next heading it should be skipped, since it really is not next, but part of the main heading.

In the same way, the nav element should be presented to blind users in a way that will let them skip over navigation or jump to navigation. (Also a no-brainer, really.)

There is no browser on the market that supports these behaviors. Indeed, to my knowledge, no browser has even begun working in earnest on this. But my point is not that browser X is better than browser Y. My point is that it is premature to claim support for this feature, when indeed one is missing the whole point of that feature.

So, is there no value in using these HTML5 elements?

There might be. First of all, they do no harm. And on the day when browser support really has been properly implemented, you will automatically get added benefits over using a div. Furthermore, they might be picked up by software other than browsers, or by browser extensions, like Readability (I love it) and Safari Reader (technically not an add-on, but a feature). Presently, this kind of functionality must rely on educated guesswork – some kind of software algorithm that analyzes the page and looks for ids, classes or patterns that usually indicate the main content of a page. With HTML5 markup that algorithm will be simpler and therefore execute faster.

So this is my point. Knowledgeable web developers can use any HTML5 feature they like today, with progressive enhancement and graceful degradation, perhaps through JavaScript libraries. If you've come this far and think that I am discouraging any use of recent additions to the web stack, you are wrong. Just take reasonable care in how you do it, and do not trust browser marketing! One example of such care is to use WAI-ARIA everywhere, to address the accessibility issues. Since no browser offers built-in accessible implementations of HTML5, we must resort to bolt-on accessibility.

Browser support for HTML5 is not boolean, and neither should your usage of it be today. Thus "can I use HTML5?" is a stupid question. It is all about how you use HTML5 and, more specifically, what parts of it you use.

I am not a foe of HTML5 usage – but I am a friend of caution. After all, I'm over 40 years of age!


P.S. Firefox 4 does not look like Chrome, it looks like Opera! And of course there are several reasons behind Chrome's growth in market share.

Thursday, July 8, 2010

No browser supports HTML5 yet. Part 1. The rant.

Yes, you've read that headline correctly. There are so many websites that measure HTML5 readiness in one way or another, and so many marketing pitches that claim HTML5 support for browser X, Y or Z. But the crux of the matter is this: supporting HTML5, regardless of definition, is not a boolean proposition. I.e. it's not something you do or do not do; it is something you do more or less of.

This discussion will consist of two posts. The first is an anti-Webkit-fan-boy rant, probably only useful as self-therapy for me. The second part is my technical discussion of the subject matter at hand.

Rant begins here

Whenever a major browser vendor releases a new version, or a preview version, you can bet a month's salary on the fact that comments will appear on forums, Twitter or blog posts asking "does it support HTML5?" (it used to be CSS 3). Some other browser is then hailed as if it does, usually Safari or Chrome, since they have either the most obnoxious marketing or the dumbest fan boys(?). And sometimes the comment is made complete in its stupidity by an argument that vendor X should just “use Webkit”.

I do not intend to throw cheap jabs at Webkit, in any incarnation, be it Chrome, Safari, Froyo, S60, Web OS, Nokia WRT, QTWebkit or WebkitGtk. Webkit is a really good rendering engine, or perhaps nowadays more aptly described as the core of a rendering engine. OK, maybe I'd like to throw a jab at Froyo and Adobe AIR for the dumb ass decision not to enable SVG, but that's beside my point, and not Webkit's fault at all.

Other comments like Mozilla is lazy or have stopped inventing are not hard to find either. But it's hard to claim that Opera is not inventing, so non Opera fan boys just tend to ignore them. After all, that makes it much easier to claim originality, even though one has just copied Opera.

I am not saying that Firefox is without its gang of fan boys. Perhaps they are equally loud and obnoxious, but it's been a long time since they've been in my vicinity. (Or perhaps I am that fan boy?)

Source of confusion number one: Browser vendors

It is reasonable to expect the upstarts to be more aggressive in their marketing, but marketing tends to turn into blatant lies when exaggeration becomes the norm. Consider this support chart for Safari 5 from Apple:

Apple claiming that Safari 5 supports several HTML5 elements

Source: http://www.apple.com/safari/whats-new.html [checked 2010-07-08]

The problem is, once someone has started to claim support for a feature, even though that support is half baked and incomplete, everyone else has to answer in kind and claim support even when their implementations are equally half-baked. Or even worse, rush such half-baked implementations to the market to show everyone that they are also a leader.

(I'll explain why Apple's claims are false in part 2 of this discussion.)

Source of confusion number two: Well intended web developers

Why is this a source of confusion? Because we tend to put up demos of new cool technologies that are not really examples of best practice. E.g. even though transformations and transitions work in the latest versions of Firefox and Opera, many demos use the -webkit- prefix only. Heck, I've even seen demos of rounded corners, something that's been in Firefox since 2004 (3 years ahead of Webkit), that used only the -webkit- prefix! (Yes, I know there are good examples as well.)

I am not surprised that Apple browser sniffs for Safari in their HTML5 demos – even though I am annoyed at such blatant disregard for best practice. After all, that's not technology, that's marketing. (And yes, I know there are a few things that one can do in Webkit based browsers only, such as CSS Animations (not transformations) – a technology still in need of a valid use case, BTW – and CSS perspectives, but that's also beside my point.)

When the WebGL Quake demo originally worked in Chrome only, thanks to flaws in the demo code, not in Firefox itself, it was claimed that Firefox was ”too slow”, even before such a claim could be tested. That was not marketing (I hope), that was developers not doing their job. And when someone demos reflections in Webkit without at least discussing that Firefox can do the same thing, albeit with a different technique (which is more powerful, BTW), it might be lack of knowledge. But the lasting impression on readers, equally lacking in knowledge, is that Webkit based browsers are so far ahead, when in reality they are not.

Another example is gradients. They first appeared in Safari, and for a while they could not be demoed in any other browser. But since 3.6 Firefox supports gradients as well. Doing a gradient demo today using the Webkit syntax only is not just bad practice because it limits the demo to a few browsers. It is also cheating oneself and one's audience of the syntax that is much more likely to become the final W3C standard. I.e. if you are limiting your demo to one syntax only, the Firefox version is the more future proof one, the one web developers really should be looking at in earnest.

A real problem caused by too many Webkit-centric demos on the web is Microsoft contemplating support for -webkit- prefixed CSS properties. Luckily they backpedaled on that one, but it still serves as a nice illustration of the problem.

To alleviate this problem, Mozilla has proposed a set of best practices for demos, which includes being as cross-browser as possible, using graceful degradation, etc. Read (at the end of the post) and learn, people!

Where innovation happens = everywhere

Even Internet Explorer, which I've cursed so many times, did tons of stuff already in the 90's that has only recently been picked up by others. Yes, there is one big difference. The filters in IE were not put forward for standardization, but were an attempt to embrace and extend, 90's Microsoft's primary way of competing in unjust ways. But from a pure innovation standpoint, IE was first in doing many things.

And for all Webkit fan boys I have a homework assignment. Please investigate where the following technologies were invented:

  • WebGL
  • HTML5 video and audio
  • Using any element as CSS backgrounds
  • Applying SVG effects on non SVG content
  • Full page zoom
  • Canvas text
  • Compiled JavaScript
  • Hardware accelerated SVG and Canvas
  • Audio Data API

Hint: The answer is not one and the same, but never Safari or Chrome.

I am not saying this to diminish the considerable achievements of Webkit browsers, but wishing for a Webkit monoculture is plain stupid. Just like it was plain stupid to wish for a Gecko based monoculture five years ago – when Webkit hardly was a blip on the radar and had tons of bugs (JavaScript in Safari 2, anyone?). And what if KHTML had never been developed? It could have happened, since lots of people thought it would be better for Konqueror to switch to Gecko. Well, Webkit is based on KHTML, so if that advice had been heeded, we would not have had Webkit today.

End of rant – sort of

All of the above is not me saying Chrome is a bad browser. It is not. In some ways it's the best browser – but not in every way! My primary reason for not using Chrome in my daily work? I think monoculture is bad, and even though I sympathize with Google using Chrome to push the competition into being faster, I do not want to see a world where one company is the dominant player at every tier of the web experience. Such power will inevitably corrupt, no matter how hard the company in question tries to avoid being evil. Add to that a leader who is absolutely clueless about integrity, and that by far outweighs the fact that Firefox currently is a few milliseconds behind Chrome in some JavaScript benchmarks.

Oh, yes, I use Linux, so Safari is not an option at all. And Apple is every bit as evil today as Microsoft was in the 90's.

My primary reason for supporting non Webkit based mobile browsers like Opera or Fennec (Firefox) is not that they are clearly superior. In many ways they are not – and again, in some ways they are! (At least they do SVG, Froyo!) But once again it comes back to this: monoculture benefits no one in the long run. For a moment the idea might seem appealing – as when developing a specific web app – but holding on to it in the long run just shows lack of vision and lack of historical knowledge.

To round things off, here is a video (HTML5 video was an Opera idea) of WebGL in Firefox 4 – a technology invented mostly by Mozilla. (Oops! I just gave away two answers in my homework assignment.)

OK, therapy session is over. Glad to have gotten that off my chest. Tomorrow I promise to be productive!

Friday, April 23, 2010

Why declarative animation should be in the DOM and not in CSS

Note: This blog post reflected my opinion at the time of writing. Since then the technology has evolved and this has become less of an issue. I will leave the post online, but if you came to learn how to do animation today, reading it will be a waste of your time.

A little more than a year ago Safari introduced experimental support for CSS based animations, to complement transformations and transitions. I have no gripe with transitions and transformations, but I think animations belong in the DOM and not in CSS. My main argument is that animations will most of the time be triggered by DOM events, and during at least the next five years most CSS based animations will be duplicated using classic DOM methods anyway. The purported separation of concerns, where animation is reserved for designers who supposedly are scared away from scripting, is not a valid argument, since they are going to use libraries anyway. (And, frankly, the CSS animation syntax in itself looks quite scary to most people!)

Originally, this started out mostly as a gut feeling, and the arguments that I've made on the W3C mailing list are varied and admittedly a bit confused. I was thinking out loud more than I was presenting a coherent argument. Hopefully this blog post will come across as more reasonable!

I also believe that this discussion needs to be known outside of the CSS working group and the participants on the www-style mailing list. Specifically, it needs the input of developers of JavaScript libraries and normal web developers. These are the people who will be affected the most by any decision. (I am providing an abundance of links to the discussion on the mailing list for context.)

CSS animation – the good parts

A few aspects of the CSS animation proposal are brilliant and not in dispute:

  • Declarative syntax. Designers specify what effect they want, not how the browser should achieve it.
  • Hardware acceleration. Animations get smoother, faster and less CPU-draining. The GPU is optimized for this and can perform the calculations using only a fraction of the power and time a CPU would take to do the same number crunching.

When I am asking the CSS working group to reconsider CSS animations I am not in any way trying to take away these two strong points. I am firmly pro having GPU-accelerated, declarative animations in the browser. I just think the DOM is a better fit for them.

What will be animated?

CSS has no events, it has states. Basically it knows the focused and unfocused state of links and widgets, and whether a pointer is hovering, clicking ("active") or not hovering over an element. While experimenting with the CSS animation syntax, the working group is producing examples using these states. Ironically, mobile devices are one of the main reasons why CSS animations were originally thought up, yet they rarely have a pointer, and CSS is not really equipped to handle touch events.
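
As a rough sketch (my example, using Webkit's experimental prefix), this is more or less the full extent of what CSS can react to on its own:

@-webkit-keyframes wiggle {
    from { -webkit-transform: rotate(-5deg); }
    to { -webkit-transform: rotate(5deg); }
}

div:hover {
    /* runs only while the pointer hovers; there is no selector for
       "the user clicked that button over there" or "data just arrived" */
    -webkit-animation-name: wiggle;
    -webkit-animation-duration: 1s;
}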

It can safely be assumed that 99 % of all real world use cases for animation will be the result of user or server interaction with some part of the document that is not being animated. The user clicks a button and a div slides into view. The user presses a key on the keyboard and text wiggles and bounces. Data comes back from an AJAX request, and the received data appears through an attention-grabbing sliding effect.

Currently the only way to achieve these real world use cases is by adding or removing class attribute values, as in the sketch below. Thus we have scripts that trigger the animation in the DOM and the actual design of the animated effects in CSS. Conceptually this is nice. Separation of logic is a good thing™. However, in real world practice this will not be so neat.
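
A minimal sketch of that division of labour (the id and class names are mine, purely illustrative):

// script.js: the trigger lives in the DOM...
document.getElementById("show-panel").onclick = function () {
    var panel = document.getElementById("panel");
    // ...while the actual effect is described by a .slide-in rule
    // over in the CSS file
    panel.className += " slide-in";
};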

Events confusion

Even though there are no events in CSS, there is discussion about having animations run upon entering a state, while being in a state and when leaving a state. These are not events in a technical sense, but outside of the W3C working group, most developers will be really confused about the difference. Such precise knowledge is not found in abundance! If a specific application needs to differentiate between these, it is by far easier for developers to use the more familiar DOM events.

Having some animations run because an element is affected directly, e.g. on hover, some animations run because they are triggered by a scripted change of className, and in both cases also having animations that may run when entering, during and when leaving states, looks like a recipe for unmaintainable and confusing development. It is much better to trigger everything from one place only, and that place can only be the DOM.

How to implement animations in a library

OK, you are building a little library to animate stuff. What do you do? I suppose the following:

  1. First you capability detect support for declarative animation (see the sketch after this list). That in itself would be easier if it was in the DOM, but it is at least doable now. But not in a neat fashion. Score one against having animations in CSS.
  2. If CSS-animation is indeed supported, you will wrap your animate function around className switches. Doable, but not neat.
  3. If CSS-animation is unsupported, you fall back to old school timed manipulation of the style attribute.
    • However, using the animation parameters from the CSS file is a huge impracticality. You must find a way to read all CSS files, parse them, interpret the cascade and the specificity of all animation rules, and convert that information into timed logic. This is impractical, slow, CPU-draining and fragile.
    • The CSS Object Model (CSSOM) will not alleviate this problem. The browsers that need to parse the animation rules are the ones that implement neither animations nor the corresponding CSSOM.
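
A minimal sketch of steps 1 to 3 (my code, not from any existing library; the property names assume the vendor prefixed implementations of the time):

// 1. Capability detection: poke at the style object for (prefixed) support
var docStyle = document.documentElement.style;
var supportsCSSAnimation = "animationName" in docStyle ||
    "WebkitAnimationName" in docStyle ||
    "MozAnimationName" in docStyle;

function animate(el) {
    if (supportsCSSAnimation) {
        // 2. Wrap the animation in a className switch; the effect itself
        // must be defined in the CSS file
        el.className += " bounce";
    } else {
        // 3. Old school fallback: timed manipulation of the style attribute.
        // Note that the animation now has to be re-specified in JavaScript.
        var start = +new Date(), duration = 4000;
        (function step() {
            var progress = Math.min((+new Date() - start) / duration, 1);
            el.style.left = Math.round(progress * 200) + "px";
            if (progress < 1) {
                setTimeout(step, 16);
            }
        })();
    }
}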

Alternatively, the author is required to re-specify the animation once again, now using a different syntax for the fallback. We thus get code duplication, with all the error proneness and maintenance problems that follow from that approach. But it is the only approach currently available that gives reasonable results.

It can safely be said that CSS animations are not backwards compatible in any reasonable way. And we are going to need backwards compatible solutions for almost another decade or so.

What about progressive enhancement?

Using progressive enhancement we can deliver CSS based animation to browsers that support it, and non-animated but still usable content to the rest. Problem solved, is it not?

I like progressive enhancement. I teach it and I practice it. However, there will be a great number of real world customers who will insist upon having animations both in the brand new cutting edge browsers and in the legacy ones. At least as long as more than 10 % of their visitors use them. We can preach all we want. This scenario will face the real world developer way too often.

Animations will be used to convey information as well as for eye candy. Not having a scripted fallback will not be an option for such use cases either. In real world web development, progressive enhancement can not be called upon as a panacea, however appealing that thought may be.

What else will be hard to do using a CSS approach?

In real world use cases developers are also going to want to manipulate animation and keyframe properties, as well as programmatically create animations from scratch. Using the CSSOM this can probably be done in browsers that support animations, but once again the fallback for legacy browsers will be very hard to achieve.
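
For illustration, a sketch of what such manipulation might look like through Webkit's experimental CSSOM (the rule name "pulse" and the stylesheet index are assumptions of mine):

// Find the @-webkit-keyframes rule named "pulse" and alter a keyframe
var rules = document.styleSheets[0].cssRules;
for (var i = 0; i < rules.length; i++) {
    var rule = rules[i];
    // WebKitCSSKeyframesRule exposes name, deleteRule() and insertRule()
    if (rule.name === "pulse" && rule.insertRule) {
        rule.deleteRule("0%");
        rule.insertRule("0% { opacity: 0; }");
    }
}
// In a legacy browser there is simply no corresponding object to query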

What is my counter proposal?

I have barely begun thinking about this issue, so any proposal I have at the moment should not be regarded as a final suggestion. In order to keep the separation of concerns between designers and developers – for those situations where one has the luxury of keeping them separate – animations must be easy to define with a CSS-like syntax. JSON fits that requirement quite well. The method to start an animation could be called runAnimation. It might return a value that I can store in a variable in order to manipulate or cancel the running animation. Another way to manipulate it would be by altering the animation properties. For convenience, there should also be a method to stop all running animations on an element.

Here is an example, recreating the effect from Surfin' Safari's announcement:


// Keep the JSON objects in a separate file for designers to fiddle with

var bounce = {
    "from" : {
        "left" : "0px"
    },
    "to" : {
        "left" : "200px"
    }
}

var myAnimation = {
    "animation-name" : "bounce",
    "animation-duration" : "4s",
    "animation-iteration-count" : "10",
    "animation-direction" : "alternate"
}

// Keep these lines in another file for the JavaScript guy/girl to fiddle with

document.getElementById("foo1").onclick = function() {
    document.getElementById("bar").runAnimation(myAnimation);
}

document.getElementById("foo2").onclick = function() {
    document.getElementById("bar").stopAllAnimations();
}

// Example 2
// Keep the JSON objects in a separate file for designers to fiddle with

var pulse = {
    "0%" : {
        "background-color" : "red",
        "opacity" : "1.0",
        "transform" : "scale(1.0) rotate(0deg)"
    },
    "33%" : {
        "background-color" : "blue",
        "opacity" : "0.75",
        "transform" : "scale(1.1) rotate(-5deg)"
    },
    "67%" : {
        "background-color" : "green",
        "opacity" : "0.5",
        "transform" : "scale(1.1) rotate(5deg)"
    },
    "100%" : {
        "background-color" : "red",
        "opacity" : "1.0",
        "transform" : "scale(1.0) rotate(0deg)"
    }
}

var pulsedbox = {
    "animation-name" : "pulse",
    "animation-duration" : "4s",
    "animation-direction" : "alternate",
    "animation-timing-function" : "ease-in-out"
}

// And here come the DOM parts, this time using jQuery for easy iteration

$(".pulsedbox").each(function() {
    this.runAnimation(pulsedbox);
});

As stated above, my counter proposal is not a finished product in any way. It is merely intended to serve as an illustration of an alternative approach. The technical merits or deficiencies of the proposal are in themselves not really something that should guide the general discussion about how to implement declarative animation. The principles from which I draw the conclusion that the DOM is a better fit are the true talking point here.

Now I am especially interested in hearing the opinions from the DOM-scripting community!

Monday, March 22, 2010

PPK is wrong, vendor prefixes are a necessary evil.

Yet another considered harmful essay has hit the web. This time it is PPK, a well known JavaScript guru and very influential author, who has written it. And I get to disagree with another one of my heroes. (Just recently I disagreed with Rasmus Lerdorf on the naming of the next major PHP version...)

Basically PPK is, like many others, fed up with writing the same rule 2, 3 or even 4 times. You know the drill:


-moz-border-radius: npx;
-webkit-border-radius: npx;
border-radius: npx;

Add in a few (-ms-)filter rules as well and it's a nightmare. And the CSS validator is not configured to ignore these, even though they are not errors per se. It sure is easy to echo PPK's sentiment. But I believe he is wrong, and fortunately I am not the only one.

Proposals do change

Cases in point are border-radius and gradients. Mozilla could not just drop -moz- from border-radius, since their 6 year old implementation is not aligned with the standard. Webkit can not drop the prefix from their gradient rules, since the final standard probably – please note that word, probably – will look like Mozilla's implementation.

I also note that he has put the non-prefixed version of a rule before the prefixed ones, which is not optimal for the very same reason, a problem I have dealt with in an earlier blog post.

Experimental versions are needed for things to move along. Without them very few people will be able to experiment and improve the proposal. This is a real need, this is a real problem.

Contrary to what PPK says, vendors do not simply copy each other. They often start that way, and then they run into questions like "what about...?", and they will have to discuss and test and decide and make changes, both to the implementation and to the spec. Such discussions take place on the W3C mailing lists all the time.

Standards must be standards

PPK dislikes vendor prefixes because they seem to be the opposite of standards: vendor specific rules and code forks, similar to browser sniffing in JavaScript.

But what he is proposing effectively does away with the entire W3C process. Yes, it is slow and burdened by politics. But if we drop vendor prefixes, then as soon as a browser puts out a new technology it has, by virtue of being first, unilaterally decided what the syntax should look like and how it should work in every browser. This is not a standards process; this is in fact the opposite. First to market rules and everyone else be damned!

Experimental versions of future standards are a good thing, since they allow for real discussion and much more thorough standardization. One of the reasons standards are moving slowly is that today they are much more detailed, much more tested and therefore much more reliable.

This is not unique to web standards. Consider IEEE 802.11n and the fact that we for a couple of years had draft version equipment on the market. Good or bad? It made the final implementation better, so yes, it was good. And a necessary evil as well.

There must be room for errors

Webkit has not copied Mozilla's implementation of border-radius, and Mozilla has not copied Webkit's gradients. Because the proposals have been discussed, changes have been made and lessons have been learned, and this has happened in real life.

If vendors were not allowed to use prefixes, it would take forever to get to a place where they would be confident enough to put out a new technology. Opera and Microsoft may skip the prefix for border-radius today because Mozilla 6 years ago put out an experimental implementation, and Webkit 3 years ago put out a slightly different experimental implementation; thanks to them the standard has now reached Candidate Recommendation status and web authors demand the technology, because we have been able to experiment with it!

If it had not been for prefixes, only very few people would actually have bothered to download experimental browsers and try this out. That is our only other option. The word would not have spread and demand would not have been built up.

In summary: PPK, I love your work and have tons of respect for you. But in this case you are wrong. I will not shout "long live the prefixes", since I wish every prefixed CSS rule a short life. But I want that short life to be productive.