Render testing in html2canvas with webdriver

January 5, 2013

Automated testing for html2canvas has long been an issue. The library has had a number of QUnit tests that check that different parsed DOM values are calculated correctly across all browsers, but those only scratch the surface of the library's testing requirements.

Problem

The purpose of html2canvas is to generate as close a representation of the DOM as possible, as it is rendered by the browser the script is run in. In other words, there is no single correct result it should render for any given page; instead, it should attempt to represent the page as that particular browser renders it on that particular screen, including any browser-specific issues that may be present. If the browser doesn't support some particular CSS properties, the result shouldn't render them either.

This meant that there couldn't really be any premade reference renders to compare the results generated by html2canvas against, as the results would, and should, vary between browsers and systems.

Approach

Version 0.4 will introduce testing capabilities with WebDriver, which allows the tests to be automated on a number of different browsers while still taking into account the expected differences in results.

The tests capture a screenshot of the actual browser, after which html2canvas runs and renders its representation of the page. The base64-encoded PNG images are then sent back to node, where they are first converted into an ArrayBuffer and then into pixel arrays. The test then calculates the percentage difference between the two images by comparing them pixel by pixel. The results are compared against previous baseline values, which allows us to analyze whether there have been any changes in the results.
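
As a rough sketch, a single test case along these lines could look something like the following, assuming the selenium-webdriver node bindings and the 0.4-era html2canvas onrendered callback; the test page URL and the compareImages helper are illustrative placeholders rather than the actual test runner code.

    var webdriver = require('selenium-webdriver');

    var driver = new webdriver.Builder()
        .withCapabilities(webdriver.Capabilities.firefox())
        .build();

    // Give the async script (the html2canvas render) enough time to finish.
    driver.manage().timeouts().setScriptTimeout(15000);
    driver.get('http://localhost:8080/tests/cases/background.html');

    // 1. Capture what the browser itself rendered, as a base64 encoded PNG.
    driver.takeScreenshot().then(function (browserPng) {
        // 2. Render the same page with html2canvas and pass the canvas
        //    contents back to node as a base64 encoded data URL.
        driver.executeAsyncScript(function () {
            var callback = arguments[arguments.length - 1];
            html2canvas(document.body, {
                onrendered: function (canvas) {
                    callback(canvas.toDataURL('image/png'));
                }
            });
        }).then(function (html2canvasPng) {
            // 3. Both images are now in node, ready to be decoded and compared.
            compareImages(browserPng, html2canvasPng);
            driver.quit();
        });
    });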

The reason the html2canvas render is sent as a base64-encoded string and converted into a pixel array in node, rather than simply sending a CanvasPixelArray, is that it is a lot faster.
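
To illustrate the comparison step, the node side could be sketched roughly as follows, assuming a PNG decoder such as pngjs; the function names here are placeholders, not the test suite's actual API.

    var PNG = require('pngjs').PNG;

    // Decode a base64 encoded PNG (or data URL) into a flat RGBA pixel array.
    function toPixels(base64Png) {
        var stripped = base64Png.replace(/^data:image\/png;base64,/, '');
        var buffer = new Buffer(stripped, 'base64'); // Buffer.from on newer node
        return PNG.sync.read(buffer).data;           // one byte per R, G, B, A channel
    }

    // Percentage of pixels that differ between two equally sized images.
    function percentageDifference(imageA, imageB) {
        var a = toPixels(imageA);
        var b = toPixels(imageB);
        var totalPixels = a.length / 4;
        var mismatched = 0;

        for (var i = 0; i < a.length; i += 4) {
            if (a[i] !== b[i] || a[i + 1] !== b[i + 1] ||
                a[i + 2] !== b[i + 2] || a[i + 3] !== b[i + 3]) {
                mismatched++;
            }
        }

        return (mismatched / totalPixels) * 100;
    }

The resulting percentage for each test page can then be stored and checked against the previously recorded baseline value, so any change in the rendering output shows up as a shift in the numbers.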

Analyzing the results

The results vary a lot depending on the browser, with Chrome generating the most accurate results. The slightly lower results in Firefox are mainly caused by slight displacement of some text and some aliasing issues. For IE9, the results with text are a lot worse, which is in part to be expected, considering it has no support for getBoundingClientRect for text ranges, so a slightly less accurate method of finding text positions has to be used.

With the automated tests in place, updating and improving the library is significantly easier and less time-consuming, which will hopefully result in more frequent updates from me as well. Overall, it was satisfying to see that a number of tests already match what the corresponding browser renders with 100% accuracy.
