Many of our clients have asked what the actual difference is between pixel-by-pixel and feature-based comparison. We will give you an overview of the main differences using examples.
How to detect an image difference
Image difference calculation is the simplest and most popular way to find differences between two images: you only have to calculate the absolute difference between each pixel pair. If two images are identical, the difference calculation produces a blank black image in which every pixel has an intensity value of zero, RGB(0, 0, 0). If there are differences, the regions where the intensity values differ will light up. This is where the easy part ends and the hard part begins.
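As a minimal sketch of this calculation, using NumPy arrays as stand-ins for images (the pixel values are made up for illustration):

```python
import numpy as np

# Two 4x4 grayscale "images"; identical except for one pixel whose
# intensity is slightly off (240 instead of 255), invisible to the eye.
img_a = np.full((4, 4), 255, dtype=np.uint8)
img_b = img_a.copy()
img_b[1, 2] = 240

# Absolute per-pixel difference: identical regions come out as zero
# (black), differing regions light up.
diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16)).astype(np.uint8)

print(diff.max())              # 15 (the single differing pixel)
print(np.count_nonzero(diff))  # 1
```

The cast to a signed type before subtracting avoids unsigned wrap-around; with real images you would use a library call such as OpenCV's `absdiff` instead.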
Let’s say we have two images of a white square. At first sight they seem to be identical:
However, when we calculate a difference between these two images and convert the result to a binary image we get an unexpected result:
This means that although the images may seem identical, they can have slightly different pixel intensity values. The white pixels actually range between RGB(240, 240, 240) and RGB(255, 255, 255). Such differences are very hard to notice with the naked eye and would not be considered a difference by most users.
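A small sketch of why a naive binarisation lights up everywhere, and how an intensity tolerance suppresses invisible differences (the threshold value 15 here is our assumption for illustration, not Browserbite's actual parameter):

```python
import numpy as np

# Two "white" squares whose pixels sit at different points of the
# RGB(240..255) range described above.
img_a = np.full((8, 8), 255, dtype=np.uint8)
img_b = np.full((8, 8), 243, dtype=np.uint8)

diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))

# Naive binarisation: any non-zero difference counts, so everything lights up.
naive = diff > 0
# Tolerance-based binarisation: ignore differences the eye cannot see.
tolerant = diff > 15

print(naive.all())     # True: the whole image is flagged as different
print(tolerant.any())  # False: no visible difference remains
```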
Let us now take more realistic source images. We want to compare two images of seemingly identical web pages. Here are two screenshots of the Facebook web page, rendered in Chrome on Windows 7 and in IE10 on Windows 8.
At first glance, these images seem to be very similar, but the difference calculation reveals something different:
Every slight misalignment is marked. To make matters worse, these two images have different sizes, caused by the different viewports of the two browsers. So in order to use image difference for cross-browser testing, the web pages would have to be literally 100% pixel perfect. This is unfortunately almost impossible in real life due to rendering differences between browser vendors.
How we have solved it
First of all, we treat the screenshot acquired from a browser as a collection of segments (buttons, input boxes, etc.). We use image processing techniques to segment images into smaller regions, which we call regions of interest (ROIs) or segments. Here are a number of small segments from the original Facebook screenshot:
Segmentation is an iterative process during which an image is analyzed and segmented many times until segments have a sufficient size. Simply put, we want a segment to be bigger than a single letter, but definitely smaller than half of a web page.
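The iterative splitting can be sketched roughly like this. This is a toy midpoint split; the real segmentation presumably splits along visual boundaries, and the size bound is an assumed value:

```python
import numpy as np

MAX_SIDE = 400  # assumed upper bound: a segment should stay well below
                # half a page, yet end up bigger than a single letter

def segment(img, x=0, y=0):
    """Recursively split an image into ROIs until each side fits MAX_SIDE.

    Returns a list of (x, y, width, height) boxes covering the image.
    """
    h, w = img.shape[:2]
    if h <= MAX_SIDE and w <= MAX_SIDE:
        return [(x, y, w, h)]
    rois = []
    if w > MAX_SIDE:                      # split the wider axis first
        rois += segment(img[:, : w // 2], x, y)
        rois += segment(img[:, w // 2 :], x + w // 2, y)
    else:
        rois += segment(img[: h // 2, :], x, y)
        rois += segment(img[h // 2 :, :], x, y + h // 2)
    return rois

page = np.zeros((900, 1200), dtype=np.uint8)  # dummy "screenshot"
rois = segment(page)
print(len(rois))  # 16 non-overlapping regions covering the page
```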
We then compare these segments between the two images, using parameters like size, position, geometry and many more. If we detect any differences, we mark them with red sliding transparent boxes. The screenshot below has been saved from our testing environment. Click on the image below to see the results.
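A toy illustration of segment-level comparison, with ROIs as (x, y, width, height) tuples and an assumed drift tolerance (the actual matching logic and parameters are not disclosed here):

```python
TOLERANCE = 5  # assumed: pixels of positional/size drift we tolerate

def compare_segments(rois_a, rois_b, tol=TOLERANCE):
    """Pair each ROI in A with the nearest ROI in B and flag mismatches."""
    flagged = []
    for (ax, ay, aw, ah) in rois_a:
        # Nearest segment in B by top-left corner distance (Manhattan).
        bx, by, bw, bh = min(
            rois_b, key=lambda r: abs(r[0] - ax) + abs(r[1] - ay)
        )
        if (abs(ax - bx) > tol or abs(ay - by) > tol
                or abs(aw - bw) > tol or abs(ah - bh) > tol):
            flagged.append(((ax, ay, aw, ah), (bx, by, bw, bh)))
    return flagged

a = [(0, 0, 100, 40), (0, 50, 100, 40)]
b = [(0, 0, 100, 40), (0, 58, 100, 40)]  # second box shifted down 8 px
print(len(compare_segments(a, b)))       # 1 segment drifted past tolerance
```

Because each segment is matched by its own position, a small global misalignment no longer lights up the whole page the way a raw pixel difference does.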
The same kind of approach can be used to compare not only web pages but also other kinds of documents. For example, we can compare PDF documents or screenshots from smartphone apps. This is why cross-browser testing has not yet been fully automated – pixel-by-pixel comparison simply does not give accurate enough results.
This is the first article in our short series describing visual cross-browser testing in more detail.
Feature-based comparison is at the heart of the cross-browser testing service at Browserbite, which reduces the amount of manual work. Go ahead and try it out yourself!