Cross Browser Testing – Best Practices
Author: Stuart Watkins
In this article, we will focus on cross-browser testing: what it is, why it is important, the challenges it poses, and strategies that are not only less labour-intensive but far more effective at catching difficult-to-find bugs.
What is Cross Browser Testing?
Cross-browser testing is a process that ensures your website or application behaves as expected across multiple web browsers.
Why Is Cross-Browser Testing Important?
A user should have the same experience when visiting a website, no matter the browser being used, be it Google Chrome, Firefox, Safari or any of the other variants on the market.
Challenge 1 – Cross-browser testing can be time-consuming
Automated methods such as unit testing, functional testing or visual regression testing do not reliably capture cross-browser bugs, so for now this kind of testing can only be done manually.
Because every test plan starts with usage statistics, it makes sense to simplify those statistics first by taking into account only the meaningful platform/operating-system/vendor/version combinations. For the most part, differences between versions, platforms and operating systems can be omitted.
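As a sketch of that simplification, the snippet below collapses per-version usage figures into browser families and keeps only the families above a cut-off share. The usage numbers and the 2% cut-off are illustrative assumptions, not real market data, and `meaningfulTargets` is a name invented for this example.

```javascript
// Illustrative usage shares (percent) -- real numbers should come from
// your own analytics, not from this example.
const usageShare = {
  'Chrome 54': 44.0, 'Chrome 53': 12.0, 'Safari 10': 10.0,
  'IE 11': 9.0, 'Firefox 49': 8.0, 'Safari 9': 5.0,
  'Edge 14': 4.0, 'Opera 41': 1.5,
};

// Collapse per-version entries into one entry per browser family,
// then keep only the families above a cut-off share.
function meaningfulTargets(share, cutoff = 2.0) {
  const byFamily = {};
  for (const [name, pct] of Object.entries(share)) {
    const family = name.replace(/\s+[\d.]+$/, ''); // drop the trailing version
    byFamily[family] = (byFamily[family] || 0) + pct;
  }
  return Object.entries(byFamily)
    .filter(([, pct]) => pct >= cutoff)
    .sort((a, b) => b[1] - a[1])
    .map(([family]) => family);
}

console.log(meaningfulTargets(usageShare));
// Opera falls below the cut-off, so only five families remain.
```

With the example figures, six families collapse out of ten entries and one is dropped, leaving a short, meaningful list of targets instead of every version you have ever seen in your logs.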
Challenge 2 – Browser-neutral bugs can waste time
These are stylistic bugs that automated methods cannot catch but that have nothing to do with any specific browser. Falsely assuming that a browser-neutral bug is browser-specific can waste time.
A short list of techniques that can be used to catch them:
1. Resizing the browser
2. Zooming in and out, if applicable
3. Turning off CSS
4. Using only a keyboard to interact with the application
Any browser can be used.
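Some of these techniques are easy to script. As one hedged example, the helper below implements the "turning off CSS" technique by removing stylesheets and inline styles from a document-like object; the function name is my own, not a standard API.

```javascript
// Remove author styles from a page so you can check that the raw,
// unstyled document is still usable.
function stripStyles(doc) {
  // Drop external and embedded stylesheets.
  for (const el of doc.querySelectorAll('link[rel="stylesheet"], style')) {
    el.remove();
  }
  // Drop inline style attributes as well.
  for (const el of doc.querySelectorAll('[style]')) {
    el.removeAttribute('style');
  }
}
```

In a real browser, run `stripStyles(document)` from the developer console of whichever browser you happen to have open; if the unstyled page is still readable and navigable, the bug you are chasing is likely browser-neutral.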
Challenge 3 – Which browsers should you test first?
What are the top four browsers that make up 98% of the market as of 2016? These, of course, are Chrome, Internet Explorer, Firefox and Safari. But should you instead start from the least popular browsers, because they are the ones that expose 80% of the bugs?
If you test only the top four browsers, you could be ignoring potentially disastrous bugs. On the other hand, if you start from the most problematic browsers, you may discover that the code you altered to fix the bugs has now broken your website in the most popular browsers.
Once you’ve dealt with Challenges 1 and 2, there is a far better alternative to the two extreme approaches presented in Challenge 3. This alternative approach consists of five steps:
1. Form three headings on a spreadsheet and name them High-risk, Medium-risk and Low-risk. Under High-risk, list older web browsers which are no longer maintained; one example is Netscape. Under Low-risk, place the top four browsers that make up 98% of the market. Finally, under Medium-risk, enter any intermediate browsers you have experience with.
2. Test your website on the high-risk browsers by varying screen size and pixel density and by switching orientations. When you finish, you will have identified roughly 80% of the bugs. Be aware, however, that fixing bugs in older browsers can worsen your code.
3. Repeat the same tests on the low-risk browsers: Chrome, Internet Explorer, Firefox and Safari. In addition to varying screen sizes, pixel densities and orientations as before, try testing on multiple devices with varying capabilities.
4. Run the same tests on the medium-risk browsers you have experience with, and hopefully there are some. Test the ones that make sense, but avoid trying to test all of them.
5. If you found and fixed any bugs, iterate once more through steps 1 to 4. Continue doing so until no more bugs can be found.
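The categorization above can live in code just as well as in a spreadsheet. The sketch below is one possible bucketing; which browser belongs in which bucket is a judgement call, and the concrete entries here are examples rather than recommendations.

```javascript
// The risk categorization as data: three buckets. The entries are
// illustrative -- base yours on your own audience and experience.
const riskBuckets = {
  high:   ['Netscape', 'IE 8'],                                 // old, unmaintained
  low:    ['Chrome', 'Internet Explorer', 'Firefox', 'Safari'], // the top ~98%
  medium: ['Opera', 'Vivaldi'],                                 // ones you know
};

// Test high-risk browsers first, then low-risk, then medium-risk.
function testingOrder(buckets) {
  return [...buckets.high, ...buckets.low, ...buckets.medium];
}

console.log(testingOrder(riskBuckets));
```

Keeping the buckets as data makes each iteration cheap: after a round of fixes, re-run the same ordered list rather than rebuilding the spreadsheet from scratch.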
When using this strategy, it is rare that you have to run more than two cycles. Although the concept of looping through a series of steps can be off-putting, this strategy is actually much more efficient than testing in either descending or ascending order of browser popularity.
Image sourced from borland.com