Cross Browser Testing - Best Practices


In this article, we will focus on cross browser testing. We will talk about what it is, why it's important, and the challenges it poses, and we will discuss a strategy that is not only less labor-intensive but far more effective when it comes to catching difficult-to-find bugs.

What It Is 

Cross browser testing is a process that ensures your website or application behaves as expected across multiple browsers.

Why It Is Important

It stands to reason that a user should have the same experience when using an app, whether the browser being used is Chrome, Firefox, Safari or any of the other offerings on the market. Anything less falls short.

Challenge 1

Cross browser testing can be time consuming. Automated methods like unit testing, functional testing and visual regression testing do not capture cross browser bugs, so for now it has to be done manually. Because every test plan starts from usage statistics, it makes sense to simplify those statistics by taking into account only the meaningful platform/operating system/vendor/version combinations. For the most part, differences between versions, platforms and operating systems can be omitted. The sketch below shows one way of collapsing raw statistics into such a short list.
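
As a minimal illustration (in TypeScript), here is one way to merge raw usage rows into the meaningful combinations. The row shape and the 0.5% cut-off are our own illustrative assumptions, not the output of any particular analytics tool.

```typescript
// Collapse raw usage statistics into meaningful browser/OS combinations.
interface UsageRow {
  browser: string; // e.g. "Chrome"
  version: string; // e.g. "54"
  os: string;      // e.g. "Windows"
  share: number;   // percentage of total sessions
}

// Merge versions per browser/OS pair (keeping the newest) and drop
// combinations whose combined share falls below a threshold.
function buildTestMatrix(rows: UsageRow[], minShare = 0.5): UsageRow[] {
  const merged = new Map<string, UsageRow>();
  for (const row of rows) {
    const key = `${row.browser}/${row.os}`;
    const existing = merged.get(key);
    if (existing) {
      existing.share += row.share;
      if (Number(row.version) > Number(existing.version)) {
        existing.version = row.version;
      }
    } else {
      merged.set(key, { ...row });
    }
  }
  return [...merged.values()].filter((r) => r.share >= minShare);
}

// Two Chrome versions on Windows collapse into one row; the marginal
// Safari entry is dropped.
console.log(buildTestMatrix([
  { browser: 'Chrome', version: '54', os: 'Windows', share: 30 },
  { browser: 'Chrome', version: '53', os: 'Windows', share: 10 },
  { browser: 'Safari', version: '10', os: 'macOS', share: 0.3 },
]));
```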

Challenge 2

Browser-neutral bugs can waste time. These are stylistic bugs that automated methods cannot catch but that have nothing to do with any specific browser. Falsely assuming that a browser-neutral bug is browser specific can waste time.

Here is a short list of techniques that can be used to catch them:

– resizing the browser

– zooming in and out if applicable

– turning JavaScript off

– turning CSS off

– turning off both CSS and JavaScript

– using only a keyboard to interact with the application.

Any browser can be used, and as the sketch below shows, a few of these checks can even be scripted.
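
The following is a hedged sketch using Playwright (https://playwright.dev), a browser automation tool not mentioned above. The app URL, viewport widths and screenshot names are placeholder assumptions; note that blocking .css requests only approximates "turning CSS off", since inline styles still apply.

```typescript
import { chromium } from 'playwright';

const APP_URL = 'https://example.com'; // placeholder: your application

(async () => {
  const browser = await chromium.launch();

  // Resize the browser: step through a few viewport widths.
  const page = await (await browser.newContext()).newPage();
  await page.goto(APP_URL);
  for (const width of [320, 768, 1280, 1920]) {
    await page.setViewportSize({ width, height: 800 });
    await page.screenshot({ path: `resize-${width}.png` });
  }

  // Turn JavaScript off via a context option.
  const noJs = await (await browser.newContext({ javaScriptEnabled: false })).newPage();
  await noJs.goto(APP_URL);
  await noJs.screenshot({ path: 'no-js.png' });

  // Approximate "turning CSS off" by blocking stylesheet requests.
  const noCss = await (await browser.newContext()).newPage();
  await noCss.route('**/*.css', (route) => route.abort());
  await noCss.goto(APP_URL);
  await noCss.screenshot({ path: 'no-css.png' });

  // Keyboard-only interaction: tab through the page.
  const kbd = await (await browser.newContext()).newPage();
  await kbd.goto(APP_URL);
  for (let i = 0; i < 10; i++) await kbd.keyboard.press('Tab');

  await browser.close();
})();
```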

Challenge 3

Which browsers should you test first? The top 4 that make up 98% of the user share as of 2016? These, of course, are Chrome, Internet Explorer, Firefox and Safari. Or should you start from the least popular browsers because they are the ones that expose 80% of the bugs?

If you troubleshoot only the top 4, the 98% that your audience uses, you could be ignoring potentially disastrous bugs. If, on the other hand, you start from the most problematic browsers, you may discover later that the code you altered to fix their bugs has broken your application in the most popular browsers.

The Strategy

Once you’ve dealt with Challenges 1 and 2, there is a far better alternative to the two extreme approaches presented in Challenge 3 and it consists of five steps:

STEP 1:

Create three columns in a spreadsheet and name them High-risk, Medium-risk and Low-risk. Under High-risk, list older browsers that are no longer maintained; one example is Netscape. Under Low-risk, place the top four browsers that together make up 98 percent of user share. Finally, under Medium-risk, enter any intermediate browsers you have experience with. A sketch of this matrix as a simple data structure follows.
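
If you prefer to keep the matrix in code rather than a spreadsheet, a minimal sketch could look like this; the browser names are examples only and should be replaced with your own lists.

```typescript
type RiskTier = 'high' | 'medium' | 'low';

const riskMatrix: Record<RiskTier, string[]> = {
  high: ['Netscape', 'Internet Explorer 6'],                  // older, unmaintained
  medium: ['Opera', 'Edge'],                                  // browsers you know
  low: ['Chrome', 'Internet Explorer', 'Firefox', 'Safari'],  // the 98 percent
};
```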

STEP 2:

Test your application on the high-risk browsers by varying screen size and pixel density and by switching orientation. When you finish, you will have found roughly 80 percent of the bugs. However, be aware that fixing bugs in older browsers can worsen your application code. The sketch below illustrates the kind of screen-size, pixel-density and orientation sweep involved.
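
Unmaintained browsers like Netscape cannot be driven by modern tooling, so this sweep is usually done by hand. Purely as an illustration of the matrix being covered, here is a sketch using Playwright's Chromium as a stand-in; the sizes, densities and URL are assumptions to be adjusted to your audience.

```typescript
import { chromium } from 'playwright';

const APP_URL = 'https://example.com'; // placeholder: your application
const sizes = [
  { width: 360, height: 640 },  // small phone
  { width: 768, height: 1024 }, // tablet
  { width: 1366, height: 768 }, // laptop
];
const densities = [1, 2, 3]; // deviceScaleFactor values to try

(async () => {
  const browser = await chromium.launch();
  for (const size of sizes) {
    for (const deviceScaleFactor of densities) {
      for (const landscape of [false, true]) {
        // Switch orientation by swapping width and height.
        const viewport = landscape
          ? { width: size.height, height: size.width }
          : size;
        const context = await browser.newContext({ viewport, deviceScaleFactor });
        const page = await context.newPage();
        await page.goto(APP_URL);
        await page.screenshot({
          path: `sweep-${viewport.width}x${viewport.height}@${deviceScaleFactor}x.png`,
        });
        await context.close();
      }
    }
  }
  await browser.close();
})();
```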

STEP 3:

Repeat the same tests on the low-risk browsers; those, of course, are Chrome, Internet Explorer, Firefox and Safari. In addition to varying screen sizes, pixel densities and orientations as before, try testing on multiple devices with varying capabilities, as in the sketch below.
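
For the evergreen engines, this step can be partly automated. Here is a hedged sketch using Playwright's bundled device descriptors; the device names are examples, and Internet Explorer cannot be scripted this way, so it still needs manual or third-party-grid coverage.

```typescript
import { chromium, firefox, webkit, devices } from 'playwright';

const APP_URL = 'https://example.com'; // placeholder: your application
const engines = { chromium, firefox, webkit };
const profiles = ['iPhone 13', 'Pixel 5']; // example device descriptors

(async () => {
  for (const [engineName, engine] of Object.entries(engines)) {
    const browser = await engine.launch();

    // A plain desktop context for every engine.
    const desktop = await (await browser.newContext()).newPage();
    await desktop.goto(APP_URL);
    await desktop.screenshot({ path: `${engineName}-desktop.png` });

    // Emulated devices with varying capabilities.
    for (const profileName of profiles) {
      const descriptor = devices[profileName];
      // Firefox does not support mobile emulation; skip those combinations.
      if (engineName === 'firefox' && descriptor.isMobile) continue;
      const context = await browser.newContext({ ...descriptor });
      const page = await context.newPage();
      await page.goto(APP_URL);
      await page.screenshot({ path: `${engineName}-${profileName}.png` });
      await context.close();
    }
    await browser.close();
  }
})();
```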

STEP 4:

Do the same tests on the medium-risk browsers you have experience with, and hopefully there are some. Test the ones that make sense, but avoid trying to test all 100 or so browsers out there.

STEP 5:

If you found any bugs and you fixed them, iterate once more through steps 1 to 4. Continue doing so until no more bugs can be found.

When using this strategy, it is rare that you will have to do more than two iterations. Although the idea of looping through a series of steps can be off-putting, this strategy is actually much more efficient than testing in either descending or ascending order of browser popularity.

