Opinion: Writing automated system tests is a great way to test a website / webapp.

A few weeks ago I wrote a post explaining how running automated tests does not perform the same function as (good) manual testing. In a recent project, I was reminded that although running the tests is not deep testing, writing them is.

I am very fortunate to work with developers who are hot on quality and focused on writing appropriate unit tests for all their code, but what is the most cost-effective way to gain confidence that the whole thing hangs together and performs as expected?

In this case:

  • The project was an extension to an existing single-page web application with a number of Selenium tests running against every build.
  • The acceptance criteria were clearly defined as a collection of user journeys.
  • I had clear wireframes from our UI expert and enough technical documentation to understand exactly what was required.
  • The timescale for production ready code was extremely tight!

It was clear to me that there was never going to be time to test everything manually at the end, especially checking for regressions after every minor bug fix and tweak. Equally, with many integrations involved in a complex system, there would not be time to write full integration tests at every level before the fully working demonstration the stakeholders expected.

I felt that quality Selenium-based user interface system tests were going to be my best bet here: as each stage of the user journey is completed, the UI test is hopefully ready for any tweaking required to turn that part on, and then we can check for regressions continually. Fortunately, I know a thing or two about writing maintainable tests using Selenium. (Check out my earlier blog posts and my tutorial GitHub project if you want some tips and ideas on how to use the PageFactory to build a test framework in Java.)

The wireframes and documentation meant that I wasn’t having to wait for every step of the UI to be checked in before I could start writing the tests to interact with them.

Build approach:

Start with the test classes.

Use the user journeys to script out the process of each test, e.g. (using C# / NUnit):

        [Test]
        public void CheckThatICanCompleteScript1()
        {
            //set up objects and test data
            ...
            //

            //act
            StartSteps.OpenPage()
                .Login(username, password)
                .SelectNewFromMenu();

            //assert
            NewItemSteps.GetPageTitle().Should().......;

            //next action

            //next assertion 
            ...repeated until user journey script is complete.
        }

These will not be complete or correct but give me a structure to work towards.

Then I use my wireframes to plan out my Steps classes. Each new ‘view’ in the browser requires its own steps class that uses composition to model what we see using Page and Block classes.

The methods in a Steps class call out to the relevant Page or Block to do something (e.g. ClickCallToAction()) or return data (e.g. GetTitleText()).

If the action method doesn’t change the view, it should return a self reference.

If the action method changes the view, it should return the Steps class for the new view.

A getter method of course returns the data instead. Note: an alternative is to perform the assertion in the Steps class (e.g. AssertThatTitleIsCorrect()) and return a self reference for chaining. For now at least I am going with always passing the data back and performing the assertion within the test class.
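To make the return-type convention concrete, here is a minimal sketch of a Steps class in Java (the language of my tutorial project). The Selenium layer is replaced with a fake Page class holding canned data, and all class and method names are illustrative, not from the real project:

```java
// Fake Page class standing in for the Selenium-backed one; the canned
// title text takes the place of data read from the DOM.
class NewItemPage {
    void typeIntoTitleField(String title) { /* driver interaction would go here */ }
    void clickCallToAction() { /* driver interaction would go here */ }
    String getTitleText() { return "New Item"; }
}

// Steps class for the view reached after saving (empty stub for the sketch).
class SummarySteps { }

class NewItemSteps {
    private final NewItemPage page = new NewItemPage();

    // Action that stays on the same view: return a self reference for chaining.
    NewItemSteps enterTitle(String title) {
        page.typeIntoTitleField(title);
        return this;
    }

    // Action that changes the view: return the Steps class for the new view.
    SummarySteps saveItem() {
        page.clickCallToAction();
        return new SummarySteps();
    }

    // Getter: hand the data back so the test class performs the assertion.
    String getPageTitle() {
        return page.getTitleText();
    }
}
```

The test class then reads as a chain of actions punctuated by assertions, exactly as in the C# skeleton above.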

I can then frame out the Page and Block classes from the wireframes, although until it’s written I cannot choose the selectors I will use.

As each new view becomes available I can then choose my selectors and fix any troublesome interactions.
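Keeping every selector inside its Page or Block class is what makes a markup change a one-class fix later on. A hypothetical Java sketch, with the WebDriver lookup replaced by a map so the structure stands alone (the selector strings are invented):

```java
import java.util.Map;

// Selectors live as constants in exactly one place, so when the markup
// changes only this class needs editing. The map stands in for
// driver.findElement(By.cssSelector(...)) calls in a real Block class.
class NewItemBlock {
    private static final String TITLE_SELECTOR = "h1.page-title";
    private static final String SAVE_BUTTON_SELECTOR = "button.call-to-action";

    // Fake rendered page; a real implementation would hold a WebDriver.
    private final Map<String, String> dom = Map.of(
            TITLE_SELECTOR, "New Item",
            SAVE_BUTTON_SELECTOR, "Save");

    String getTitleText() { return dom.get(TITLE_SELECTOR); }
    String getSaveButtonLabel() { return dom.get(SAVE_BUTTON_SELECTOR); }
}
```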

Outcomes:

  1. It worked: Having the confidence that you have tested the acceptance criteria immediately before you show it to clients is a great feeling. This enabled development to continue right up to the end. I didn’t quite get everything finished, but enough that the amount of additional manual testing for each deployment was manageable.
  2. It was hard: To begin with you are writing in isolation, against something that does not exist yet, so you can’t yet find the problems that take so long to overcome.
  3. The structure meant that breaking changes only broke one place and so were generally quick to fix.
  4. It doesn’t matter whether I can remember the scripts in 6 months time if we need to refactor. They are checked into git and running against every deployment.

On Reflection:

  • It would have been nice to have more API tests as well, but verifying the UI manually would have been impossible.
  • As the deadline approached I found myself fielding more and more questions from the developers and had less time to write more code / tests. This discussion enabled us to complete a lot of even the ‘would be nice’ and lower-priority UI tweaks, but it meant that I was relying absolutely on the automated tests to spot regressions.

Lessons learnt:

  • I probably need another layer of abstraction for ‘Actions.’ Ideally a single method call from the test class completes an action of some kind, but this may actually move through more than one steps class. Login is a great example that typically moves through several different views to complete a single ‘Action’.
  • There is a lot I like about C# compared to Java, however there are two things that I really miss:
  1. Java enums.
  2. The ability to set break points at any call in a fluent interface.
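The ‘Actions’ layer I have in mind would sit on top of the Steps classes, so one call from the test drives a journey that crosses several views. A hypothetical Java sketch of what Login might look like (the view names and flow are invented for illustration; the real Steps classes would drive Selenium):

```java
// Minimal stub Steps classes for the three views the login journey crosses.
class HomeSteps {
    String getWelcomeText() { return "Welcome"; } // canned data for the sketch
}

class LoginSteps {
    HomeSteps submitCredentials(String username, String password) {
        // fill in the form and submit; the view changes, so return the new Steps
        return new HomeSteps();
    }
}

class LandingSteps {
    LoginSteps openLoginForm() {
        // click the login link; the view changes
        return new LoginSteps();
    }
}

class LoginAction {
    // A single 'Action' that moves through Landing -> Login -> Home,
    // hiding the multi-view journey behind one method call.
    static HomeSteps login(String username, String password) {
        return new LandingSteps()
                .openLoginForm()
                .submitCredentials(username, password);
    }
}
```

The test class then calls `LoginAction.login(username, password)` once instead of spelling out three Steps transitions.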

 

So why the title of this post?

Many of the bugs and issues that I found were discovered due to the level of immersion that you get spending time buried in each part of the DOM choosing and testing suitable selectors. Running the tests visually (i.e. not headless) also meant that my “Mark One Eyeball” was tuned into what is normal. Even if my asserts didn’t spot the change, I usually did.

Browser-based UI testing and the Selenium project have been receiving a lot of criticism lately, but when your client is after a website that works as intended, 80% unit test coverage with everything passing means nothing if a problem in the UI prevents you from pressing the right button. Writing these tests ensures that I am testing in depth, and at the end of it all we have a reliable, maintainable test suite.

Don’t think that any amount of UI tests will solve all your testing problems; we have already started to fill in the middle with lower-level integration tests. But writing them forces you to get up close and personal with the UI that you are about to launch on users. You quickly find the pain points and have a lasting set of regression tests to identify when something has been broken.

And in case you missed it before: I am absolutely NOT suggesting only running UI tests. I see unit tests going into the repository frequently and review the code as often as I can. Service-level tests are also essential and WILL be built up, but sometimes writing maintainable UI tests first / in parallel provides the benefit of finding bugs now and regression tests later.

And finally….

Feel free to message me on GitHub or Twitter to let me know how you get on if you use any of my code. It’s nice to know if you found it helpful, and suggestions for improvement are particularly welcome. Java is there for now; when I can get some time I’ll try to see if I can produce something similar in C#.
