Standardised Bug Reporting with SEERS

SEERS

When raising bugs and defects, it is easy to miss out vital pieces of information without a strict policy to adhere to. Often the developers have to go back to the tester who raised the bug and ask exactly what went wrong, what they expected to happen or in which environments the problem occurred. SEERS is a term I have coined for a standardised set of criteria that every bug and defect report should meet. SEERS stands for:

Screenshot

There should always be a screenshot of the problem (unless it is a non-visual bug). Where possible, the problem area should be highlighted with an obvious visual outline. As noted by Alun Coppack in the comments, if the error is behavioural we should annotate the screenshot to describe the problem – and if this is not practicable, a screencast of the problem leaves no margin for miscommunication.

Environment

There should always be details of the environment(s) (browser, operating system, screen resolution, etc) that the problem occurs in.

Expected/Actual Behaviour

There should always be details on the expected behaviour and the actual behaviour.

Reproduction

There should always be accurate details on how to consistently reproduce the bug. As noted by Alun (again!), some errors can be temporal and the testers should be aware of this and take note of the time/date before they try to recreate the error.

Severity

There should always be details on the severity of the bug.
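
Purely as an illustration, the five criteria could be captured as a simple structure in whatever tracker or tooling you use. This is only a sketch; the field names below are my own and not part of any particular bug tracker.

```typescript
// A hypothetical shape for a SEERS-compliant bug report.
// Field names are illustrative; adapt them to your own tracker.
interface SeersReport {
  screenshot: string;            // path/URL to an annotated screenshot (or screencast)
  environment: {
    browser: string;             // e.g. "IE7"
    operatingSystem: string;     // e.g. "Windows XP"
    screenResolution?: string;   // e.g. "1024x768"
  };
  expectedBehaviour: string;     // what should have happened
  actualBehaviour: string;       // what actually happened
  reproductionSteps: string[];   // accurate, repeatable steps (note the time/date if relevant)
  severity: "Blocker" | "Critical" | "Major" | "Minor" | "Trivial";
}

// Example: the login bug used as the Blocker example later in the post.
const report: SeersReport = {
  screenshot: "screenshots/login-button-annotated.png",
  environment: { browser: "IE7", operatingSystem: "Windows XP" },
  expectedBehaviour: "Clicking 'login' with valid credentials signs the user in.",
  actualBehaviour: "Clicking 'login' does nothing.",
  reproductionSteps: ["Go to the login page", "Enter valid credentials", "Click 'login'"],
  severity: "Blocker",
};
```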

Severity

It is a common complaint that the recording of the severity of a bug or defect is inaccurate. An element being two pixels to the left in IE6 is not a huge problem, but is often rated as Critical. To that end, I list the severities we use below.

Blocker

A blocker means that our entire story or functionality is not working or is unusable, even when following The Happy Path. The Happy Path is the one where the user does exactly what we expect them to and does not stray from the norm at all – e.g. entering the correct login details and clicking “login”. If we do not fix this bug we cannot release. An example is a login box where clicking the login button does nothing.

Critical

A critical bug indicates that functionality is severely impaired. If we do not fix this bug we cannot release. An example is a login box where entering invalid credentials takes the user to an empty page. The Happy Path (entering correct credentials) works fine, but a very likely situation exists which could cause functionality to break.

Major

A major bug indicates that some functionality does not work as expected. Copy may be obscured, or specific situations may cause errors to occur. We should certainly fix this bug, but it would not be impossible to release without it. An example is a login box where entering over 100 characters in the password field causes an error.

Minor

A minor bug indicates some kind of cosmetic issue or unlikely situation that causes an error. We should fix this bug if it is cost effective to do so and other more important tasks do not exist. An example is the login button being aligned incorrectly, or clicking the button 30 times causing an error.

Trivial

A trivial bug indicates a very minor cosmetic issue or very unlikely situation that causes an error. We may decide not to fix this bug. An example is IE6 and Firefox not looking exactly the same, although IE6 looks acceptable and functionality is not impaired.
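
If it helps to keep these definitions to hand in tooling, the “cannot release” rule for Blocker and Critical can be made explicit. Again this is only a sketch and the names are my own:

```typescript
// Illustrative only: the severity levels above and whether they block a release.
type Severity = "Blocker" | "Critical" | "Major" | "Minor" | "Trivial";

const blocksRelease: Record<Severity, boolean> = {
  Blocker: true,   // whole story/functionality unusable, even on The Happy Path
  Critical: true,  // functionality severely impaired
  Major: false,    // should fix, but could release without it
  Minor: false,    // cosmetic or unlikely; fix if cost effective
  Trivial: false,  // very minor; we may decide not to fix
};

// A release is only possible when no open bug is a Blocker or Critical.
function canRelease(openBugSeverities: Severity[]): boolean {
  return !openBugSeverities.some((severity) => blocksRelease[severity]);
}
```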

Terminology

Let’s discuss what actually constitutes a bug or defect, although this is terminology that I use and your mileage may vary.

Bug

When a task is in the current iteration or sprint and we raise an issue against it, this is called a bug.

Defect

When an issue is found that does not relate to any current tasks, this is called a defect.
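
As a trivial sketch (how you determine whether an issue relates to a task in the current sprint depends entirely on your tracker), the distinction amounts to:

```typescript
// Illustrative only: the bug/defect distinction described above.
type IssueKind = "bug" | "defect";

function classifyIssue(relatesToTaskInCurrentSprint: boolean): IssueKind {
  // Raised against a task in the current iteration/sprint: a bug.
  // Raised against nothing currently in flight: a defect.
  return relatesToTaskInCurrentSprint ? "bug" : "defect";
}
```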

Graded Browsers

Finally, here is a list of the browsers we support and their grades. We expect Grade A browsers to have complete functionality and to look close to perfect (as much as is reasonable or practicable). Grade B browsers should look acceptable and work as expected (although potentially with less of the flashy but essentially unnecessary functionality). Grade C browsers should allow the user to read the content and accomplish their tasks. It is important to mention that we do expect content to be accessible in absolutely any environment; it is the aesthetics and nice-to-have functionality that we are less concerned with.

Grade A

These browsers should work perfectly, look good and perform well.

  • Windows XP/Vista
    • IE8
    • IE7
    • Firefox 3.5
    • Opera (latest version)
    • Safari (latest version)
    • Chrome (latest version)
  • Mac OSX 10.5/10.6
    • Safari (latest version)
    • Firefox 3.5
    • Opera (latest version)

Grade B

These browsers should work acceptably (any unsupported functionality, e.g. JS animations or Flash, should be dealt with gracefully), look good (any unsupported rendering features, e.g. rounded corners, alpha transparency or drop shadows, should degrade gracefully) and perform reasonably well; a sketch of this kind of graceful degradation follows the list below.

  • Windows XP/Vista
    • IE6
    • Firefox 3
  • Mac OSX 10.5/10.6
    • Safari (previous version)
    • Firefox 3
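
To show what I mean by dealing with unsupported functionality gracefully, here is a minimal feature-detection sketch. The specific API is just a stand-in; the point is that the nice-to-have behaviour is only attempted when the browser supports it, and the content still works either way.

```typescript
// Illustrative only: treat flashy behaviour as an enhancement and check for
// support first, so Grade B/C browsers degrade gracefully instead of erroring.
function fadeInOrJustShow(el: HTMLElement): void {
  if (typeof el.animate === "function") {
    // Supported: fade the element in (nice to have, not essential).
    el.animate([{ opacity: 0 }, { opacity: 1 }], { duration: 300 });
  } else {
    // Not supported: no animation, but the content still appears.
    el.style.opacity = "1";
  }
}
```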

Grade C

All other browsers should work acceptably (degrading gracefully where appropriate), have no obscured or illegible content and perform acceptably.
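
If you want the support matrix next to your test plans or triage scripts, here is one possible way (purely illustrative, names my own) to encode the grades above, with anything unlisted falling back to Grade C:

```typescript
// Illustrative only: the graded browser support list above as data.
type Grade = "A" | "B" | "C";

const gradedBrowsers: Record<"A" | "B", string[]> = {
  A: [
    "Windows XP/Vista: IE8", "Windows XP/Vista: IE7", "Windows XP/Vista: Firefox 3.5",
    "Windows XP/Vista: Opera (latest)", "Windows XP/Vista: Safari (latest)",
    "Windows XP/Vista: Chrome (latest)",
    "Mac OSX 10.5/10.6: Safari (latest)", "Mac OSX 10.5/10.6: Firefox 3.5",
    "Mac OSX 10.5/10.6: Opera (latest)",
  ],
  B: [
    "Windows XP/Vista: IE6", "Windows XP/Vista: Firefox 3",
    "Mac OSX 10.5/10.6: Safari (previous)", "Mac OSX 10.5/10.6: Firefox 3",
  ],
};

// Anything not listed is Grade C: content must be readable and tasks achievable,
// even if aesthetics and nice-to-have functionality are missing.
function gradeFor(platformAndBrowser: string): Grade {
  if (gradedBrowsers.A.includes(platformAndBrowser)) return "A";
  if (gradedBrowsers.B.includes(platformAndBrowser)) return "B";
  return "C";
}
```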

Notes

It is important to note that in my list IE6 has been relegated to Grade B (which is at odds with Yahoo!’s graded browser support, which I otherwise use as a base for my list). This is simply due to the development effort required to class it as Grade A. Where it is easy to do, I still recommend striving for good aesthetics in IE6 – but (for any version of IE) rounded corners and drop shadows are often simply not cost-effective. I would implement these with CSS3, and argue that the time saved (and the performance increase) by doing things The Right Way gives us enough time to develop new features. I obviously do not expect testers to test every single Grade C browser in existence, but simply to be aware of what does and does not class as a bug or defect.

Remember, your users do not compare browsers side by side like we do. They have no idea there are inconsistencies, and more often than not simply wouldn’t care even if they did. It is unfortunately impossible for a website to look the same in every browser or environment (without just using a huge image map, and even then a text-only browser like Lynx would not be able to render it) and there is nothing to gain in trying needlessly to overcome this.

4 thoughts on “Standardised Bug Reporting with SEERS”

  1. Sounds like a great idea. My only comments are that the addition of screencasts can really make it easier to understand behavioural bugs (rather than just screenshots), and that severity can be subjective unless the people creating the bugs have a good idea of what the business goals and timelines are for a release/project.

    Also important to capture is the time that the bug occurred (to the second if possible) especially if this differs from the time that the issue is raised as this information can aid in server side debugging at a later time.

    1. Whilst I agree in principle on the addition of screencasts, there is a clear cost in time and effort in producing them, and it is rarely beneficial. Often, an annotated screenshot will be enough to get this type of point across. That said, I have included it under the Screenshot section, as there will be occurrences where it is the clearest way to demonstrate a bug. Do you recommend a piece of software for capturing these? I have not seen anything that lets you annotate them in a clear way.

      Capturing the time is a good idea too, but again something that is only going to be useful on rare occasions. Still, if the error only occurs at a specific time/date/interval, it is going to be very hard to recreate consistently and so I agree it would help.

      The severity can certainly be subjective, but including clear (as much as is possible) definitions and examples as I have done helps to stop every bug being raised as Blocker or Critical, which is often the case.

      Where I have implemented SEERS, it is because the testing team do not include enough accurate details in their bug reports, and so I have tried very hard to keep it down to a small list of guidelines that are easy to follow, and therefore likely to be followed. While I agree with both of your points, and have added them to the post, I do not think they merit their own heading. Also, it would spoil my acronym. :)

      I hope this is of help to you on your next project. Thanks for the comments, Alun!

  2. Oh, I don’t think I explained the bit about time correctly. The reason I find time important sometimes is that if it is a server side bug I can then look for a stacktrace or load figures in the server logs to try and determine what the issue is.

    I don’t know the screencasting tool that has been used in the past, I am afraid, as I was receiving, rather than raising, the bugs. This one sounds ok though: http://camstudio.org
