Testpad's new UI sees 1m results in 3 months

It's now been 3 months since Testpad's UI saw a big upgrade in look and feel, and, judging from customer emails (keep 'em coming), Testpad has never been so popular.


And it's not just the positive feedback - since June, the new UI has been used to record over 1,000,000 results, taking Testpad's total to over 15m. Sadly for the dev teams out there, not all of those have been 'passes', but at least the bugs are being found in testers' rather than customers' hands!

Looking ahead, Testpad's next big upgrade over the coming months will be a much-requested API. The API work is still early days, so don't hold your breath just yet. That said, please do get in touch if you have specific requirements or ideas for how you'd use an API. The more input at this early stage, the more useful it will be.

If you're new to Testpad, please don't hesitate to get in touch for help... video demos, importing existing tests, usage patterns etc. Just email support@ontestpad.com.

Testpad Tips - don't make Scripts too short

This is the third post on usage tips for Testpad and is about getting the most from a Script. Actually there are lots of aspects to using Scripts well, but this post is all about avoiding a common mistake: making scripts too short.

It's very tempting to treat a script as a single test case, and thus put as few as 5-10 rows in it that only collect 1 or 2 meaningful results. While this works, it's not what the UI was designed for, and navigating back and forth from the project view quickly becomes annoying.

Aside: it is also possible to make scripts too long, but a) this is way less common, and b) Testpad tries to protect you from yourself by limiting scripts to 2000 rows. In fact, too long only becomes a problem if you've got 1000+ rows combined with 100+ columns, but see the first post in this series for tips on fewer run columns.


Before we get into it, here's a screenshot of the top of a well-formatted script. This one happens to be test-case inspired, in terms of specifying steps and expected outcomes separately, but the ideas apply equally to exploratory-style guides and simple checklists of terse test prompts.



The formatting in this example is pure convention. There are lots of possibilities, but the general aim is to make good use of the hierarchical structure and keep each cell to a single line of text.
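
To make that concrete, here's a rough sketch of the kind of layout this convention produces (the content is invented for illustration, it's not the script from the screenshot):

    Login
        New user registration
            enter a valid email address
                expect: confirmation email arrives
            enter an already-registered email address
                expect: helpful "already registered" message
        Password reset
            request a reset link
                expect: reset email arrives within a minute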

This example also makes use of Link Shortcodes defined in Project Settings, for convenient linking to e.g. User Story documents.


Short Scripts

The main issue with short scripts is missing out on the power and efficiency of the script-editing page, with its progress bars, indentation structure, inline styling, keyboard-driven UI and so on. It also overloads the Project view: any reasonably sized project will have several hundred cases to test, and hundreds of scripts in one folder is just annoying to manage.


What does a short script look like?



Or worse...



When instead you can have lots of tests in the one script:
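
For example (an invented sketch standing in for the screenshot), several would-be short scripts can live happily as titled sections of one script:

    Search
        finds an exact match
        finds partial matches
        empty search shows a friendly prompt
    Filters
        filter by date range
        filter by owner
        filters combine correctly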



Which then lets you do things like collapse the rows for convenient overviews of your test coverage (collapse buttons hiding at the very bottom of the browser window):



Or even:



And here's another example in the BDD/Gherkin syntax format, which Testpad will auto-highlight when it spots rows starting with keywords like Given, When and Then:
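
Something along these lines (again invented content in place of the screenshot):

    Scenario: successful login
        Given a registered user
        When they log in with the correct password
        Then they see their dashboard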




Refactoring: Joining Scripts Together


Lastly then, if you find yourself with lots of short scripts and want to combine them, here are two ways of getting that done...

COPY/PASTE BETWEEN TABS

Open Script A and Script B in separate tabs in the same browser.

In Script A, make a multi-row selection and press Ctrl-C (you should get a message saying "N tests copied").

Switch tabs to Script B, select the last row (but don't focus it - you don't want the text-edit cursor visible), and press Ctrl-V. You should get a message saying "N tests added".

If successful, head back to the Project view to delete Script A (right click on its name).


EXPORT/IMPORT

In Script A, use the Script Menu → Export → Tests As Raw Text.

Select and Copy the text in the text export dialog box.

Navigate to Script B, and use the Edit Menu → Import.

Paste in the copied export from script A and click on Import.

The tests from Script A should have been added to the bottom of Script B.


Refactoring: Splitting Scripts Apart


Similarly, if you have a script that's too long and haven't started collecting results yet, you can use either of the above methods to copy a subset of tests out of one Script and paste them into a new (empty) Script. Then head back to the long script and delete the rows you just copied out. Be careful though: if you delete the wrong rows, you've only got Undo while you're on the page... as soon as you navigate away or reload, the Undo history is lost.




As usual, don't hesitate to email support@ontestpad.com if you have any questions, or even just to discuss your own test formatting in Testpad.

Testpad Tips - make copies to go from release to release

This is the second post on usage tips for Testpad and is about a great way to organise tests when going from release to release. The first post was about using retesting to go from build to build within a release.


Use a Folder per Release

The simplest way to model releases is to use a whole new folder for each new release, and to populate the new folder with copies of the scripts you used last time.

When scripts are copied, all the tests and run headers are copied, but the results are left behind.

A copied script is therefore like an on-the-fly template for the next release.


Example first, details below

Suppose you have a product called SlicedBread and you've just finished testing v1.0.



Prepare for v1.1 by right-clicking to Duplicate the folder.



Finally, rename the new folder to e.g. SlicedBread 1.1 and it's ready for testing the next best thing since SlicedBread.





In a bit more detail


// Copying the whole folder

You can copy a whole folder with a right-click on the folder name, and selecting the Duplicate option. Or you can drag'n'drop with the CTRL key held down. Either way, you get a new folder with all the contents from the previous folder copied.

// Or copy the individual scripts

Alternatively, make an empty new folder for the new release, and selectively drag'n'drop+CTRL each script you want a copy of from the previous release.

This is useful if you only want a subset of the scripts from last time. If you want all of them, it's going to be quicker to just copy the whole folder as above.

// Or copy templates instead

Much more complex projects, typically those involving custom configurations of components that differ from release to release, might instead populate a new release folder with scripts taken from a Library of Templates. Templates are nearly the same as scripts, except Templates never collect results, and only exist to be copied from when making new scripts.

Use templates (or folders of templates) by dragging and dropping them onto the name of the Project (over on the left) that you want the template copied into. Then go to that Project and move the new scripts (which are created at the top of the project) into the right folder.


Keeping a record of the previous release

The whole point of making copies for new releases is to leave intact the tests and results from last time.

Tests need to be updated in step with each evolution of a product, and it would be a shame to lose the consistent record of what was tested and with what results if the old scripts were then edited to add new features etc.

Instead, by working on new copies, the old tests and results are left alone, and the new copies can be edited as much as required to bring them up to date for the latest version of the product.



Archiving old Releases

And to keep the interface tidy, it helps to archive away old releases once they're only needed for their reports.

Archive a folder by right-clicking on its name and selecting the Archive option.

Find archived scripts and folders via the Archived Scripts link at the bottom of the list of scripts and folders for a project.

This is archiving within a project, and is most relevant for archiving old releases. It's not to be confused with archiving a whole project (right-click on the project name over on the left and select Archive) for when you don't need a whole project to be around anymore.



Please get in touch if you need any help with how to apply these ideas to your projects... just email stef@ontestpad.com.

Testpad Tips - use the Retest feature to go from build to build

Testpad has a very flexible model for writing test plans (Scripts) which lends itself to pretty much however you want to approach and organise your testing. However, with this flexibility comes the power to go a bit wrong too... it's perfectly possible to build inefficient plans and lose out on a lot of the value Testpad has to offer!

This post then is the first in a series of three providing tips on getting the most from Testpad. In time, Testpad's interface will itself be improved to nudge users in these "more optimal" directions, but until then, here are some hints for great usage patterns.

These tips also have a side-benefit of keeping content to a sensible size, both for manageability by the user and for browser performance when rendering the content on screen. That said, you have to build an enormous script (think 1000+ rows with 100+ columns) before browser lag becomes annoying.

Spoiler alert... if you don't have time to read all the detail, the top three usage tips boil down to:

  • using the Retest feature for retesting new builds as part of the same release (the subject of this post)
  • copying Scripts (or whole Folders) when preparing for new releases - see the second post
  • and getting Script length right; combining scripts that are too short (very common) or splitting apart truly massive scripts (less common) - see the third post


Retesting New Builds

First up is the idea of "retesting" test runs (columns). When a Test Run in Testpad is completed, whether marked as such manually in the Test Run Details dialog, or through setting results for every test, the Test Run Details dialog offers the "Start a Retest" button.

Retests are intended for retesting e.g. a new build. You've run through your tests a first time and found a number of test fails. The developers have fixed these and issued a new build that needs another test. So you come back to Testpad to make another run through your tests.

The thing not to do at this point is click on "New Test Run" to make a new column, additional to the columns already present. This is the path to the dark side and will lead to an ever accumulating number of columns, eventually slowing down the browser's responsiveness as you get north of 100 columns (sooner if you have >1000 rows).

Instead, you want to be using the "Start a Retest" button, offered in the Test Run Details dialog for runs marked Completed.


A Retest makes a new column that takes the place of the old column. To start with, the old column is displayed beside it, but greyed out. After a page reload, or when you next return to the script, the old column is not displayed at all. (You can get old columns back on screen with a right-click on the run header, selecting "Show Old Runs").

Retests are numbered like version numbers. Test Run 1.1 is the first retest of Test Run 1. Test Run 2.4 is the fourth retest of Test Run 2, and so on.

You can optionally inherit the results from last time. Inherited results are shown slightly lighter than new results and are there if you want to e.g. only retest the problems from last time.


A Retest also takes over the contribution to the progress statistics for the script. If you keep making new test runs (the bad way!), the progress statistics are always the sum of all runs ever conducted, and so can never approach a 100% pass rate with successive new builds. But use retests, and each retest can iterate the results for that run and allow them to reach 100% when everything is passing.



That's it for tip #1. Use the Retest feature when retesting new builds and don't keep making more and more columns, stretching the page off to the right.

Any questions, please email stef@ontestpad.com - always keen to help you get the most out of Testpad.

New Release: Testpad 4

New Look and Feel

Testpad just got a major update to its look and feel. The whole design of the app, and the branding of the marketing website, have been updated with a fresh look.


The functionality remains nearly identical, with only a few menu options moving around. Everything you could do before, you should still be able to do in the new version.


New Pricing

At the same time, Testpad also updated its pricing. This is the first time prices have been changed (yes, raised) since Testpad launched in 2011, so a change was well overdue. The four existing plans have been replaced with three user-limited plans (for up to 3, 10 and 25 users respectively) along with a new kind of bespoke plan for enterprise customers.

Existing customers who were paying automatically either monthly or annually may continue on their current plans at their current pricing.

If you have any questions, or find problems with the new version, please don't hesitate to contact stef@ontestpad.com.

Writing BDD Test Scenarios with Gherkin Syntax

Testpad has built-in syntax highlighting for writing tests in the "Given, When, Then" style.

[EDIT: Screenshots pre-date the big UI update in June 2018]



Behavior-driven development (BDD)

Behavior Driven Development is a software development process based around the desired behaviors of a system, providing a framework for collaboration that is both precise enough for developers/QA and human-readable enough for business stakeholders and non-technical team members.

Requirements are framed as user-focused scenarios (effectively acceptance criteria), composed in plain English with a descriptive grammar in the form: Given [some initial conditions], when [something happens], then [something should happen].

Scenarios often use one or more concrete example values to help define the behavior.
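
For example (an invented scenario), concrete values make the expected behavior unambiguous:

    Given a basket containing 3 loaves at $2.50 each
    When a 10% discount code is applied
    Then the basket total should be $6.75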


BDD Automation frameworks, e.g. Cucumber, JBehave

The controlled structure of BDD requirements lends itself to test automation. Frameworks like Cucumber and JBehave start with Scenarios defined in the plain "business-readable" language and then structure the test code in and around them.

As the developers of Testpad, a tool to help you manage your manual testing, we perhaps surprisingly promote the idea of automating as much of your testing as is humanly/reasonably possible. If you're not automating any testing yet, stop reading and get on it!

But you can't automate everything, and probably have lots of tests that you haven't got around to automating yet... in which case, despite wanting to automate everything, you have plenty of manual BDD testing to organise. Which leads us nicely onto the new support in Testpad for writing test plans as collections of scenarios in the Gherkin syntax...


Gherkin syntax - Given/When/Then - a business-readable domain-specific language

The Gherkin language is a formalisation of the Given/When/Then style of test definitions, as used most prominently by the Cucumber automation framework amongst others.

Testpad now automatically looks for this formatting and colors the keywords if it finds them. If Testpad doesn't recognise your way of formatting Given/When/Then, you can force the syntax highlighting ON. Equally, if Testpad thinks you're using Gherkin when you're not, you can force it OFF.


Formatting options to suit your style of Gherkin Syntax

By default, every row in Testpad that isn't a title row is a "test" that will collect a result, e.g. Pass, Fail, Blocked etc., along with optional Comments, Screenshots and Bug Numbers.

When importing (or writing) BDD tests, this means rows that begin with GIVEN and WHEN will also be regarded as tests looking for a result:


For lots of customers, this is just fine, and recording a "Pass" against a GIVEN statement can just be interpreted as "yep, set that up, no problems", and a "Pass" against a WHEN statement can mean "yep, made that happen, no problems".

For other customers, there is a preference to only record results against the THEN statements that define what should (and therefore might not) happen. To this end, Testpad offers a couple of options:

  • Use the new comment prefix -- which acts just like the // comment prefix, making a row a non-test row, except that the -- characters are hidden for a prettier display (see the sketch after this list)
  • Use cascaded indentation to make title rows out of every non-test statement
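
As a sketch of the first option (invented content), prefixing the GIVEN and WHEN rows with -- leaves just the THEN row collecting a result:

    Scenario: successful login
        -- Given a registered user
        -- When they log in with the correct password
        Then they see their dashboard     (the only row that collects a Pass/Fail)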


Combining Exploratory Testing with BDD/Gherkin

Whilst Testpad's free-form layout can be used for almost any test process, Testpad excels in writing plans that are guides (whether high-level or detailed) for Exploratory Testing. The idea of exploratory testing is that you don't follow prescriptive instructions for what to test, but instead leave the human brain free to be inquisitive and tenacious in hunting down unexpected behavior. Test plans for exploratory testing are therefore "guides"; checklists of features that you don't want to forget to take a good look at.

BDD/Gherkin tests, on the other hand, are more prescriptive, defining in significant detail what should happen when this or that occurs. This is of course great for Acceptance Testing; proving that the stated requirements have been met. But on its own, this is less useful for building confidence that the system is bug-free.

So why not have your cake and eat it?

Copy/paste your Gherkin tests into Testpad, tweak the formatting if required, and then extend the tests with ideas, details and edge-cases to explore in and around the given scenario.


Test plans in Testpad are a free-form checklist document. So whilst you can structure your requirements using Gherkin syntax, there's nothing to stop you adding further lists (sub-checklists) to each BDD Scenario.
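
For example (invented once more), a scenario extended with exploratory prompts might read:

    Scenario: successful login
        Given a registered user
        When they log in with the correct password
        Then they see their dashboard
            try: password with leading/trailing whitespace
            try: browser back button immediately after login
            try: being logged in from two tabs at once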

If bugs crop up in the field after Release 1.0, go back to your Scripts (checklists) in Testpad and add tests/ideas/notes to the relevant scenarios to protect against regressions next time.


Any problems, questions or further suggestions to improve these ideas, please email stef@ontestpad.com.

Folders for Simpler Release Management

With the recent success of the new folders feature, here's a usage pattern that lots of customers have adopted to handle releases. In essence, this pattern avoids using Templates and instead uses the previous release as an on-the-fly template for the new one:

[EDIT: Screenshots pre-date the big UI update in June 2018]


    Duplicate the previous release

  1. Right-click on the folder for the previous release and select Duplicate
  2. Edit the new folder name to reflect the new release version number

    Update the tests

  3. Edit the new copies of the Scripts to catch up with the latest features in the product; adding new Scripts as required, and deleting any unneeded Scripts

    Archive the old release

  4. (optional, depending on how tidy you like your Project view) Right-click the old release folder and select Archive; this puts the old folder in the Project's Archive tab. Note that this is different to archiving a whole Project, which you do by right-clicking on the Project's name in the list on the left.

    Do the testing

  5. When the product is ready for testing, make the first test runs for the release within each Script in the new release folder, collecting lots of passes (hopefully) and a few fails (inevitable)

    Share the progress

  6. Start sharing the folder report with stakeholders: from the Project view, right-click on the folder name and select View Report. This opens the report in a new window. The contents and verbosity of the report can be configured using the button in the top-right corner, which is also where you can find the Enable Sharing button to get a share link. Alternatively, the whole report is easy to Save-As because it is a single self-contained HTML document that can be archived as-is in your own systems (file sharing, wiki attachments, emails etc).

    Retest new builds

  7. When a new build is coming, create ReTests of the completed Test Runs. Prepare ReTest columns using the prompt when a run is first completed, or by opening the Test Run Details dialog (hover over a Test Run header and click on the Edit icon that appears), sliding the slider to Complete if it's not already, and then clicking "start a retest".
  8. A ReTest is different to simply pressing "new test run" in that the new column takes the place of the previous run, both visually and in the progress stats. Thus you don't clutter the screen with more and more old test runs (unless you like that!), and the progress bar for the Script (and in turn, the Folder it is part of) can approach 100% pass as the fails get re-tested as working. You can get old runs back on the screen with a right-click on the header of the latest test run.
  9. Repeat step 7 as many times as new builds need re-testing to ship the release.

    Ship it!


Any questions, problems or feedback, just email stef@ontestpad.com