Guest Testing - anyone can help when the pressure is on

This article is about a powerful feature in Testpad called "Guest Testing". It does what it says on the tin: you can invite guest users to help make test runs, testing your software as prompted by your test plan and recording results.

Guest testers don't need logins to your Testpad workspace. You invite them by email, and a special link gives them time-limited read-only access to just the test plan (script) that you've invited them to help test.

And with Testpad's near instant learning curve, anyone you do invite can immediately get on with that testing, without formal training or induction into your toolchain.

OK. So why is this so great?

Guest Testing is helpful in many scenarios, each of which deserves its own article (watch this space!), including:

  • getting extra help at release time from outside the test team: developers, managers, business stakeholders, friends...

  • involving clients in testing - fantastic for de-risking delivery and building trust

  • user acceptance testing - involving clients again, but the more formal (or not) version where the test plan is used explicitly to prove (or not!) the software is ready

  • restricting visibility of test plans containing sensitive content

Preparing for Guest Testing

1. Test Runs

Test results in Testpad are recorded in columns to the right of test prompts. Each column records a different set of results against the same prompts (although filters can be used to vary the subset that's actually tested). Columns are mostly used to establish test runs against different environments, such as different browsers for webapps, or different phones for mobile apps. Columns can also be used for the same environment but different testers (you have a higher chance of finding bugs if different brains look at the same test ideas).

2. Assigning to Guest

For guest testing then, you initiate the process by assigning a Test Run (column) to "Guest".

Test Run assignment is managed in the Test Run details dialog, which you open by any of:
- creating a new test run (which auto opens the dialog for the new test run)
- right clicking on the header of a test run column and selecting edit details
- hovering over the header of a test run and clicking on the pencil icon (edit)

Test Run assignment is only available for Test Runs that are still "IN PROGRESS" (as opposed to "COMPLETED"). So if you can't see the assignment drop-down box, that'll be because the test run is already completed! In which case, why would you be trying to assign it to someone else? :)

When the dropdown box is used to select "Guest" as the assignee, a button appears below for composing and sending an email. Use this button to address, and optionally edit the contents of, an invitation email that will give the recipient a link they can click on to make the test run.

3. Email Invitation

The recipient of the email will get a link, like a document sharing link, that gives them restricted access into your Testpad account. Specifically, they get read-only access to that one test plan (script) to record results against that one test run (column).

For practice, you can send guest test invitations to yourself to see what the experience is like. Though note that clicking on a guest link will log you in as Guest... which means logging you out as your normal user.

Top tip: for practicing with guest testing, open the guest test links in a different browser. That way you can stay logged in in your usual browser.

4. Performing guest test runs

This doesn't really need description - which is the whole point!

1. Click on the link you received.
2. Follow the test prompts.
3. Record appropriate comments and the pass/fail status.


Well, OK. The simplicity here will depend on your test prompts and the complexity of your software under test. Testpad does its bit not to add to that complexity, however, reducing the task to prompting for and recording results.

If your prompts are more exploratory in style, i.e. rather than detailed instructions (do this, then that, then expect exactly this and that) they suggest areas to investigate, e.g. unicode input on the login page, or forgotten passwords for returning users, then your Guest Testers will need to engage brain and be imaginative in what they actually test.


Lastly, don't miss that Guest Testing links can be opened on mobile devices as well as desktop devices. On most tablets and mobile phones, you get a mobile version of the testing interface. Being prompted and collecting results on a separate device can be very practical; it saves the tester from constantly switching on their main screen between the app under test and the test management tool.

So that's it for now. Any questions about how any of this works, please don't hesitate to contact

Happy (guest) testing.

PS. Guest Testing is a premium feature available from the Team Plan and upwards. However, it is available in the Free Trial, so give it a go and see for yourself how convenient it is to bring in extra, or even external, help when the pressure is on to make a release.

Testpad's new UI sees 1m results in 3 months

It's now been 3 months since Testpad's UI saw a big upgrade in look and feel, and, judging from customer emails (keep 'em coming), Testpad has never been so popular.

And it's not just the positive feedback - since June, the new UI has been used to record over 1,000,000 results, taking Testpad's total to over 15m. Sadly for the dev teams out there, not all of those have been 'passes', but at least the bugs are being found in testers' rather than customers' hands!

Looking ahead, Testpad's next big upgrade over the coming months will be a much-requested API. The API work is still early days, so don't hold your breath just yet. Although please do get in touch if you have specific requirements or ideas for how you'd use an API. The more input at this early stage, the more useful it will be.

If you're new to Testpad, please don't hesitate to get in touch for help... video demos, importing existing tests, usage patterns etc.., just email

Testpad Tips - don't make Scripts too short

This is the third post on usage tips for Testpad and is about getting the most from a Script. Actually there are lots of aspects to using Scripts well, but this post is all about avoiding a common mistake: making scripts too short.

It's very tempting to treat a script as a single test case, and thus put as few as 5-10 rows in it that only collect 1 or 2 meaningful results. While this works, it's not what the UI was designed for and it will quickly become annoying navigating to and fro from the project view.

Aside: it is also possible to make scripts too long, but a) this is way less common, and b) Testpad tries to protect you from yourself by limiting scripts to 2000 rows. In fact, too long only becomes a problem if you've got 1000+ rows combined with 100+ columns, but see the first post in this series for tips on fewer run columns.

Before we get into it, here's a screenshot of the top of a well-formatted script. This one happens to be test-case inspired in terms of specifying steps and expected outcomes separately, but the ideas apply equally to exploratory style guides and simple checklists of terse test prompts.

The formatting in this example is pure convention. There are lots of possibilities, but the general aim is to make good use of the hierarchical structure and keep each cell to a single row of text.

This example also makes use of Link Shortcodes defined in Project Settings, for convenient linking to e.g. User Story documents.

Short Scripts

The main issue with short scripts is in missing out on the power/efficiency of the script-editing page, with its progress bars, indentation structure, inline styling, keyboard-driven UI etc etc. It also overloads the Project view, as any reasonably sized project will have several hundred cases to test, and hundreds of scripts in one folder is just annoying to manage.

What does a short script look like?

Or worse...

When instead you can have lots of tests in the one script:

Which then lets you do things like collapse the rows for convenient overviews of your test coverage (collapse buttons hiding at the very bottom of the browser window):

Or even:

And here's another example in the BDD/Gherkin syntax format, which Testpad will auto-highlight when it spots rows starting with keywords like Given, When and Then:

Refactoring: Joining Scripts Together

Lastly then, if you find yourself with lots of short scripts and want to combine them, here are two ways of getting that done...


Method 1: copy and paste between browser tabs

1. Open Script A and Script B in separate tabs in the same browser.
2. In Script A, make a multi-row selection and press Ctrl-C (you should get a message saying "N tests copied").
3. Switch tabs to Script B, select (but don't focus on, as in you don't want the text edit cursor visible) the last row, and press Ctrl-V. You should get a message saying "N tests added".
4. If successful, head back to the Project view to delete Script A (right click on its name).

Method 2: export and re-import as raw text

1. In Script A, use the Script Menu → Export → Tests As Raw Text.
2. Select and Copy the text in the text export dialog box.
3. Navigate to Script B, and use the Edit Menu → Import.
4. Paste in the copied export from Script A and click on Import.
5. The tests from Script A should now have been added to the bottom of Script B.

Refactoring: Splitting Scripts Apart

Similarly, if you have a script that's too long and haven't started collecting results yet, you can use either of the above methods to copy a subset of tests out of one Script and paste them into a new (empty) Script. Then head back to the long script and delete the rows you just copied out. Be careful though: if you delete the wrong rows, you've only got Undo while you're on the page... as soon as you navigate away or reload, the Undo history is lost.

As usual, don't hesitate to email if you have any questions, or even just to discuss your own test formatting in Testpad.

Testpad Tips - make copies to go from release to release

This is the second post on usage tips for Testpad and is about a great way to organise tests when going from release to release. The first post was about using retesting to go from build to build within a release.

Use a Folder per Release

The simplest way to model releases is to use a whole new folder for each new release, and to populate the new folder with copies of the scripts you used last time.

When scripts are copied, all the tests and run headers are copied, but the results are left behind.

A copied script is therefore like an on-the-fly template for the next release.

Example first, details below

Suppose you have a product called SlicedBread and you've just finished testing v1.0.

Prepare for v1.1 by right-clicking to Duplicate the folder.

Finally, rename the new folder to e.g. SlicedBread 1.1 and it's ready for testing the next thing since SlicedBread.

In a bit more detail

// Copying the whole folder

You can copy a whole folder with a right-click on the folder name, and selecting the Duplicate option. Or you can drag'n'drop with the CTRL key held down. Either way, you get a new folder with all the contents from the previous folder copied.

// Or copy the individual scripts

Alternatively, make an empty new folder for the new release, and selectively drag'n'drop+CTRL each script you want a copy of from the previous release.

This is useful if you only want a subset of the scripts from last time. If you want all of them, it's going to be quicker to just copy the whole folder as above.

// Or copy templates instead

Some much more complex projects, typically those involving custom configurations of components that are different from release to release, might instead look to populate a new release folder with scripts taken from a Library of Templates. Templates are nearly the same as scripts, except Templates never collect results, and only ever exist to be copied to make a script from.

Use templates (or folders of templates) by dragging and dropping onto the Project Name (over on the left) that you want the template copied into. Then go to that Project and move the new scripts (which are created at the top of the project) into the right folder.

Keeping a record of the previous release

The whole point of making copies for new releases is to leave intact the tests and results from last time.

Tests need to be updated in step with each evolution of a product, and it would be a shame to lose the consistent record of what was tested and with what results if the old scripts were then edited to add new features etc.

Instead, by working on new copies, the old tests and results are left alone, and the new copies can be edited as much as required to bring them up to date for the latest version of the product.

Archiving old Releases

And to keep the interface tidy, it helps to archive away old releases once they're only needed for their reports.

Archive a folder by right-clicking on its name and selecting the Archive option.

Find archived scripts and folders via the Archived Scripts link at the bottom of the list of scripts and folders for a project.

This is archiving within a project, and is most relevant for archiving old releases. It's not to be confused with archiving a whole project (right-click on the project name over on the left and select Archive) for when you don't need a whole project around anymore.

Please get in touch if you need any help with how to apply these ideas to your projects... just email

Testpad Tips - use the Retest feature to go from build to build

Testpad has a very flexible model for writing test plans (Scripts) which lends itself to pretty much however you want to approach and organise your testing. However, with this flexibility comes the power to go a bit wrong too.. it's perfectly possible to build inefficient plans and lose out on a lot of the value Testpad has to offer!

This post then is the first in a series of three providing tips on getting the most from Testpad. In time, Testpad's interface will itself be improved to nudge users in these "more optimal" directions, but until then, here are some hints for great usage patterns.

These tips also have a side-benefit of keeping content to a sensible size, both for manageability by the user and in terms of browser performance rendering the content on the screen. However, that said, you have to build an enormous script (think 1000+ rows with 100+ columns) before browser lag becomes annoying.

Spoiler alert... if you don't have time to read all the detail, the top three usage tips boil down to:

  • using the Retest feature for retesting new builds as part of the same release (the subject of this post)
  • copying Scripts (or whole Folders) when preparing for new releases - see the second post
  • and getting Script length right; combining together scripts that are too short (very common) or splitting apart truly massive scripts (less common) - see the third post

Retesting New Builds

First up is the idea of "retesting" test runs (columns). When a Test Run in Testpad is completed, whether marked as such manually in the Test Run Details dialog, or through setting results for every test, the Test Run Details dialog offers the "Start a Retest" button.

Retests are intended for retesting e.g. a new build. You've run through your tests a first time and found a number of test fails. The developers have fixed these and issued a new build that needs another test. So you come back to Testpad to make another run through your tests.

The thing not to do at this point is click on "New Test Run" to make a new column, additional to the columns already present. This is the path to the dark side and will lead to an ever accumulating number of columns, eventually slowing down the browser's responsiveness as you get north of 100 columns (sooner if you have >1000 rows).

Instead, you want to be using the "Start a Retest" button, offered in the Test Run Details dialog for runs marked Completed.

A Retest makes a new column that takes the place of the old column. To start with, the old column is displayed beside it, but greyed out. After a page reload, or when you next return to the script, the old column is not displayed at all. (You can get old columns back on screen with a right-click on the run header, selecting "Show Old Runs").

Retests are numbered like version numbers. Test Run 1.1 is the first retest of Test Run 1. Test Run 2.4 is the fourth retest of Test Run 2, and so on.

You can optionally inherit the results from last time. Inherited results are shown slightly lighter than new results and are there if you want to e.g. only retest the problems from last time.

A Retest also takes over the contribution to the progress statistics for the script. If you keep making new test runs (the bad way!), the progress statistics are always the sum of all runs ever conducted, and so can never approach a 100% pass rate with successive new builds. But use retests, and each retest can iterate the results for that run and allow them to reach 100% when everything is passing.
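To see why this matters for the numbers, here's a toy model of the difference (illustrative only, not Testpad's internal calculation): a 10-test script tested over three builds, where 2 tests fail on build 1, 1 fails on build 2, and none fail on build 3.

```python
# A 10-test script, tested over three successive builds.
TESTS = 10
passes_per_build = [8, 9, 10]  # build 1, build 2, build 3

# Bad pattern: a brand-new run column per build. Progress statistics
# sum every run ever conducted, so old failures drag the rate down
# forever, even after everything is fixed.
total_results = TESTS * len(passes_per_build)
total_passes = sum(passes_per_build)
accumulating_rate = total_passes / total_results  # 27/30

# Good pattern: each retest replaces the previous run, so the
# statistics reflect only the latest state of the latest build.
retest_rate = passes_per_build[-1] / TESTS        # 10/10

print(f"new-run-per-build pass rate: {accumulating_rate:.0%}")
print(f"retest pass rate:            {retest_rate:.0%}")
```

With new columns every time, the script is stuck at 90% even though the final build passed everything; with retests it reads 100%, which is what you actually want to report.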

That's it for tip #1. Use the Retest feature when retesting new builds and don't keep making more and more columns, stretching the page off to the right.

Any questions, please email - always keen to help you get the most out of Testpad.

New Release: Testpad 4

New Look and Feel

Testpad just got a major update to its look and feel. The whole design of the app, and the branding of the marketing website, has been updated with a fresh look.

The functionality remains nearly identical, with only a few menu options moving around. Everything you could do before you should still be able to do in the new version.

New Pricing

At the same time, Testpad also updated its pricing. This is the first time prices have been changed (yes, raised) since Testpad launched in 2011, so a change was well overdue. The four existing plans have been replaced with three user-limited plans (for up to 3, 10 and 25 users respectively) along with a new kind of bespoke plan for enterprise customers.

Existing customers who were paying automatically either monthly or annually may continue on their current plans at their current pricing.

If you have any questions, or find problems with the new version, please don't hesitate to contact

Writing BDD Test Scenarios with Gherkin Syntax

Testpad has built-in syntax highlighting for writing tests in the "Given, When, Then" style.

[EDIT: Screenshots pre-date the big UI update in June 2018]

Behavior-driven development (BDD)

Behavior Driven Development is a software development process based around the desired behaviors of a system, defining a framework for collaboration that is precise enough for developers/QA yet readable enough for business stakeholders and non-technical team members.

Requirements are framed as user-focused scenarios (effectively acceptance criteria), composed in plain English with a descriptive grammar in the form: Given [some initial conditions], when [something happens], then [something should happen].

Scenarios often use one or more concrete example values to help define the behavior.
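For example (a hypothetical password-reset feature, purely for illustration), a scenario in this style might read:

```gherkin
Scenario: Returning user requests a password reset
  Given a registered user "alice@example.com"
  When she requests a password reset for that address
  Then she should receive a reset email within 5 minutes
  And the reset link should expire after 24 hours
```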

BDD Automation frameworks, e.g. Cucumber, jbehave

The controlled structure of BDD requirements lends itself to test automation. Frameworks like Cucumber and jbehave start with Scenarios defined in the plain "business-readable" language and then structure the test code in and around them.

As the developers of Testpad, a tool to help you manage your manual testing, it's perhaps surprising that we promote the idea of automating as much of your testing as is humanly/reasonably possible. If you're not automating any testing yet, stop reading and get on it!

But you can't automate everything, and probably have lots of tests that you haven't got around to automating yet... in which case, despite wanting to automate everything, you have plenty of manual BDD testing to organise. Which leads us nicely onto the new support in Testpad for writing test plans as collections of scenarios in the Gherkin syntax...

Gherkin syntax - Given/When/Then - a business-readable domain-specific language

The Gherkin language is a formalisation of the Given/When/Then style of test definitions, as used most prominently by the Cucumber automation framework, amongst others.

Testpad now automatically looks for this formatting and colors the keywords if it finds them. If Testpad doesn't recognise your way of formatting Given/When/Then, you can force the syntax highlighting ON. Equally, if Testpad thinks you're using Gherkin when you're not, you can force it OFF.
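As an illustration of the kind of detection involved (a sketch of the general idea only, not Testpad's actual implementation), recognising Gherkin step rows comes down to matching a handful of keywords at the start of a line:

```python
import re

# The keywords Gherkin uses to start step rows.
GHERKIN_KEYWORDS = ("Given", "When", "Then", "And", "But")

# A row "looks like Gherkin" if, after optional indentation, it starts
# with a keyword followed by a word boundary (so "Whenever" won't match).
STEP_PATTERN = re.compile(r"^\s*(" + "|".join(GHERKIN_KEYWORDS) + r")\b")

def gherkin_keyword(row):
    """Return the leading Gherkin keyword of a row, or None."""
    match = STEP_PATTERN.match(row)
    return match.group(1) if match else None

for row in ["Given a registered user",
            "  When she logs in",
            "Whenever this happens"]:
    print(row, "->", gherkin_keyword(row))
```

A highlighter would then style the matched keyword, leaving the rest of the row untouched.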

Formatting options to suit your style of Gherkin Syntax

By default, every row in Testpad that isn't a title row is a "test" that will collect a result (i.e. Pass, Fail, Blocked etc.), along with optional Comments, Screenshots and Bug Numbers.

When importing (or writing) BDD tests, this means rows that begin GIVEN and WHEN will also be regarded as tests looking for a result:

For lots of customers, this is just fine, and recording a "Pass" against a GIVEN statement can just be interpreted as "yep, set that up, no problems", and a "Pass" against a WHEN statement can mean "yep, made that happen, no problems".

For other customers, there is a preference to only record results against the THEN statements that define what should (and therefore might not) happen. To this end, Testpad offers a couple of options:

  • Use the new comment prefix -- which acts just like the // comment prefix, making a row a non-test row, except that the -- characters are hidden for a prettier display
  • Use cascaded indentation to make title rows out of every non-test statement
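As a sketch of what those two options might look like in a script (hypothetical rows, just to illustrate the conventions described above), the comment prefix approach marks the Given/When rows as non-test rows:

```
Scenario: returning user mistypes their password
    -- Given a registered user
    -- When she logs in with an incorrect password
    Then she should see a "wrong email or password" message
```

while cascaded indentation nests each statement under the previous one, turning the Given and When rows into title rows so that only the final Then collects a result:

```
Scenario: returning user mistypes their password
    Given a registered user
        When she logs in with an incorrect password
            Then she should see a "wrong email or password" message
```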

Combining Exploratory Testing with BDD/Gherkin

Whilst Testpad's free-form layout can be used for almost any test process, Testpad excels in writing plans that are guides (whether high-level or detailed) for Exploratory Testing. The idea of exploratory testing is that you don't follow prescriptive instructions for what to test, but instead leave the human brain free to be inquisitive and tenacious in hunting down unexpected behavior. Test plans for exploratory testing are therefore "guides"; checklists of features that you don't want to forget to take a good look at.

BDD/Gherkin tests, on the other hand, are more prescriptive, defining in significant detail what needs to happen given/when/then this or that happens. This is of course great for Acceptance Testing; proving that the stated requirements have been met. But alone, it is less useful for building confidence that the system is bug free.

So why not have your cake and eat it?

Copy/paste your Gherkin tests into Testpad, tweak the formatting if required, and then extend the tests with ideas, details and edge-cases to explore in and around the given scenario.

Test plans in Testpad are a free-form checklist document. So whilst you can structure your requirements using Gherkin syntax, there's nothing to stop you adding further lists (sub-checklists) to each BDD Scenario.

If bugs crop up in the field after Release 1.0, go back to your Scripts (checklists) in Testpad and add tests/ideas/notes to the relevant scenarios to protect against regressions next time.

Any problems, questions or further suggestions to improve these ideas, please email