Historically, the focus of Test Management tools has been on Manual Testing and on features that facilitate managing the activities of a team of testers. The majority of commercial Test Management applications support the following core features:

  • Requirements
  • Test Procedures
  • Planning
  • Test Execution
  • Reporting
  • Bug Tracking Integration

With the popularity of Continuous Integration, Continuous Testing, and DevOps, much of the quality assessment for a software project is now driven by automation.

Automation is changing the process, and software companies now face the challenge of managing workflows for both manual and automated testing. This evolution is disrupting the requirements for comprehensive Test Management.

Automation is disrupting the requirements for comprehensive Test Management

Disconnected Workflows

Automated testing can leverage Version Control platforms and Continuous Integration tools to execute tests daily, or even with every committed software change, potentially resulting in a high volume of test result content. When failures occur, test failure tracking can be used to manage resolution. Variations in result content, whether due to new features, refactoring, or the removal of stale tests, are common and expected, and must be handled by the Test Management tool.

Disconnected workflow

In contrast, manual testing is typically managed by a separate team using its own process and toolset. Specific builds are targeted for testing, producing a lower volume of test result content due to constraints on time and test resources. Discovered failures require a much slower, more involved triage process, often requiring Defects to be opened.

Manual testing is typically managed by a separate team using its own process and toolset

Manual testing will continue to be a very important technique for evaluating the quality of a product or service, even as the emphasis on automated testing grows.

However, differences in the cadence and volume of test content, along with differences in how automated and manual test tools manage failing test cases, have resulted in disconnected teams and a fragmented view of the software product's status.


In modern software development processes, all team members own and contribute to quality - developers, testers, managers, and product owners alike.

All team members must own and contribute to quality - developers, testers, and managers

Testspace 2.0 has been designed specifically to connect the workflows of automated and manual testing, as well as to gather and process test results from both, generating actionable status that can be consumed and understood by all team members. This is realized by providing an Integrated Test Management application built for CI and Manual testing:

  • Supports Continuous Integration Reporting designed for scale (high volumes), including test results, code coverage and static analysis reports, and other custom artifacts
  • Offers a new streamlined approach for Authoring and Running Manual Tests
  • Provides Project Management for test planning, requirements, and other activities based on version control platforms such as GitHub, Bitbucket, and GitLab

Integrated Test Management built for CI and Manual Testing


The Status of the software, regardless of the type of testing, is seamlessly aggregated. All of the metrics (test results, code coverage, defects, requirements, etc.) are collected and used together.

Test Analytics used to recommend process improvements now seamlessly include all of the project's testing content and activity. Projects can better assess whether their testing process adequately prevents defects and change side-effects from slipping into production. Testing processes are also evaluated for efficiency regarding people, time, and equipment.

The Readiness of the software during development is visible and more predictable.

Aggregated reporting

Software quality is the problem we are trying to help solve. It is a challenging issue. Automation in itself does not solve it. Manual testing in itself does not solve it. Humans working better together, including all teams, is the best way to improve quality.

Humans working better together, including all teams, is the best way to improve quality.


For a closer look, refer to the following sections.

Continuous Integration Reporting

As previously mentioned, the primary focus of existing Test Management systems is on executing and managing manual tests. Although most systems provide a means to import automated test results, they are not suited for scaling to the high volume of results produced by automated CI.

Existing applications are not suited for scaling to the high volume of results from automated CI

Continuous Integration Scale

One of the challenges with Continuous Integration is collecting all of the valuable information that is buried in the build system. This becomes even more difficult and time-consuming when multiple build systems are in use, since software status must be extracted from build-centric content spread across different systems.

Testspace has been specifically designed for automation by scaling for high volume and connecting to your automation workflow. It aggregates data from all your build and testing systems and performs analysis on the content, including historical patterns, analytics, etc.

To upload content to the Testspace server only a single command is required:

testspace result*.xml coverage.xml log.txt .. 

Testspace continuously mines the content generated by automation - test results, code coverage, static analysis, logs, source changes, and more - to monitor the health of the software from a single view, which is one of the keys to optimizing release readiness.

All test content is unified independent of the build/CI systems being used.

Add Custom Charts

You can also create your own testing charts from log files and other sources: timing and performance data, resource consumption, and more.

A common practice for many development organizations, including ours, is to collect logs and data during automated testing. Whether this is done to provide context for failures or as secondary quality measurements, ancillary data can be a vital part of any Continuous Integration (CI) process.

Elevate Important Metrics! Include important metrics in the build status check.

There are just two steps required: (1) add the values to a .csv file, and (2) use the Testspace client utility to upload the additional files along with the test results.

testspace result*.xml coverage.xml log.txt{log.csv}
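Step (1) can be sketched as follows; the metric names and values below are made up purely for illustration:

```python
import csv

# Step 1: collect custom metric values to chart alongside test results.
# These names and numbers are hypothetical examples, not real Testspace metrics.
metrics = {"login_time_ms": 420, "peak_memory_mb": 512}

# Write them to a .csv file that will be uploaded with the log
with open("log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for name, value in metrics.items():
        writer.writerow([name, value])

# Step 2 happens outside this script with the Testspace client, e.g.:
#   testspace result*.xml coverage.xml log.txt{log.csv}
```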

Authoring and Running Manual Tests

In Testspace 2.0 we’ve added Manual test execution to our existing CI-centric product. The value: all team members view status the same way, triage failures the same way, and communicate on quality issues using one application.

All team members view status the same way, triage failures the same way, and communicate on quality issues using one application.

Traditional Test Management applications almost universally use a proprietary User Interface to capture the test specifications/instructions. Why does that matter? It immediately makes the testing requirements more difficult to share with other teams.

We’ve taken a different approach: natural language (plain text) provides common understanding, and the same specifications can be automated while remaining maintained!

Natural language used for specifying tests

Because Testspace has built-in support for continuous integration reporting, we felt it important to facilitate automating manual tests – when it makes sense. We decided to standardize on ThoughtWorks’ open source test automation framework called Gauge. Gauge is similar to the Cucumber automation framework, which uses the Gherkin language, but we wanted a less prescribed syntax, at least to start with. Refer to the ThoughtWorks article on why Gauge was built.

Plain text example:

# Clear Channel

## Reset scan
Make sure to let scanner settle for ~ 10 seconds.

  * Scan set to Zero
  * Open gate to 777

If/when a test is selected to be automated, simply implement the corresponding steps using your preferred language plugin, while continuing to push results into Testspace. When automating a test, NO CHANGES to the test instructions are required.
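Conceptually, a step plugin maps each plain-text step to a function. The sketch below is a simplified stand-in for that idea, not the real Gauge API (real Gauge runners, such as the Python plugin, provide their own step decorator):

```python
# Toy registry illustrating how plain-text steps map to implementations.
# Real Gauge language plugins provide this wiring; the names and return
# values here are illustrative only.
STEPS = {}

def step(text):
    """Register a function as the implementation of a plain-text step."""
    def register(fn):
        STEPS[text] = fn
        return fn
    return register

@step("Scan set to Zero")
def reset_scan():
    return "scan reset"

@step("Open gate to 777")
def open_gate():
    return "gate opened"

def run(step_text):
    # Execute the implementation registered for a step from the spec file
    return STEPS[step_text]()
```

Automating the "Reset scan" scenario then amounts to executing `run("Scan set to Zero")` and `run("Open gate to 777")`; the spec text itself never changes.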

The Gauge automation framework is an end-to-end testing tool, supporting features for large test projects that require scaling. The test case specification format, as defined by Gauge, will be supported by Testspace 2.0. For the complete specification refer to the following documentation.

The following examples illustrate the majority of the syntax required to define test cases.


Test Suites are called Specifications in Gauge. A Suite is represented by a single file (i.e. only one Suite can be defined per file). The file contains the specification, i.e. the business test case, and requires an H1 heading. Files must use the .spec or .md file extension.

# My Suite One
Some description content can be optionally added here.. 

Files must be located in the specs folder within the Repo; this keeps them compatible with the Gauge project structure.
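As a sketch, the expected layout can be created like this (the folder name follows the Gauge convention; the suite name and file name are illustrative):

```python
from pathlib import Path

# Spec files live under the "specs" folder, matching the Gauge project layout.
Path("specs").mkdir(exist_ok=True)

# One Suite per file; the H1 line names the Suite.
Path("specs/my_suite_one.spec").write_text(
    "# My Suite One\n"
    "Some description content can be optionally added here.\n"
)
```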


Test Cases are called Scenarios in Gauge. They require an H2 heading.

# My Suite One
This is my example Suite

## My Case One
Some description can optionally go here..


Steps are the actions required to verify correctness and are the main building blocks of a test. Steps are written in markdown as unordered list items.

* My Step One
* My Step Two

And More ..

Other standard test framework features, such as fixtures, parameters, and sharable instructions, are also supported. See the documentation for details.


Tests are checked into a repo and managed just like automated tests. Testspace is used to create Test Runs and execute Suites with its built-in Runner.

Test runs

Project Management

Modern Repos such as GitHub, Bitbucket, and GitLab have implemented project management functionality that’s been adopted by a large percentage of the development community.

Rather than re-invent our own proprietary management features, we’ve taken the unique approach of leveraging theirs.

We decided to use Repo’s project management functionality vs inventing our own

Note that most Test Management applications support 3rd party Issue tracking tools (Jira for example) for defects and requirements. What we have done differently is to fully integrate with modern Repos (see connect with Repos for CI testing) to leverage their built-in functionality for the following:

  • Version control for Tests
  • Test Planning using Project Boards, Issues, and Notes
  • Requirements using Issues
  • Bug Tracking using Issues

Reinventing the wheel


Test Requirements are captured using Repo Issues. A specific label (i.e. test requirements) associated with the Issue is used to identify the type.

Test Requirements

For requirement tracking purposes, the Issues can be referenced in the specifications.


The Test Specifications (aka tests) are maintained in the Code section of the Repo using folders and markdown files. The specifications are based on the test requirements as defined by Issues.

Test Specifications in Repo

Test Plans

Test planning involves creating one or more Test Plans associated with testing a sequence of software builds, typically corresponding to software release milestones.

Test Plan(s) are used when creating a Test Run to:

  • Capture which set of Test Specifications to execute (folders and/or test files)
  • Assign testers (i.e. GitHub users) responsible for executing sections of the plan

A Repo Issue is used to create a Test Plan, using a special label: test plan.

Test plans using Repo

Project Boards are used to assign and track testing activity. Two standard Project Templates are supported:

  • Basic kanban
  • Automated kanban

Repo Projects for managing

Project Boards can also be used to automatically schedule Test Runs using Board Notes.

Get set up in minutes!

Try Testspace risk free for 30 days.

No credit card required. Have questions? Contact us.

Testspace works seamlessly with the vast majority of CI systems and online services, including:

Travis CI Circle CI Jenkins Appveyor Shippable GitLab CI Bitbucket Pipelines