
Automated Testing on the Digital Commons Platform

We test the Digital Commons platform using a number of automated tools.

Author: Marla Laubisch

Benefits of automated testing

Automated testing can run many more tests than manual testing can, which leads to cost savings. It greatly increases test coverage, because it can exercise far more pages and features than manual testers could reach. Because it is consistent and repeatable, it is also more accurate, and its results are reported in a consistent format.

There are other benefits: because automated tests run faster than manual ones, they shorten the feedback cycle for detecting and resolving errors.

Importantly, automation frees manual testers to focus on higher-value tasks, such as exploratory testing, that benefit from human creativity.

  • Cost/speed
  • Test coverage
  • Accuracy
  • Reporting

Test early, test often

We test our software - the Digital Commons platform - multiple times during a development cycle, so that errors are discovered and corrected in the development environment before the code is deployed to a live website. "Shift left" is the practice of moving testing and evaluation earlier in the development process. We begin testing on our local machines, and the majority of testing then takes place on development builds. In this way we employ both unit and integration testing.
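
To make the shift-left idea concrete, here is a minimal sketch of the kind of unit test a developer could run locally before a build is ever created. The cardPadding helper, its file path, and the expected values are invented for the example; they are not the actual Digital Commons code.

```typescript
// Hypothetical unit test, run locally or on a development build ("shift left").
// cardPadding() and its expected values are illustrative only.
import { test, expect } from '@playwright/test';
import { cardPadding } from '../src/theme/cards'; // assumed helper

test('event cards use the mobile padding below the tablet breakpoint', () => {
  expect(cardPadding('event', 375)).toBe('1rem');    // phone-width viewport
  expect(cardPadding('event', 1280)).toBe('1.5rem'); // desktop-width viewport
});
```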

Regression testing

Along with manual testing and specific one-off tests, we run automated regression tests on a suite of test pages that includes all of the content types and paragraph types on the platform. Once the test pages have been created, it is much faster to test them automatically than to expect a human to build and check each of these many combinations. This test suite runs several times throughout a development sprint.
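
As a rough sketch of how such a suite can be driven automatically, the loop below runs the same check against every test page. The URLs are placeholders; the real suite covers every content type and paragraph type on the platform.

```typescript
// Sketch: run one automated check across the whole suite of test pages.
import { test, expect } from '@playwright/test';

const testPages = [
  '/test/article-card-variants',
  '/test/event-card-variants',
  '/test/cta-paragraph-variants',
  // ...one page per content type / paragraph type combination
];

for (const path of testPages) {
  test(`test page renders: ${path}`, async ({ page }) => {
    const response = await page.goto(path);
    expect(response?.ok()).toBeTruthy(); // page loads without a server error
  });
}
```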

Example of Event cards with different options

Regression testing checks whether new development in the software has changed any existing features or appearance. For example, if we increase the padding on an Event card, we might inadvertently affect an Article card.

Visual regression testing captures a baseline image - the "before" snapshot - and then a checkpoint image after new code has been deployed, and it quickly marks any changes between the two. If changes have occurred, we analyze the test results to determine whether each change is intended. We use an application called Applitools, which validates the test visually: any changes are highlighted in magenta. In one test, we discovered that a recent change to padding had hidden the column titles on mobile devices.

Example of padding error on CTA cards; text is being covered up.
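
We use Applitools for this, but as an illustration of the baseline/checkpoint idea, here is a sketch using Playwright's built-in screenshot comparison as a stand-in: the first run records the baseline image, and later runs are diffed against it. The test page URL and threshold are assumptions.

```typescript
// Visual regression sketch (stand-in for Applitools): compare a checkpoint
// screenshot against a stored baseline and fail if they differ.
import { test, expect } from '@playwright/test';

test('event card test page matches the visual baseline', async ({ page }) => {
  await page.goto('/test/event-card-variants');
  await expect(page).toHaveScreenshot('event-card-variants.png', {
    fullPage: true,
    maxDiffPixelRatio: 0.001, // tolerate only tiny rendering differences
  });
});
```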

Functional testing

Functional tests perform user actions rather than capturing a static snapshot. For example, one script performs a sequence of basic actions: it logs in, creates a site page, adds content to it, publishes it, places it in a menu, and finally deletes it. This script runs automatically whenever code is pushed to the development environment. This is an example of continuous integration: the developer's changes are validated by creating a build and running automated tests against that build.
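
For illustration, a functional script along those lines might look like the sketch below, written with Playwright. The selectors, form labels, and URLs are assumptions about the admin interface, not the actual test.

```typescript
// Sketch of a functional test: log in, create a site page, publish it,
// then (not shown here) add it to a menu and delete it to clean up.
import { test, expect } from '@playwright/test';

test('create and publish a site page', async ({ page }) => {
  // Log in with credentials supplied by the CI environment
  await page.goto('/user/login');
  await page.getByLabel('Username').fill(process.env.TEST_USER ?? '');
  await page.getByLabel('Password').fill(process.env.TEST_PASS ?? '');
  await page.getByRole('button', { name: 'Log in' }).click();

  // Create a site page and add content to it
  await page.goto('/node/add/site_page');
  await page.getByLabel('Title').fill('Automated functional test page');
  await page.getByLabel('Body').fill('Content added by the functional test.');

  // Publish and confirm the page rendered
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(
    page.getByRole('heading', { name: 'Automated functional test page' })
  ).toBeVisible();
});
```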

Accessibility testing

We've been increasing our accessibility testing in a number of ways. Monsido, for example, allows our users to find accessibility issues in their content. But we also test the platform itself during the development cycle, against the WCAG 2.2 standards. While this testing can uncover content errors, such as missing alternative text, we focus mostly on the architecture itself: structural elements, such as headings and lists, and ARIA attributes, which provide enhanced semantics and accessibility for web content. We use a tool called Pope Tech for automated accessibility testing. There is some overlap with Monsido, but Pope Tech covers structural and ARIA issues that Monsido does not.
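
As a rough illustration of what such a scan involves, here is a sketch using the open-source axe-core engine with Playwright as a stand-in for Pope Tech. The test page URL and rule tags are assumptions.

```typescript
// Accessibility scan sketch (stand-in for Pope Tech), using axe-core.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('event card test page has no detectable WCAG violations', async ({ page }) => {
  await page.goto('/test/event-card-variants');
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // WCAG A/AA rule sets
    .analyze();
  // Each violation includes the rule, a suggested fix, and the offending nodes.
  expect(results.violations).toEqual([]);
});
```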

We perform accessibility testing on the same suite of test pages that we use for regression testing. Pope Tech generates detailed reports that describe the problem, the suggested solution, and where in the code the issue exists.

When an issue is uncovered, we research it to make sure it is in fact an error. If so, it goes into our development pipeline for a developer to investigate and, if needed, correct.

This is a relatively new tool for us, and currently the tests run automatically on a calendar schedule. The next steps are integrating it into our deployment pipeline, like the functional testing that is kicked off by a build, and into our JIRA workflow so that tickets are created automatically.
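
As a sketch of what that JIRA hand-off could look like, the function below files a ticket through the Jira Cloud REST API when a scan reports a confirmed issue. The project key, issue type, and environment variables are assumptions, not our actual configuration.

```typescript
// Hypothetical sketch: create a JIRA ticket for a confirmed accessibility issue.
// Assumes Node 18+ (built-in fetch) and the Jira Cloud REST API v2.
async function createAccessibilityTicket(summary: string, description: string): Promise<void> {
  const auth = Buffer.from(
    `${process.env.JIRA_USER}:${process.env.JIRA_API_TOKEN}`
  ).toString('base64');

  const response = await fetch(`${process.env.JIRA_BASE_URL}/rest/api/2/issue`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${auth}`,
    },
    body: JSON.stringify({
      fields: {
        project: { key: 'DC' },       // assumed project key
        issuetype: { name: 'Bug' },
        summary,
        description,
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`JIRA ticket creation failed: ${response.status}`);
  }
}
```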