Why and How to Execute Manually-Written Tests with Parasoft Solutions

Writing your own unit tests and incorporating them into nightly/continuous builds is a great thing,
but open-source testing tools can only do so much. For many years, the Parasoft development
team struggled to keep the process for manually-written tests separate from the process for
automatically-generated/executed tests. Eventually, we learned that running manually-written
tests with our automated testing tools (Parasoft Jtest for Java, Parasoft C++test for C/C++,
Parasoft .TEST for .NET languages) was the best way to maintain our own manually-written tests
as the size of our test suite continued to grow. At this point, we switched to running all of our tests
with our automated testing tools and immediately reaped the benefits with our manually-written
tests.
For the Jtest development team, the problem was very simple: developers were not fixing the
tests that failed. Creating tests to go with requirements was not a problem. We wrote JUnit-based
Eclipse plug-in tests for most new features and issues that required code changes. However,
later changes to the code base caused these tests to start failing—either because the original
problems returned, or because the tests were sensitive to side effects.
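As a hypothetical illustration of the side-effect case (the class, test, and values below are invented, not taken from our code base), a test that depends on shared mutable state passes in isolation but fails once another test, or a code change, leaves state behind:

```java
// Invented example: a test that is sensitive to side effects.
// It passes when run alone, but fails if shared state is dirty.
public class SideEffectDemo {
    static int counter = 0; // shared mutable state, never reset between tests

    static void testIncrementFromZero() {
        counter++;
        if (counter != 1) {
            throw new AssertionError("expected 1, got " + counter);
        }
    }

    public static void main(String[] args) {
        counter = 5; // an earlier test (or feature change) left state behind
        try {
            testIncrementFromZero();
            System.out.println("passed");
        } catch (AssertionError e) {
            System.out.println("failure caused by a side effect: " + e.getMessage());
        }
    }
}
```

A failure like this looks identical to a genuine regression in a bare pass/fail report, which is why each one had to be reviewed by hand.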
In each case, someone needed to review the test failure and decide if it was an intended change
in behavior or an unwanted regression error. The open-source plug-in test runner from Eclipse and a few scripts simply did not give us enough visibility into which tests were failing, whose changes caused the failures, why, and since when. Instead, our nightly build often produced a report saying that 150 of our 3,000+ tests had failed. Developers could not tell whether any of their own tests failed without reviewing the full list of failures, one at a time, to decide whether each was something they should fix or a task for someone else. Our system lacked accountability.
Initially, we tried asking all developers to review all test failures, but that took a lot of time and
never became a regular habit. Then, we tried designating one person to review the test failures
and distribute the work. However, that person was not fond of the job because it was tedious and
the distribution of tasks was not well-received by the others.
This past year, we started running our manually-written tests with Jtest, and we immediately
received the following benefits:
• Each test failure was assigned to an individual developer.
• The developer who created an offending test was automatically identified from CVS history.
• Our nightly server uploaded results to TCM so developers could import results into their workspaces without needing to re-run the tests locally.
• After review, if a test failure turned out to be caused by someone else’s changes, the task could be reassigned via a right-click action within Jtest.
• Our nightly server uploaded results to GRS, providing trend graphs of requirements correlated to manually-written tests.
• Jtest sent daily emails to each developer with a personal list of failing tests and a personal trend graph for their own test results.
• Jtest sent daily manager reports with overall test result trends and a breakdown showing which developers had to fix how many tests.
As a result of this process change, test failures are being fixed within days instead of months.
-- Matt Love, Parasoft Software Development Manager
Appendix A: How to run manually-written unit test cases under
Jtest to get extra functionality
Preparation
The team architect performs the following steps:
1. Create a new Test Configuration for executing manually-written test cases. This should
be a custom Test Configuration based on the "Built-in> Run Unit Tests" Test
Configuration. In the Execution> Search tab, modify the settings as needed so that Jtest
will locate and test the manually-written tests.
2. Add the team Test Configuration to TCM.
3. Configure Jtest to apply the designated “Run Unit Tests” Test Configuration to test the
new and modified code in the team’s code base at regular intervals (preferably nightly).
To set up this nightly run, set up a jtestcli run as usual, but customize the localsettings file as follows to generate a customized report specific to user-defined test execution and send it to the developers:
# TCM settings:
tcm.server.enabled=true
tcm.server.name=tcm_server.company.com
report.tag=AEP Jtest – User Test Execution
# Mail settings:
report.mail.enabled=true
report.mail.cc=manager_user_name1;manager_user_name2
report.mail.server=mail.company.com
report.mail.domain=company.com
report.mail.subject=AEP Jtest – User Test Execution
# GRS reporting settings
grs.enabled=true
grs.server=grs_server.company.com
# Scope settings
scope.sourcecontrol=true
scope.author=true
scope.local=true
When the test is run, Jtest will access and run the manually-written test cases and assign any
identified regression test failures to specific developers. It will also do the following:
• Send a summary report to the manager and a report to each developer (with just his or her regression errors).
• Upload the errors to the TCM server.
Usage
Every developer and/or tester performs the following step to add any test cases they have
written:
1. Add the test cases to the location specified in the team’s “Run Unit Tests” Test
Configuration (for example, under the “test” source folder in the same project as the
classes under test).
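For instance, a minimal manually-written test placed under that folder might look like the sketch below (the Calculator class and JUnit-style naming are invented for illustration; it is written as plain Java with explicit checks so the sketch is self-contained):

```java
// Hypothetical test class kept under the project's "test" source folder,
// where the team's "Run Unit Tests" Test Configuration will find it.
public class CalculatorTest {
    // Stand-in for the production method under test.
    static int add(int a, int b) { return a + b; }

    public static void testAdd() {
        if (add(2, 3) != 5) {
            throw new AssertionError("add(2, 3) should be 5");
        }
    }

    public static void main(String[] args) {
        testAdd();
        System.out.println("CalculatorTest passed");
    }
}
```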
Every team developer performs the following steps each morning:
1. Review the emailed report to determine whether his or her changes resulted in any regression failures.
• If no problems were reported, no additional steps are needed.
2. Import “My Recommended Results” into the GUI.
3. Respond to the results as follows:
• If the previous behavior is still the correct behavior, fix the code that was broken.
• If the functionality was modified intentionally, verify the new behavior (use the “Change expected value” Quick Fix).
• If the assertion is checking something that you don't want to check or something that changes over time (e.g., the day of the month), ignore the assertion (use the “Ignore this assertion” Quick Fix).
• If the test case doesn't check something you want to assert, or if the test case is no longer valid, delete the test case (use the “Skip Test Case” Quick Fix). For instance, this might be appropriate if someone previously added validation checks to some methods, but now the input generated is no longer valid.
4. Ensure that modified code and test cases are added to source control.
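The “functionality was modified intentionally” case above can be sketched as follows (the formatter class and its values are invented for illustration; the “Change expected value” Quick Fix automates an edit like this):

```java
// Invented example: the production behavior changed on purpose
// (from "$5" to "$5.00"), so the test's expectation is updated to match.
public class PriceFormatterTest {
    // Stand-in for the class under test, after the intentional change.
    static String format(int dollars) { return "$" + dollars + ".00"; }

    public static void main(String[] args) {
        // Old, now-failing expectation: format(5).equals("$5")
        // Updated expectation, applied after verifying the change was intended:
        if (!"$5.00".equals(format(5))) {
            throw new AssertionError("unexpected format: " + format(5));
        }
        System.out.println("expected value updated; test passes");
    }
}
```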
Appendix B: How to run existing CppUnit test cases under
C++test to get extra analysis and reporting functionality
C++test can automatically execute manually-written CppUnit tests, as long as they conform to the
supported subset documented in the C++test User Guide. Essentially, to execute these tests, you
need to set up a Test Configuration that locates CppUnit test directories, then run that Test
Configuration.
Preparation
The team architect prepares a specific Test Configuration to run existing CppUnit tests as
follows:
1. Create a new Test Configuration for executing manually-written test cases. This should
be a custom Test Configuration based on the "Built-in> Run Unit Tests" Test
Configuration. In the Execution> General tab, modify the Test suite file search patterns
as needed so that C++test will locate and test the CppUnit tests. The directory with these
tests must be either a subdirectory of the main project, or logically linked via a linked
folder (Eclipse version only).
2. Add the team Test Configuration to TCM.
3. If you have not already done so, configure your code projects to be used with C++test
(for details, see the C++test User’s Guide). If you are using the Eclipse-based version,
the projects will need to be set up in the GUI or imported from a workspace on the test
machine. If you are using the Visual Studio .NET plugin, no extra set up is necessary
(cpptestcli for the Visual Studio plugin will take the VS .sln solution file as a parameter).
4. Create a specific regression.localsettings file to include the following options:
# TCM settings:
tcm.server.enabled=true
tcm.server.name=tcm_server.company.com
report.tag=AEP C++test User Regression
# Mail settings:
report.mail.enabled=true
report.mail.cc=manager_user_name1;manager_user_name2
report.mail.server=mail.company.com
report.mail.domain=company.com
report.mail.subject=Cpptest - User Regression Results
# GRS reporting settings
grs.enabled=true
grs.server=grs_server.company.com
grs.log_as_nightly=true
# Scope settings
scope.sourcecontrol=true
scope.author=true
scope.local=true
When the test is run, C++test will access and run the manually-written test cases and assign any
identified regression test failures to specific developers. It will also do the following:
• Send a summary report to the manager and a report to each developer (with just his or her regression errors).
• Upload the errors to the TCM server.
Usage
Every team developer performs the following steps each morning:
1. Review the emailed report to determine whether his or her code changes resulted in any regression failures.
• If no problems were reported, no additional steps are needed.
2. Update the project from source control to incorporate any recent changes. Perform Refresh on the projects loaded in the workspace (Eclipse version).
3. Import “My Recommended Tasks” into the C++test GUI using C++test> Import> My
Recommended Tasks or the corresponding toolbar button. This will import test case
failures related to code changes made by the specific developer.
4. Respond to the results as follows:
• If the test is correct and the failure is legitimate, fix the code that was broken.
• If the functionality was modified intentionally, but the tests have not been updated, verify the new behavior by applying the Validate right-click command to the appropriate regression tasks.
• If the failed assertion is checking something that you don't want to check or something that changes over time (e.g., the day of the month), ignore the assertion (use the “Ignore assertion” pop-up menu option).
• If the test case doesn't check something you want to assert, or if the test case is no longer valid, delete or disable the test case. For instance, this might be appropriate if someone adds validation checks to some methods, and the inputs that were originally generated are no longer valid. To accomplish this, select the test case in the editor by double-clicking the test case function name, or make one or more selections in the Outline view, right-click, and select C++test> Disable... or Remove Test Case(s).
5. Ensure that the modified code and test cases are checked in to source control.
Appendix C: How to run existing NUnit test cases under .TEST
to get extra analysis and reporting functionality
Any existing NUnit tests can be run under .TEST in the same way that automatically generated tests are run. You also get the benefit of being able to generate reports, use GRS for tracking the tests, get coverage reports, and so on.
While most of the details for running manually-written tests are the same as for automatically generated tests, the following points are worth noting:
• To get a proper coverage report, specify both the unit test project and the set of projects over which you want coverage computed. (For manually-written tests, .TEST has no way of knowing the basis for computing coverage unless you specify it.) For example, if your unit tests are in a project called MyUnitTests and they test the classes in the projects Biz and Bar, your command line could be:
dottestcli -solution foo.sln -publish -report reportsDir -resource Foo/MyUnitTests -resource Foo/Biz -resource Foo/Bar -localsettings mysettings.properties
• For manually-written tests that you previously wrote, you may not want to use the stubs that .TEST ships out of the box. Pre-existing NUnit tests were not written to use them, so using the stubs may skew the results. We recommend that you do the following:
1. Make a copy of the “Run Tests and Check Assertions” Test Configuration.
2. Rename the Test Configuration to be something like “Run Manual Tests.”
3. Under the Execution tab, select “Do not use stubs.”
4. Save the Test Configuration. You can now use this Test Configuration for
running manual tests.
• Even though the previous bullet recommends disabling stubs for previously-written NUnit tests, it is important to note that manually-written tests can indeed make use of stubs. In fact, that is one of the advantages of using .TEST. For any new manually-written
tests, you can create your own stubs (or use the ones that ship with .TEST) and include
them in a configuration that you create for running these tests. Please see the .TEST
User’s Guide for details on how to create and use stubs.
Contacting Parasoft
USA
101 E. Huntington Drive, 2nd Floor
Monrovia, CA 91016
Toll Free: (888) 305-0041
Tel: (626) 305-0041
Fax: (626) 305-3036
Email: info@parasoft.com
URL: www.parasoft.com
Europe
France: Tel: +33 (1) 64 89 26 00
UK: Tel: +44 (0)1923 858005
Germany: Tel: +49 89 4613323-0
Email: info-europe@parasoft.com
Asia
Tel: +886 2 6636-8090
Email: info-psa@parasoft.com
Other Locations
See http://www.parasoft.com/jsp/pr/contacts.jsp?itemId=268
© 2007 Parasoft Corporation
All rights reserved. Parasoft and all Parasoft products and services listed within are trademarks or registered trademarks
of Parasoft Corporation. All other products, services, and companies are trademarks, registered trademarks, or
servicemarks of their respective holders in the US and/or other countries.