org.dsource.descent.unittests.flute

Summary:
Flute is an interactive tool for executing D unit tests. It is based on Thomas Kuehne's UnittestWalker. Like UnittestWalker, it requires Flectioned to work correctly, and right now is only compatible with D version 1.x. It extends UnittestWalker by providing an interactive (console or network-based) way to find and execute unit tests.

However, while flute can be used interactively, it is mainly designed to be driven by automated testing tools, such as descent.unittest or build tools, and hopefully integrated into a system like CruiseControl. It also allows tests to be executed one at a time or in selected groups, which makes test automation more versatile (especially if running all the tests takes a long time). To be used effectively, flute should be paired with a code-analysis or similar tool that identifies and names the unittests, which are then executed by the fluted application. If all you want is to run every unittest in the project, UnittestWalker is a better bet. If using the command-line version directly, I would suggest redirecting stdin from a file to create testing suites.

A good example of a program that interfaces with Flute via IPC is descent.unittest; the relevant files can be found in its source.

Usage:
Flute must be statically linked against an application, just like UnittestWalker. To do so, place it at the end of the build command, e.g. "dmd -unittest <your source code> flectioned.d flute.d" or "gdmd -fall-sources -unittest <your source code> flectioned.d flute.d". To run, simply start the generated executable. Note that your actual application cannot be started if you use flute.
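
For a hypothetical project consisting of main.d and util.d, the whole cycle might look like the following (the file names are made up, and dmd names the executable after the first source file, so the details will vary):

 dmd -unittest main.d util.d flectioned.d flute.d
 ./main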

Definitions:
  • A "line" is any number of ASCII characters followed by a system-specific line terminator. The characters may include CR, LF or a CRLF pair, but only if the host program itself uses them, for example in the text of an assert() statement.


Interface:
The interface is well-defined. That is, while it is designed to be human-readable, it is fully specified and can hopefully be processed by automated testing tools. The interface may change between versions.

There are two interfaces to Flute: a console I/O-based one and a socket-based one. Since there is no way to pass parameters to Flute upon execution, the interface to use is specified at compile time, using the version switch FluteCommandLine. If FluteCommandLine is NOT active, Flute will instead bind to a local socket for IPC. It opens on port 30587 (someday I'll make a config file or something for that, but right now hardcoding it seems like a good option). When connected via the socket, the exact same interface is presented as in the console I/O version. Thus, the following documentation applies equally well to both the command-line and network versions.
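
As an illustration of the socket interface, a minimal D client might look like the sketch below. This is only a sketch: it assumes Phobos's std.socket, that the fluted application is listening on the local machine, and that a plain "\n" is accepted as the line terminator (on Windows "\r\n" may be needed).

 import std.socket;
 import std.stdio;

 void main()
 {
     // Connect to the fluted application on the hardcoded port 30587.
     // "127.0.0.1" assumes the runner is listening on the local machine.
     Socket sock = new TcpSocket(new InternetAddress("127.0.0.1", 30587));

     // Ask for the list of tests, then tell the runner to exit.
     sock.send("l\n");
     sock.send("x\n");

     // Echo everything the runner sends back (version line, test list)
     // until it closes the connection.
     char[1024] buf;
     int got;
     while ((got = sock.receive(buf)) > 0)
         writef("%s", buf[0 .. got]);

     sock.close();
 }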

When the program is executed, one or more lines containing version information will be displayed. For this version, the version line will be "flute 0.1". Warnings may be displayed after this for tests with multiple names unless version(Flute_NoWarnings) has been specified. The program will then enter a loop in which it awaits input, processes the given command, and awaits further input. (An example session is shown after the command list.) The commands are:
  • r <test signature> - (An r, followed by a space, followed by the signature or name of a test). Will run the specified test and print the results to stdout. See "Test Signature Specification" for what test signatures look like, "Test Result Specification" for what the results look like, and "Test Names" for how test names are handled.
  • l - (An l alone on a line). Prints a list of all the tests in the project. One test specification will appear per line. Named tests will appear as their fully qualified test name. Unnamed tests will appear with their signature. Tests are printed in the alphabetical order of their signatures, with unittests in the same scope appearing in lexical order (that is, names have nothing to do with the order).
  • a - (An a alone on a line). Will execute all the tests in the application. For each test, it will write a line containing "Running: " (without the quotes) followed by the signature of the test being run, then the results of that test. After running all the tests, a line containing "SUMMARY: " will be written, followed by the lines "PASSED: #/#", "FAILED: #/#" and "ERROR: #/#", each preceded by three spaces, where the first # in each line is replaced by the number of tests with that result and the second by the total number of executed tests. Any tests that caused an internal error will not be reported in any of the three categories, nor will they be included in the total. There will be a blank line between each test.
  • x - (An x alone on a line). Will exit the program.
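
As an illustration only, a console session might look like this (the signatures are the ones from the foo.bar example in the next section, and the PASSED result is just one possible outcome):

 flute 0.1
 l
 foo.bar.0
 foo.bar.1
 foo.bar.Baz.0
 r foo.bar.0
 PASSED
 x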


Test Signature Specification:
A test signature is a way to uniquely identify a unittest in an application. Although linker symbols do this, they are not available to a code-analysis front-end. Thus, a signature form is needed that can be generated by code analysis, easily translated to and from linker symbols, and is (generally) human-readable.

A test signature consists of the fully-qualified name of the test's location, followed by a period, followed by the 0-based index of the test within that location, counted in the lexical order in which the tests appear. For example, if you have:
 module foo.bar;

 unittest { /+ Test 1 +/ }

 class Baz {
     unittest { /+ Test 2 +/ }
 }

 unittest { /+ Test 3 +/ }
there will be three tests:
  • Test 1 is foo.bar.0
  • Test 2 is foo.bar.Baz.0
  • Test 3 is foo.bar.1


Test Names:
Instead of using signatures, names can be used to refer to tests as well. Since signatures are often long and difficult to type, this is often the preferred method. To add a name to a test, import org.dsource.descent.unittests.naming and insert a mixin of the form mixin(test_name("name")); somewhere in your test body, where "name" is the desired test name. For example:
 module bacon.eggs;
 import org.dsource.descent.unittests.naming;

 class Sausage {
     unittest {
         mixin(test_name("spam"));
         // ...
     }
 }
To refer to a named test, you may either use the signature generated for it or use the test's name. The test's name can either be fully qualified or, if unambiguous, can appear with a colon preceding it. If there's more than one test in the application with the unqualified name, an error will result. In the example above, the test can be referred to as any of:
  • bacon.eggs.Sausage.0
  • bacon.eggs.Sausage.spam
  • :spam
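
For illustration, running the named test from the console could then look like this (the PASSED result is just one possible outcome):

 r :spam
 PASSED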


Wildcards:
When specifying a test to run, you may use a "*" wildcard to indicate all the tests in a particular package (including its subpackages), module, or aggregate. For example, in the foo.bar example above, "foo.*" and "foo.bar.*" would refer to all three tests, and "foo.bar.Baz.*" would refer to only the test within the class. When running a set of tests specified by a wildcard, the output follows the same format as the a (run all tests) command.
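
For illustration, running the wildcard "foo.bar.*" against the foo.bar example might produce output like the following, assuming all three tests pass (the ordering shown follows the listing order described above, which may not match the runner exactly):

 r foo.bar.*
 Running: foo.bar.0
 PASSED

 Running: foo.bar.1
 PASSED

 Running: foo.bar.Baz.0
 PASSED

 SUMMARY:
    PASSED: 3/3
    FAILED: 0/3
    ERROR: 0/3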

Test Result Specification:
After a test is run, there are four possible results:
  • The test could succeed, in which case a line containing "PASSED" will be printed.
  • The test could fail an assertion. A line containing "FAILED" will be printed, followed by the stack trace of the exception.
  • The test could throw an exception. A line containing "ERROR" will be printed, followed by the stack trace of the exception. The main rationale for treating test failures differently from exceptions is to allow automated tools to track failures vs. error conditions. However, the tool cannot differentiate between assertions that fail in the test itself and assertions that fail in the main program body, so a "FAILED" message can mean either.
  • An internal error could occur in the test runner (for example, the test is not found). In this case, a human-readable message that does not begin with "PASSED", "FAILED" or "ERROR" will be printed on a single line. If the test is not found, the message will be "Test <test signature> not found", where <test signature> is replaced with the signature of the test. Other error messages may appear and are unspecified.
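
For illustration, asking for a signature that does not exist (foo.bar.99 is made up) would produce an internal error message like this:

 r foo.bar.99
 Test foo.bar.99 not found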


Stack Trace Specification:
The test runner has support for Flectioned's TracedException, but will work correctly even if the thrown exception is not a TracedException. The stack trace will begin with a line containing "Exception " and the name of the thrown exception. If the exception has a message, this will be followed by ": " and then the exception message.

If the exception is an assertion failure (AssertError in Phobos, AssertException in Tango), the line will instead be "Assertion failed in <filename> at line <line>" followed by ": " and a message if there is one. The rationale behind rewriting this exception is to smooth over differences between Tango and Phobos, which report their assert errors differently.

If the exception is a subclass of TracedException, this will be followed by the actual stack trace of the exception. Each line of the stack trace represents a stack frame that was executing when the exception was thrown. The stack frames will be reported in reverse order (the "unwinding" of the stack). Each line begins with "<<STE>> ", followed by the name of the executing function, followed by " (", followed by the file the function is defined in, then ":", then the line the function is defined on (sadly not the line the exception was thrown from, I'm working on that part), and then ")". For example, one line could look like "<<STE>> com.initech.dbinterface.getCustomerById (dbinterface.d:420)".
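
Putting the pieces together, the output for a failed assertion might look something like the following. The file names, line numbers and function names are made up (the getCustomerById frame is borrowed from the example above), and the trace lines assume the assertion error is a TracedException subclass:

 FAILED
 Assertion failed in dbinterface.d at line 425: customer id must be positive
 <<STE>> com.initech.dbinterface.getCustomerById (dbinterface.d:420)
 <<STE>> com.initech.reports.buildReport (reports.d:35)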

Limitations:
  • No Unicode/internationalization support (planned)
  • Untested in low memory situations
  • Only tested with D 1.x (future versions will support D2)
  • No test suites/categorization (will possibly be in a future version, but the Descent front-end should support this when it gets released.)
  • Requires Flectioned (not likely to change)


BUGS:
If a class is inside a function, unittests in that class won't work. Keep this in mind when generating signatures in code analysis tools. For example, if you have:
 module foo.bar;

 unittest { /+ Test 1 +/ }

 void baz() {
     class Quux {
         unittest { /+ Test 2 +/ }
     }
 }

 unittest { /+ Test 3 +/ }
The middle test is inaccessible via flute. This also applies to unittests in anonymous classes. I hope to fix this in a future version.

Authors:
Robert Fraser (fraserofthenight@gmail.com)

Version:
Almost 0.1

License:
Copyright (c) 2007 Robert Fraser (fraserofthenight@gmail.com)

All rights reserved. This program and the accompanying materials are made available under the terms of the Eclipse Public License v1.0 which accompanies this distribution, and is available at http://www.eclipse.org/legal/epl-v10.html
