Config file, autogenerate a main test module:

    [dunit]
    directory = path/to/some/tests
    directory = path/to/more/tests
    outfile = path/to/outfile.d
    buildcommand = dsss build $OUTFILE

Issue command "dunit-build":

    module path.to.outfile;

    import dunit.api;
    import path.to.some.tests.m1;
    import path.to.some.tests.m2;
    import path.to.more.tests.m1;
    import path.to.more.tests.m2;

    mixin (DunitMain);

Use DDL/xf Linker as a source for runtime reflection: an external test app
that can instantiate your test fixtures and run each public member in turn.
This requires a functioning DDL. It also requires a way to distinguish tests
from everything else.

Allow constraints with associative arrays. This will require mucking about
with aaA.d.

Create a testing framework for my integration tests: run a process, get the
output, and compare it to the expected output (a sketch follows at the end of
these notes).

Exception stack traces? Not all systems will have them, and the offsets in
the resulting binary differ between builds; reformatting the code would
change them.

Solution: Keep tests small and only check whether the output contains the
expected output. Maybe allow specifying a regular expression (and run it on
the entire output rather than on each line).

Parameterized tests
-------------------
- Better syntax.
- When a test fails on a certain set of values, permute the values slightly
  to find the boundary condition.
  - Use some sort of binary search for continuous values (integers, reals).
  - Check typical boundary conditions:
    - null
    - integer overflow
    - negative
    - zero
- Allow the user to reject a particular row (sketched at the end of these
  notes).
  - Keep track of the rejection rate.
  - If it's above a threshold, warn ("Less than 10% of inputs are accepted;
    consider restricting inputs.")

The current IMultiTest design doesn't work very well with this. We want to
notify IMultiTest when a test fails and why it fails.
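
As a rough illustration of the row-rejection idea, here is a minimal D
sketch. It is not the current dunit/IMultiTest API: RowRejected, ParamRunner,
and the exact warning threshold are hypothetical names chosen only to show
the shape of the feature.

    // Hypothetical sketch: reject individual rows and track the rejection rate.
    import std.stdio;

    /// Thrown by a test body to opt out of the current row of values.
    class RowRejected : Exception
    {
        this() { super("row rejected"); }
    }

    /// Runs one test delegate per row and counts how many rows were rejected.
    struct ParamRunner(Row)
    {
        void run(Row[] rows, void delegate(Row) test)
        {
            size_t rejected = 0;
            foreach (row; rows)
            {
                try
                    test(row);
                catch (RowRejected)
                    ++rejected;
            }
            // Warn when almost every row is rejected, as suggested above.
            if (rows.length > 0 && (rows.length - rejected) * 10 < rows.length)
                writefln("Less than 10%% of inputs are accepted; consider restricting inputs.");
        }
    }

    void main()
    {
        ParamRunner!int runner;
        runner.run([0, -1, 42, int.max], delegate(int x) {
            if (x < 0)
                throw new RowRejected;   // the test rejects this row
            assert(x * 1 == x);          // the actual check
        });
    }

A real version would report the rejection rate back through IMultiTest
instead of printing, which is exactly where the current design gets in the
way.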
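For the integration-test idea above (run a process, capture its output, and
check the whole output against expected text or a regular expression), here
is a minimal sketch assuming a present-day D2 toolchain with std.process and
std.regex; runAndCapture and the echo command are illustrative only.

    // Hypothetical sketch: capture a process's output and check it as a whole.
    import std.algorithm.searching : canFind;
    import std.process : execute;
    import std.regex : matchFirst, regex;
    import std.stdio : writeln;

    /// Runs `command`, asserts that it exits cleanly, and returns its captured output.
    string runAndCapture(string[] command)
    {
        auto result = execute(command);
        assert(result.status == 0, "process failed: " ~ result.output);
        return result.output;
    }

    void main()
    {
        // Only check that the output contains the expected text...
        auto output = runAndCapture(["echo", "hello integration tests"]);
        assert(output.canFind("integration"));

        // ...or that a regular expression matches against the entire output,
        // not line by line.
        assert(!matchFirst(output, regex(`hello\s+\w+`)).empty);
        writeln("integration check passed");
    }

Matching on fragments or a regex, rather than on the exact output, keeps the
tests insensitive to stack-trace offsets and other per-build differences.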