Currently used tests and frameworks
testing of reports
Runs all possible reports using the report CLI interface, based on the example.gramps database. This test is not fully self-contained: it depends on various environment settings, such as your locale, your preferred name display format, and your report options. Last but not least, verification of the resulting reports is entirely manual.
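For illustration, a single report can be generated non-interactively from the command line like this (a hedged sketch: the tree name, report name, and output options below are example values; runtest.sh itself iterates over all available reports):

```shell
# Open the "example" family tree and run one report without the GUI.
# The report name and option values here are illustrative only.
gramps -O example -a report -p name=ancestor_report,off=pdf,of=/tmp/ancestors.pdf
```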
See the "attached issues" at bugs tagged as found by runtest.sh
other report testing
See more specialized scripts in test/, status unknown.
Import/export test for GRAMPS.
From the file header:
* Import example XML data and create a GRDB.
* Open the produced GRDB, then:
** check data for integrity
** output in all formats
* Check the resulting XML for well-formedness and validate it against the DTD and RelaxNG schema.
* Import every file produced and run summary on it.
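The well-formedness step can be sketched in a few lines of Python using only the standard library. This is a minimal illustration, not the actual script (which also validates against the DTD and RelaxNG schema), and the XML fragments below are invented:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if xml_text parses as well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# Fragments shaped loosely like Gramps XML output (illustrative only).
good = "<database><people><person id='I0000'/></people></database>"
bad = "<database><people>"  # unclosed tags

print(is_well_formed(good))  # True
print(is_well_formed(bad))   # False
```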
Currently broken, see 6951
Runs out-of-tree (not in gramps/) testing code by looking for any test/*_Test.py files and executing the test suites therein. See the current code in test/*_Test.py for examples, and the Python standard unittest docs.
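For reference, a file matching the test/*_Test.py pattern is just an ordinary unittest module. Here is a minimal sketch in that style; the class name and assertion are invented for illustration, and the suite is run programmatically the way a small runner script might do it:

```python
import io
import unittest

class Date_Test(unittest.TestCase):  # hypothetical name, in the *_Test.py style
    """Example of the kind of test case RunAllTests.py picks up."""

    def test_roundtrip(self):
        # A trivial stand-in for a real Gramps assertion.
        self.assertEqual(int("1970"), 1970)

# Load and run the suite, discarding the runner's console output.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(Date_Test)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.wasSuccessful())  # True
```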
See the "attached issues" at Bugs tagged as found by RunAllTests.py
Two tests in the GtkHandler testing code pop up the GRAMPS error dialog, but this is intentional: they test the error reporting itself. Don't be scared by the dialog; it's expected. Manual work is required to close the dialogs with the "Cancel" button. The relevant tests still pass (unless there's another bug there)...
One test currently fails, see 6940.
unit tests in the main tree
- gramps/gen/db/test/db_test.py: doesn't use the unittest module
There used to be a way to run unit tests from within the main tree (src/ before gramps40, now gramps/). These seem currently broken; they're described at Unit Test Quickstart.
There is also semi-interactive testing via __main__ in some code:
- Relationship calculator testing
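The __main__ pattern behind this semi-interactive testing looks roughly like the sketch below. The function name and toy logic are invented; the real relationship-calculator code drives actual Gramps objects:

```python
def get_relationship(generations_up, generations_down):
    """Toy stand-in for a relationship-calculator query (invented logic)."""
    if generations_up == 1 and generations_down == 0:
        return "father/mother"
    if generations_up == 0 and generations_down == 1:
        return "son/daughter"
    return "distant relative"

if __name__ == "__main__":
    # Semi-interactive check: print a few cases and eyeball the output.
    for up, down in [(1, 0), (0, 1), (3, 2)]:
        print(up, down, get_relationship(up, down))
```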
Manual test plan
See TestPlan.txt in the gramps toplevel directory. I believe this is only done at a major release (like 4.0.0).
We currently don't have a record of tests executed, the platforms and environments they were run upon, and what code they covered. The only indirect evidence is available in open bugs, when people care to fill in these details. :-(
- revive the in-tree tests
- unify running all the tests, in- or out-of-tree
- Try switching from our runners to python-based unittest discovery mechanism.
- Integrate with "python setup.py"? (need to split interactive vs non-interactive first to allow fully automated runs)
- coverage analysis
- (needs server capacity to be hosted online) continuous test status reports, coverage, automatic deployment into win/mac/linux VMs (I can dream, can't I?)
- Unit Test Quickstart (obsolete? need to revive the in-tree unit tests, now that runtest.sh and RunAllTests.py run again!)
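The proposed switch to Python's built-in discovery (mentioned in the wish list above) can be sketched as follows. This demo builds a throwaway directory mimicking the out-of-tree test/*_Test.py layout; the file and test names are invented for illustration:

```python
import os
import tempfile
import textwrap
import unittest

# Create a temporary directory containing one file in the *_Test.py style.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "Sample_Test.py"), "w") as f:
    f.write(textwrap.dedent("""
        import unittest

        class Sample_Test(unittest.TestCase):
            def test_truth(self):
                self.assertTrue(True)
    """))

# unittest's standard discovery mechanism finds it with no custom runner.
suite = unittest.defaultTestLoader.discover(tmpdir, pattern="*_Test.py")
print(suite.countTestCases())  # 1
```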