The JTS Geometry module is an implementation of ISO 19107 as specified by the GeoAPI interfaces. Because implementations of the GeoAPI interfaces should obey the Liskov Substitution Principle, we can usefully distinguish between two categories of tests. Isolating the tests as described here has the advantage that each test need be developed only once, rather than once per implementation.
Main testing categories:
- testing correct geometric functionality which all GeoAPI 19107 implementations must provide; and
- testing correct functionality specific to this particular implementation.
The benefit, of course, is that each implementation can test its code against a pre-existing suite of tests, and can feed corrections back should a test itself be found to be inaccurate. This arrangement steadily improves the quality of the tests (and hence the quality of the code) in this highly complex subject area.
Examples of testing common functionality:
- Testing the results of computational geometry methods.
- Testing that classes can be instantiated with the appropriate factory.
- Testing various predefined "conformance levels" (e.g. a predefined set of classes is instantiable and provides one of the following levels of service: 1) carries data values only; 2) level 1 plus simple operations; or 3) level 2 plus complete operations).
- Testing that GeoAPI implementations appropriately handle changes to their defining Sets or Lists.
- Testing for correct behavior regarding mutable vs. immutable types.
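As an illustration of the last two bullets, a common-functionality test might verify that a geometry defensively copies the list it is constructed from, so that later changes to the caller's list cannot corrupt the geometry. Everything below is a hypothetical sketch: `LineStringStub` and its methods stand in for whatever GeoAPI geometry and factory types the real shared test would exercise.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a GeoAPI geometry built from a list of positions.
final class LineStringStub {
    private final List<double[]> positions;

    LineStringStub(List<double[]> positions) {
        // Defensive copy: later changes to the caller's list must not leak in.
        this.positions = new ArrayList<>(positions);
    }

    int getNumPoints() {
        return positions.size();
    }
}

public class MutabilityTest {
    public static void main(String[] args) {
        List<double[]> coords = new ArrayList<>();
        coords.add(new double[] {0, 0});
        coords.add(new double[] {1, 1});

        LineStringStub line = new LineStringStub(coords);
        coords.add(new double[] {2, 2}); // mutate the defining list afterwards

        // The geometry must be unaffected by changes to its defining list.
        if (line.getNumPoints() != 2) {
            throw new AssertionError("geometry leaked its defining list");
        }
        System.out.println("defensive-copy test passed");
    }
}
```

Because the test body touches only interface-level behavior (construction and point count), the same logic would apply unchanged to every implementation.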
Examples of testing implementation specific functionality:
- Testing mechanism for clearing cached objects.
- Testing perceived weaknesses particular to the implementation techniques used.
The general plan of attack is to write Abstract Test Cases (synonymously, Abstract Tests): a testing pattern which provides for the testing of multiple implementations of an interface.
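A minimal sketch of the pattern follows. All names here are illustrative, not GeoAPI's actual API: the abstract class owns the test logic, written purely against an interface, and each implementation binds the shared tests to its own factory by subclassing.

```java
// Hypothetical factory interface standing in for the real GeoAPI factories.
interface PointFactory {
    double[] createPosition(double x, double y);
}

// Abstract Test Case: test logic written once, against the interface only.
abstract class AbstractPositionTest {
    /** Each implementation under test supplies its own factory. */
    protected abstract PointFactory getFactory();

    public void testCreatePosition() {
        double[] p = getFactory().createPosition(3.0, 4.0);
        if (p[0] != 3.0 || p[1] != 4.0) {
            throw new AssertionError("factory did not preserve ordinates");
        }
    }
}

// A concrete subclass binds the shared tests to one implementation.
class ArrayBackedPositionTest extends AbstractPositionTest {
    @Override
    protected PointFactory getFactory() {
        return (x, y) -> new double[] {x, y};
    }
}

public class AbstractTestDemo {
    public static void main(String[] args) {
        new ArrayBackedPositionTest().testCreatePosition();
        System.out.println("shared test passed against one implementation");
    }
}
```

In a JUnit-based version, `AbstractPositionTest` would extend `TestCase` and the subclass would be the only class JUnit instantiates; the abstract parent is never run directly.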
A test plan such as this raises some questions which must be resolved. The first is: where should the common tests reside? Should they be part of the GeoTools project or part of GeoAPI? In either case, the tests should be completely isolated from GeoTools-specific artifacts.
Once the location has been determined, how should such tests be packaged? With the Maven build system used by both GeoTools and GeoAPI, it is common to include a test directory populated with a barrage of JUnit tests. There are some issues with this arrangement which require resolution prior to developing a reusable test suite:
- Tests are generally not packaged for distribution, but in this case they must be. Does this imply that a testing module would consist only of abstract classes implementing tests, and that these "tests" would reside in the "src" directory in order to be packaged?
- Tests are generally instantiable classes so that they can be compiled and run. In this situation, the tests are incomplete: they must be provided with the geometry factory relevant to the implementation they are required to test. Should the tests all be abstract, requiring the user to subclass each one to provide the required factories (as described on the Abstract Test page)? Or, since the required information is nearly identical for all tests, should we produce our own testing pattern?
- Lastly, and this is more of a design issue, clients should be capable of selecting predefined test suites (say to verify compliance with a particular conformance level) or individual tests.
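One way to address the last point is to treat a "suite" as nothing more than a named collection of tests, so predefined suites can mirror the conformance levels while clients remain free to run any individual test. The sketch below is hypothetical; a real design would likely compose JUnit suites instead of this hand-rolled runner.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: clients select predefined suites or individual tests.
public class SuiteSelectionDemo {

    /** A test is just a named runnable in this sketch. */
    static final class NamedTest {
        final String name;
        final Runnable body;

        NamedTest(String name, Runnable body) {
            this.name = name;
            this.body = body;
        }
    }

    /** Runs every test in a suite, reporting each by name. */
    static int runSuite(String suiteName, List<NamedTest> suite) {
        int passed = 0;
        for (NamedTest t : suite) {
            t.body.run(); // a failing test would throw here
            System.out.println(suiteName + " / " + t.name + ": passed");
            passed++;
        }
        return passed;
    }

    public static void main(String[] args) {
        NamedTest instantiation = new NamedTest("instantiation", () -> {});
        NamedTest simpleOps = new NamedTest("simple operations", () -> {});

        // Predefined suites mirror the conformance levels: level 2 is
        // level 1 plus the simple-operation tests.
        List<NamedTest> level1 = Arrays.asList(instantiation);
        List<NamedTest> level2 = Arrays.asList(instantiation, simpleOps);

        runSuite("level-1", level1);
        runSuite("level-2", level2);
    }
}
```

The design choice worth noting is that suites share test instances rather than duplicating them, so verifying a higher conformance level automatically re-runs the lower levels' tests.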
Source of tests
Much of the computational geometry could be tested using the JTS test suite. JTS uses two testing harnesses: JUnit for some tests, and a custom GUI-based harness for defining and running geometric tests. The custom harness facilitates the graphical design of tests and can be run in graphical or text mode, but it is not obviously compatible with an automated test framework like JUnit. (The user manual for this test suite ships with the JTS source code but is not obviously available on the JTS website; I have attached it to GEOT-831.) This approach has obvious benefits for a geometric package; the downside is that it is a manual testing process.
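For reference, the custom harness consumes XML test files in which each case names its input geometries in WKT and the expected result of an operation. The fragment below is a sketch from memory of that file format and may not match the exact element names; the attached user manual is the authoritative reference:

```xml
<run>
  <case>
    <desc>Intersection of two overlapping squares</desc>
    <a>POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))</a>
    <b>POLYGON ((5 5, 15 5, 15 15, 5 15, 5 5))</b>
    <test>
      <op name="intersection" arg1="A" arg2="B">
        POLYGON ((5 5, 10 5, 10 10, 5 10, 5 5))
      </op>
    </test>
  </case>
</run>
```

A harness of this shape is attractive for our purposes precisely because the test data is declarative: retargeting it at the GeoAPI 19107 interfaces would mean swapping the geometry construction and operation dispatch, not rewriting the tests themselves.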
I think it would be greatly advantageous to steal not only the predefined tests from the JTS test suite, but also their testing harness. Ideally, we could have a harness which operates on the GeoAPI 19107 interfaces instead of JTS classes, and which behaves in a manner similar to JUnit (its primary characteristic being that it automatically informs some supervisory system, such as Maven, whether tests are passing or failing).
The effort required to adapt the JTS testing harness to GeoAPI must be weighed against the effort required to develop and verify 100+ geometric tests without their handy graphical tool. As I am not an expert in computational geometry, the hands-down winner for me is stealing the harness. The source code for the test harness is available via anonymous CVS access (as described here) but not in the source-and-binary zipfiles of the JTS releases. CVS access also includes several JUnit tests whose content has not yet been examined. Presumably, these JUnit tests are not as "geometric" as the ones defined using the GUI.
Things which need to be examined:
- What is the coverage of the existing JUnit JTS tests, and can they be stolen?
- Is the GUI tester already adapted to the JUnit framework?
- How much effort would it be to make the custom JTS test harness work against GeoAPI's 19107 interfaces?