Regression Testing (msautotest)

Author:

Frank Warmerdam

Contact:

warmerdam at pobox.com

Author:

Jeff McKenna

Contact:

jmckenna at gatewaygeomatics.com

Last Updated:

2022-08-20

msautotest is a suite of test maps, data files, expected result images, and test scripts intended to make it easy to run a set of automated regression tests against MapServer.

Getting msautotest

Skip this section if you've followed the Vagrant Usage steps.

msautotest is available from GitHub, and exists inside the main MapServer repository, in /MapServer/msautotest/. It can be fetched with something like:

git clone https://github.com/MapServer/MapServer.git

You will notice that an msautotest subdirectory (inside MapServer) exists after cloning.

Running msautotest

Note

It is possible that some features tested through msautotest have not been released as part of MapServer yet, so be sure to test against the current main branch from MapServer's git repository.

If you're using Vagrant as described in Vagrant Usage everything is installed and configured as needed and you may skip the next three paragraphs.

msautotest can be run with either Python 2 or Python 3 (but does not require Python MapScript), so if you don't have Python on your system, get and install it. More information on Python is available at https://www.python.org. Most Linux systems have some version already installed.
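
A quick way to confirm which interpreter will be used (the exact command name may be python, python3 or py, depending on your platform):

% python3 --version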

msautotest also requires that the executables built with MapServer, notably map2img, legend, mapserv and scalebar, are available in the path. This can be accomplished by adding the MapServer build directory to your PATH.

csh:

% setenv PATH $HOME/mapserver:$PATH

bash/sh:

% PATH=$HOME/mapserver:$PATH

Verify that the executables can be found by typing 'map2img -v' in the msautotest directory:

% map2img -v
MapServer version 7.6.0-dev OUTPUT=PNG OUTPUT=JPEG OUTPUT=KML
    SUPPORTS=PROJ SUPPORTS=AGG SUPPORTS=FREETYPE SUPPORTS=CAIRO SUPPORTS=SVG_SYMBOLS
    SUPPORTS=SVGCAIRO SUPPORTS=ICONV SUPPORTS=FRIBIDI SUPPORTS=WMS_SERVER SUPPORTS=WMS_CLIENT
    SUPPORTS=WFS_SERVER SUPPORTS=WFS_CLIENT SUPPORTS=WCS_SERVER SUPPORTS=SOS_SERVER
    SUPPORTS=FASTCGI SUPPORTS=THREADS SUPPORTS=GEOS SUPPORTS=POINT_Z_M SUPPORTS=PBF
    INPUT=JPEG INPUT=POSTGIS INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE

Starting with MapServer 7.6, you also need to install the pytest framework to run the tests. This can be done, from the msautotest subdirectory, with:

pip install -r requirements.txt

Note

pytest is already installed in Vagrant.

Now you are ready to run the tests. The tests are subdivided into categories, each as its own subdirectory, such as:

gdal
misc
mspython
php
query
renderers
sld
wxs

You can run all tests by simply running pytest from the msautotest root directory. All standard pytest options can be used to filter the tests to run, or to adjust the verbosity of the output.

The following custom options are also available:

--strict_mode         Strict mode
--valgrind            Run under valgrind
--run_under_asan      Run under ASAN
--renderer=RENDERER
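
For example, using only standard pytest options, you could run the wxs tests with extra verbosity, or restrict the run to tests whose names contain a given substring (the filter expression here is just an illustration):

% pytest -vv wxs
% pytest -k "wcs" wxs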

To run the "gdal" tests, cd into the gdal directory and run the run_test.py script.

Note

Inside each folder ("gdal", "misc", etc.) you will first have to run the following, to download appropriate schemas:

python ../pymod/xmlvalidate.py -download_ogc_schemas

Unix:

./run_test.py

Note

msautotest requires Python's lxml library for some of its functionality. Be sure to install it, such as: apt-get install python-lxml

Windows:

python.exe run_test.py

The results in the gdal directory might look something like this:

% run_test.py -vv

 Cannot validate XML because SCHEMAS_OPENGIS_NET not found. Run "python ../pymod/xmlvalidate.py -download_ogc_schemas" from msautotest/wxs
 Test session starts (platform: linux2, Python 2.7.12, pytest 4.6.6, pytest-sugar 0.9.2)
 cachedir: .pytest_cache
 rootdir: /home/even/mapserver/mapserver/msautotest, inifile: pytest.ini
 plugins: env-0.6.2, random-order-1.0.4, sugar-0.9.2
 collecting ...
  gdal/run_test.py::test[connectionoptions_connectionoptions_png] ✓   1%
  gdal/run_test.py::test[wld_upsidedown_wld_upsidedown_png] ✓         2%
  gdal/run_test.py::test[classtest1_classtest1_png] ✓                 3%
...
  gdal/run_test.py::test[rgb_overlay_plug_rgb_overlay_plug_png] ✓     100%

 Results (23.32s):
       90 passed

In general you are hoping to see that no tests failed.

Tests of a single .map file can be run by specifying the .map filename as an argument to the run_test.py script.
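
For example, from within the gdal directory (using a mapfile referenced elsewhere in this document):

% ./run_test.py 256_overlay_res.map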

Checking Failures

Because most msautotest tests compare generated images to expected images, the tests are very sensitive to subtle rounding differences on different systems, and to subtle rendering changes in libraries like freetype, gd, and agg. So it is quite common to see some failures.

These failures then need to be reviewed manually to see if the differences are acceptable, just reflecting rounding/rendering differences, or whether they are "real" bugs. This is normally accomplished by visually comparing files in the "result" directory with the corresponding files in the "expected" directory. It is best if this can be done in an application that allows images to be layered and toggled on and off, to visually highlight what is changing. QGIS can be used for this.

Background

The msautotest suite was initially developed by Frank Warmerdam (warmerdam at pobox.com), who can be contacted with questions about it.

The msautotest suite is organized as a series of .map files. The python scripts basically scan the directory in which they are run for files ending in .map. Each mapfile is then "run", with the output dumped into a file in the result directory. A binary comparison is then done against the corresponding file in the expected directory, and differences are reported. The general principles for the test suite are that:

  • The test data should be small so it can be easily stored and checked out without big files needing to be downloaded.

  • The test data should be completely contained within the test suite ... no dependencies on external datasets, or databases that require additional configuration. PostGIS and Oracle will require separate testing mechanisms.

  • The tests should be able to run without significant user interaction. This is distinct from the DNR test suite described in FunctionalityDemo.

  • The testing mechanism should be suitable to test many detailed functions in relative isolation.

  • The test suite is not dependent on any of the MapScript environments, though I think it would be valuable to extend the test suite with some MapScript-dependent components in the future (there is a start on this in the mspython directory).

Result Comparisons

For map2img tests, the output file generated by processing a map is put in results/<mapfilebasename>.png (regardless of whether the output format is actually PNG). So when gdal/256_overlay_res.map is processed, the output file is written to gdal/results/256_overlay_res.png. This is compared to gdal/expected/256_overlay_res.png. If they differ, the test fails, and the "wrong" result file is left behind for investigation. If they match, the result file is deleted. If there is no corresponding expected file, the result file is moved to the expected directory (and reported as an "initialized" test), ready to be committed to the repository.

For tests using RUN_PARMS, the output filename is specified in the RUN_PARMS statement, but otherwise the comparisons are done similarly.

The initial comparison of files is done as a binary file comparison. If that fails, for image files, there is an attempt to compare the image checksums using the GDAL Python bindings. If the GDAL Python bindings are not available this step is quietly skipped.
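
If the GDAL command line utilities happen to be installed, you can perform a similar checksum comparison by hand; the filenames below are only an illustration:

% gdalinfo -checksum expected/classtest1.png | grep Checksum
% gdalinfo -checksum result/classtest1.png | grep Checksum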

If you install the PerceptualDiff program (http://pdiff.sourceforge.net/) and it is in the path, then msautotest will attempt to use it as a last fallback when comparing images. If images are found to be "perceptually" the same, the test will pass with the message "result images perceptually match, though files differ." This can dramatically cut down the number of apparent failures that, on close inspection, are for all intents and purposes identical. Building PerceptualDiff is a bit of a hassle and it will miss some significant differences, so its use is of mixed value.
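
To try the same comparison manually, PerceptualDiff simply takes the two images to compare (the executable is usually named perceptualdiff; the filenames are again just an illustration):

% perceptualdiff expected/classtest1.png result/classtest1.png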

For non-image results, such as xml and html output, the special image comparisons are skipped.

REQUIRES - Handling Build Options

Because MapServer can be built with many possible extensions, such as support for OGR, GDAL, and PROJ, it is desirable to have the test suite automatically detect which tests should be run based on the configuration of MapServer. This is accomplished by capturing the version output of "map2img -v" and using the various keywords in that output to decide which tests can be run. A directory can have a file called "all_require.txt" with a "REQUIRES:" line indicating components required for all tests in the directory. If any of these requirements are not met, no tests at all will be run in this directory. For instance, gdal/all_require.txt lists:

REQUIRES: INPUT=GDAL OUTPUT=PNG

In addition, individual .map files can have additional requirements expressed as a REQUIRES: comment in the mapfile. If the requirements are not met the map will be skipped (and listed in the summary as a skipped test). For example gdal/256_overlay_res.map has the following line to indicate it requires projection support (in addition to the INPUT=GDAL and OUTPUT=PNG required by all files in the directory):

# REQUIRES: SUPPORTS=PROJ
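
To see which keywords your own build advertises (and therefore which REQUIRES lines it can satisfy), one quick way is to split the version output into one token per line, for example:

% map2img -v | tr ' ' '\n' | grep -E 'SUPPORTS|INPUT|OUTPUT'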

RUN_PARMS: Tests not using map2img

There is also a RUN_PARMS keyword that may be placed in map files to override much of the default behaviour. The default is to process map files with map2img, but other programs such as mapserv or scalebar can be requested, and various command line arguments can be altered, as well as the name of the output file. For instance, the following line in misc/tr_scalebar.map indicates that the output file should be called tr_scalebar.png, and that the command line should look like "[SCALEBAR] [MAPFILE] [RESULT]" instead of the default "[MAP2IMG] -m [MAPFILE] -o [RESULT]":

RUN_PARMS: tr_scalebar.png [SCALEBAR] [MAPFILE] [RESULT]

For testing things as they would work from an HTTP request, use RUN_PARMS with the [MAPSERV] program and a QUERY_STRING argument, with the results redirected to a file:

# RUN_PARMS: wcs_cap.xml [MAPSERV] QUERY_STRING='map=[MAPFILE]&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCapabilities' > [RESULT]
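
A request like this can also be reproduced by hand outside the test harness by passing the query string directly to mapserv on the command line (the mapfile name below is a placeholder, and the raw output will still include the HTTP Content-type header):

% mapserv QUERY_STRING='map=wcs_cap.map&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCapabilities' > wcs_cap.xml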

For web services that generate images that would normally be prefixed with the Content-type header, use [RESULT_DEMIME] to instruct the test harness to strip off any HTTP headers before doing the comparison. This is particularly valuable for image results so the files can be compared using the special image comparisons.

# Generate simple PNG.
# RUN_PARMS: wcs_simple.png [MAPSERV] QUERY_STRING='map=[MAPFILE]&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCoverage&WIDTH=120&HEIGHT=90&FORMAT=GDPNG&BBOX=0,0,400,300&COVERAGE=grey&CRS=EPSG:32611' > [RESULT_DEMIME]

Result File Pre-processing

As mentioned above, the [RESULT_DEMIME] directive can be used for image file output from web services (i.e. WMS GetMap requests).

For text, XML and HTML output, it can also be helpful to apply other pre-processing to the output file to make comparisons easier. The [RESULT_DEVERSION] directive in the RUN_PARMS statement will apply several translations to the output file, including:

  • stripping out the MapServer version string, which changes depending on build options and version.

  • manipulating the format of exponential numbers to be consistent across platforms (changes the Windows e+0nn format to e+nn).

  • stripping the last decimal place off floating point numbers to avoid unnecessary sensitivity to platform specific number handling.

  • blanking out timestamps to avoid "current time" sensitivity.

In some cases it is also helpful to strip out lines matching a particular pattern. The [STRIP:xxx] directive drops all lines containing the indicated substring. Multiple [STRIP:xxx] directives may be included in the command string if desired. For instance, the runtime substitution validation test (misc/runtime_sub.map) produces error messages containing an absolute path, which changes for each person. The following directive will drop any text lines in the result that contain the string ShapefileOpen, which in this case will be the error messages:

# RUN_PARMS: runtime_sub_test001.txt [MAPSERV] QUERY_STRING='map=[MAPFILE]&mode=map&layer=layer1&name1=bdry_counpy2' > [RESULT_DEVERSION] [STRIP:ShapefileOpen]

What If A Test Fails?

When running the test suite, it is common for some tests to fail depending on the vagaries of floating point handling on the platform, or because of harmless changes in MapServer. To identify these, compare the files in the result directory with the corresponding files in the expected directory and determine what the differences are. If there is just a slight shift in text or other features, it is likely due to floating point differences between platforms. These can be ignored. If something has gone seriously wrong, then track down the problem!

It is also prudent to avoid using output image formats that are platform specific. For instance, TIFF output will be written big endian on big endian systems and will therefore differ at the binary level from what was expected. PNG should be pretty safe.

TODO

  • Add lots of tests for more functionality! Very little vector testing has been done yet.

  • Add something to run tests on the server and report on changes.

Adding New Tests

  • Pick an appropriate directory to put the test in. Feel free to start a new one for new families of testing functionality.

  • Create a minimal map file to test a particular issue. I would discourage starting from a "real" mapfile and cutting it down, as it is hard to reduce it to the minimum.

  • Give the new mapfile a name that hints at what it is testing without making the name too long. For instance "ogr_join.map" tests OGR joins. "rgb_overlay_res_to8bit.map" tests RGB overlay layers with resampling and converting to 8bit output.

  • Put any required MapServer build options in a # REQUIRES: comment in the header, as described in the REQUIRES section above.

  • Write some comments at the top of the .map file on what this test is intended to check.

  • Add any required datasets within the data directory beneath the test directory. These test datasets should be as small as possible! Reuse existing datasets if at all possible.

  • Run the "run_test.py" script.

  • Verify that the newly created expected/<testname>.png file contains the results you expect. If not, revise the map and rerun the test, now checking the results/<testname>.png file. Move the results/<testname>.png file into the expected directory when you are happy with it.

  • Add the .map file and the expected/<testname>.png file to the repository when you are happy with them. For example:

% git add mynewtest.map expected/mynewtest.png
% git commit -m "My new test" mynewtest.map expected/mynewtest.png

You're done!