The repository:
The Visual Analytics Benchmarks Repository contains resources to improve the evaluation of visual analytics technology. Each benchmark consists of a dataset and a set of tasks, together with materials describing how the benchmark has been used (results of analyses, contest entries, controlled experiment materials, etc.). Most benchmarks include ground truth, described in a solution provided with the benchmark, so that accuracy metrics can be computed.
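For example, when a benchmark's solution lists the ground-truth items an analyst was expected to find, precision and recall can be computed for a submission by comparing the two lists. The Python sketch below is a minimal illustration only: the file names and the assumption that both the solution and the submission are plain lists of item identifiers, one per line, are hypothetical, since each benchmark defines its own format and metrics.

    # Minimal sketch: score a submission against a benchmark's ground-truth
    # solution. Hypothetically assumes both files list one item identifier
    # per line; real benchmarks define their own formats and metrics.
    def load_items(path):
        """Read one identifier per line, ignoring blank lines."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def accuracy_metrics(solution, submission):
        """Return precision, recall, and F1 of a submission vs. ground truth."""
        hits = len(solution & submission)
        precision = hits / len(submission) if submission else 0.0
        recall = hits / len(solution) if solution else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    ground_truth = load_items("solution.txt")   # hypothetical file name
    submitted = load_items("submission.txt")    # hypothetical file name
    p, r, f1 = accuracy_metrics(ground_truth, submitted)
    print(f"precision={p:.2f}  recall={r:.2f}  f1={f1:.2f}")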
Looking for datasets to download? See the LIST OF BENCHMARKS.
The official URL of this site is: www.cs.umd.edu/hcil/varepository
How you can contribute:
To contribute a new benchmark, or to suggest additional examples of use for an existing benchmark, please contact Catherine Plaisant.
History:
This repository replaces and extends the Information Visualization Benchmarks Repository, started in 2003 with datasets from the InfoVis Contest. Starting in 2006, the VAST Contests, and later the VAST Challenges, held at the IEEE VAST symposium, provided the first set of benchmarks with ground truth and solutions.
Full credit and provenance information is given separately for each benchmark.
A service of the SEMVAST Project
Managed by the HCIL, University of Maryland, and the IVPR, University of Massachusetts Lowell
Developed by Swetha Reddy, Patrick Stickney, and John Fallon under the supervision of Georges Grinstein and Catherine Plaisant
Support for the development of the Repository was provided from 2007 to 2011 by the National Science Foundation under a Collaborative Research Grant to the following three institutions:
IIS-0713087 Plaisant, Catherine, University of Maryland, College Park
IIS-0712770 Scholtz, Jean, Battelle Memorial Institute
IIS-0713198 Grinstein, Georges, University of Massachusetts, Lowell
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.