Automated benchmarking in content-based image retrieval

BibTeX entry:

@techreport { VG:MMM2001,
    author = { Henning M{\"u}ller and Wolfgang M{\"u}ller and St{\'e}phane Marchand-Maillet and David McG. Squire and Thierry Pun },
    title = { Automated benchmarking in content-based image retrieval },
    institution = { University of Geneva },
    year = { 2001 },
    number = { 01.01 },
    month = { May },
    abstract = { Benchmarking has always been a crucial problem in content-based image retrieval (CBIR). A key issue is the lack of a common access method to retrieval systems, such as SQL for relational databases. The Multimedia Retrieval Mark-up Language (MRML) solves this problem by standardizing access to CBIR systems (CBIRSs). Other difficult problems are also briefly addressed, such as obtaining relevance judgments and choosing a database for performance comparison. In this article we present a fully automated benchmark for CBIRSs based on MRML, which can be adapted to any image database and almost any kind of relevance judgment. The test evaluates the performance of positive and negative relevance feedback, which can be generated automatically from the relevance judgments. To illustrate our approach, a freely available, non-copyright image collection is used to evaluate our CBIRS, \emph{Viper}. All scripts described here are also freely available for download. },
    vgclass = { report },
    vgproject = { viper }
}
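The evaluation loop sketched in the abstract — use ground-truth relevance judgments both to score a ranking and to generate the positive/negative feedback a knowledgeable user would give — can be illustrated as follows. This is a minimal sketch; the function names are hypothetical and do not correspond to the actual Viper/MRML scripts.

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k retrieved images that are judged relevant."""
    top = ranked_ids[:k]
    return sum(1 for img in top if img in relevant_ids) / k

def feedback_from_judgments(ranked_ids, relevant_ids, k):
    """Split the top-k results into positive and negative feedback sets,
    as an automated benchmark with access to the judgments can do."""
    top = ranked_ids[:k]
    positive = [img for img in top if img in relevant_ids]
    negative = [img for img in top if img not in relevant_ids]
    return positive, negative

# Tiny worked example: five retrieved images, three known-relevant ones.
ranked = ["img3", "img7", "img1", "img9", "img2"]
relevant = {"img3", "img1", "img5"}
print(precision_at_k(ranked, relevant, 4))          # 0.5
pos, neg = feedback_from_judgments(ranked, relevant, 4)
print(pos, neg)                                     # ['img3', 'img1'] ['img7', 'img9']
```

In a full benchmark run, the positive and negative sets would be sent back to the retrieval system (over MRML) as the next query step, and precision would be measured again to quantify the gain from feedback.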