Automated Benchmarking in Content-based Image Retrieval

BibTeX entry:

@inproceedings { VG:MMS2001a,
    author = { Henning M{\"u}ller and Wolfgang M{\"u}ller and David McG. Squire and St{\'e}phane Marchand-Maillet and Thierry Pun },
    title = { Automated Benchmarking in Content-based Image Retrieval },
    booktitle = { Proceedings of the 2001 IEEE International Conference on Multimedia and Expo (ICME2001) },
    year = { 2001 },
    address = { Tokyo, Japan },
    month = { August },
    url = { },
    abstract = { Benchmarking has always been a crucial problem in content-based image retrieval (CBIR). A key issue is the lack of a common access method to retrieval systems, such as SQL for relational databases. The Multimedia Retrieval Mark-up Language (MRML) solves this problem by standardizing access to CBIR systems (CBIRSs). Other difficult problems are also briefly addressed, such as obtaining relevance judgments and choosing a database for performance comparison. In this article we present a fully automated benchmark for CBIRSs based on MRML, which can be adapted to any image database and almost any kind of relevance judgment. The test evaluates the performance of positive and negative relevance feedback, which can be generated automatically from the relevance judgments. To illustrate our purpose, a freely available, non-copyright image collection is used to evaluate our CBIRS, Viper. All scripts described here are also freely available for download. },
    owner = { steph },
    timestamp = { 2008.05.04 },
    vgclass = { refpap },
    vgproject = { viper },
}

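The abstract describes a benchmark in which positive and negative relevance feedback is generated automatically from relevance judgments, and retrieval quality is then scored. A minimal sketch of that idea is given below; all function and variable names are illustrative assumptions, not the actual interface of the Viper system or the MRML scripts.

```python
# Sketch of the automated evaluation idea from the abstract: relevance
# judgments for a query are used to split the top-ranked images into
# positive and negative feedback examples, and retrieval quality is
# scored with precision at a cutoff. Names are hypothetical.

def generate_feedback(ranking, relevant, n_feedback=5):
    """Split the top-ranked images into positive and negative examples,
    using the ground-truth relevance judgments."""
    top = ranking[:n_feedback]
    positives = [img for img in top if img in relevant]
    negatives = [img for img in top if img not in relevant]
    return positives, negatives

def precision_at_k(ranking, relevant, k=10):
    """Fraction of the top-k retrieved images judged relevant."""
    top = ranking[:k]
    return sum(1 for img in top if img in relevant) / k

# Toy example: a ranked result list and the ground-truth relevant set.
ranking = ["img3", "img7", "img1", "img9", "img2", "img5"]
relevant = {"img3", "img1", "img5"}

pos, neg = generate_feedback(ranking, relevant, n_feedback=4)
print(pos, neg)                                # feedback split of the top 4
print(precision_at_k(ranking, relevant, k=6))  # precision over the top 6
```

In a full benchmark loop, the positive and negative examples would be fed back to the retrieval system as a refined query, and the precision measured again after each feedback round.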