Performance Evaluation in Content-Based Image Retrieval

BibTeX entry:

@inproceedings { VG:Mar2001,
    author = { St{\'e}phane Marchand-Maillet },
    title = { Performance Evaluation in Content-Based Image Retrieval },
    booktitle = { Multimedia Content-Based Indexing and Retrieval ({MMCBIR} 2001) },
    year = { 2001 },
    address = { INRIA Rocquencourt, Paris, France },
    month = { September },
    note = { Invited contribution },
    abstract = { Content-based image retrieval (CBIR) has now reached a mature stage. Search techniques are well categorized and several research prototypes and commercial products are available. However, the true performance of CBIR is still difficult to quantify. Setting up a CBIR benchmark is a heavy task and can only be done through the collaboration of all parties involved in the research and development of CBIR prototypes and related commercial products. The Benchathlon effort proposes to create such a context in which CBIR will be evaluated thoroughly and objectively. In this paper, we present the Benchathlon and its objectives in more detail. The goal of CBIR benchmarking has been divided into various parallel and inter-related sub-tasks. One essential task is the definition of ground-truth data. Since no such data exists, the image collection is to be constructed from scratch. Copyright issues should be resolved so that this collection can be freely distributed, extended and modified. Further, different sub-collections should be available for different specialized applications. It is also acknowledged here that no unique ground truth exists. Techniques to account for user subjectivity should therefore be developed. Considering the effort involved, tools for easing the task of data annotation also need to be designed. Related to this is the definition of objective quantitative performance measures. These measures should be both thorough and orthogonal. In other words, they should allow for a complete evaluation and highlight the weaknesses and strengths of the CBIR system under evaluation, the goal being both to compare systems and to help system developers profile their techniques. To use this data in practical evaluation, there is also a need to define standard test queries and result sets. Domain-specific constraints will strongly influence the design of such test cases. Another aspect is the feasibility of CBIR benchmarking. This imposes the definition of a flexible software architecture enabling automated benchmarking while incurring little (optimally no) programming overhead. Again, legal issues about the openness of the systems under evaluation should be accounted for. In our paper, we also briefly present the solutions proposed by the Viper team at the University of Geneva. These realizations are gathered under the umbrella of our GIFT project, whose central feature is the Multimedia Retrieval Markup Language (MRML), an XML-based communication protocol that we believe is a necessary tool for enabling CBIR benchmarking. We describe the architecture of our MRML-based benchmark and sketch results for the Viper search engine. },
}
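
The abstract calls for objective, quantitative performance measures that are both thorough and orthogonal. As a rough illustration only (the choice of metrics and all identifiers below are ours, not taken from the paper or from the Benchathlon specification), standard ranked-retrieval measures such as precision and recall at a cut-off could be computed along these lines and then averaged over a set of standard test queries:

    def precision_recall_at_k(ranked_results, relevant, k):
        """Precision and recall of the top-k results against a ground-truth set."""
        top_k = ranked_results[:k]
        hits = sum(1 for image_id in top_k if image_id in relevant)
        precision = hits / k if k else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    # Hypothetical example: a ranked answer list returned by a CBIR system
    # and one (necessarily subjective) ground-truth set for the query image.
    ranked = ["img042", "img007", "img113", "img256", "img001"]
    ground_truth = {"img007", "img113", "img999"}

    for k in (1, 3, 5):
        p, r = precision_recall_at_k(ranked, ground_truth, k)
        print(f"P@{k} = {p:.2f}, R@{k} = {r:.2f}")

Because the abstract stresses that no unique ground truth exists, such scores would in practice be reported per annotator or per sub-collection rather than as a single number.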
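
A second ingredient mentioned in the abstract is MRML, the XML-based communication protocol at the heart of the GIFT project. The sketch below only conveys the general idea of driving a search engine through XML messages; the element and attribute names are invented for illustration and do not follow the actual MRML schema:

    import xml.etree.ElementTree as ET

    # Illustrative only: the tag and attribute names are hypothetical and do
    # not reproduce the real MRML DTD; they merely show how a query-by-example
    # step could be expressed in an XML-based retrieval protocol.
    def build_query_message(session_id, example_image_url, result_size):
        root = ET.Element("retrieval-protocol")          # hypothetical root
        query = ET.SubElement(root, "query-by-example",
                              {"session-id": session_id,
                               "result-size": str(result_size)})
        ET.SubElement(query, "example-image", {"url": example_image_url})
        return ET.tostring(root, encoding="unicode")

    print(build_query_message("benchmark-session-01",
                              "http://example.org/images/query.jpg", 10))

A benchmarking harness can send such messages to any engine that speaks the protocol and parse the ranked answer lists it gets back, which is what makes automated, engine-independent evaluation feasible with little programming overhead.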
--
