ComMon SensE: Cross-Modal Search Engine

STATUS: TEST PHASE. This demo may be unstable…

The CMSE is our initiative to develop a framework for cross-modal multimedia search.

The CMSE is, first, a feature extraction library. Built on the OpenCV framework, it can process images of any type and extract a wide range of features related to the following (a brief sketch of such extractors follows the list):

  • Color
  • Texture
  • Edges
  • Faces
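
The extractors themselves are not published on this page; as a rough illustration of the kind of per-image descriptors involved, a minimal Python/OpenCV sketch could look as follows. The function name extract_features, the choice of descriptors, and all parameter values are assumptions of ours, not the CMSE API.

  # Illustrative sketch only: per-image color, texture, edge and face
  # descriptors computed with OpenCV; names and parameters are assumptions.
  import cv2
  import numpy as np

  def extract_features(image_path, face_cascade_path):
      img = cv2.imread(image_path)
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

      # Color: coarse hue/saturation histogram, normalised for comparability.
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
      color = cv2.normalize(cv2.calcHist([hsv], [0, 1], None, [8, 8],
                                         [0, 180, 0, 256]), None).flatten()

      # Texture: mean/variance of the Laplacian response as a crude texture cue.
      lap = cv2.Laplacian(gray, cv2.CV_64F)
      texture = (float(lap.mean()), float(lap.var()))

      # Edges: Canny edge map, summarised by its edge-pixel density.
      edges = cv2.Canny(gray, 100, 200)
      edge_density = float(np.count_nonzero(edges)) / edges.size

      # Faces: Haar-cascade detection returning bounding boxes.
      cascade = cv2.CascadeClassifier(face_cascade_path)
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

      return {"color": color, "texture": texture,
              "edge_density": edge_density, "faces": faces}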

The CMSE also accounts for the textual modality, which it indexes with the classical bag-of-words representation.
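
For the textual modality, a bag-of-words index simply maps each document to its term-count vector. The sketch below is a generic illustration of this classical representation, not the CMSE implementation; the stopword list and tokeniser are placeholders.

  # Generic bag-of-words sketch for the textual modality (illustration only).
  from collections import Counter
  import re

  STOPWORDS = frozenset({"the", "a", "an", "of", "and", "for", "in"})

  def bag_of_words(text):
      """Lower-case, tokenise, drop stopwords and return term -> count."""
      tokens = re.findall(r"[a-z]+", text.lower())
      return Counter(t for t in tokens if t not in STOPWORDS)

  bag_of_words("Design of multimodal dissimilarity spaces for retrieval "
               "of multimedia documents")
  # -> Counter({'design': 1, 'multimodal': 1, 'dissimilarity': 1, ...})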

Second, the CMSE is an indexing engine built around the indexing and retrieval strategies we have defined (see references below).
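
The strategies themselves are detailed in the references below. Purely to illustrate the underlying idea of ranking indexed documents by fusing per-modality dissimilarities to a query, here is a toy sketch; it is a deliberate simplification of ours, not the published method.

  # Toy fusion of per-modality dissimilarities into a single ranking;
  # a simplification for illustration, not the CMSE's published strategy.
  import numpy as np

  def fused_ranking(dissims, weights):
      """dissims: modality -> dissimilarities of each indexed document to the
      query; weights: modality -> fusion weight. Returns document indices
      ordered from most to least similar."""
      total = sum(w * np.asarray(dissims[m]) for m, w in weights.items())
      return np.argsort(total)

  fused_ranking({"visual": [0.9, 0.2, 0.5, 0.7], "text": [0.8, 0.4, 0.2, 0.6]},
                {"visual": 0.5, "text": 0.5})
  # -> array([1, 2, 3, 0]): document 1 is the best match for this query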

Some references (see also our list of publications)

  • Bruno, É., Moënne-Loccoz, N., & Marchand-Maillet, S. (2008). Design of multimodal dissimilarity spaces for retrieval of multimedia documents. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(9), 1520-1533.
  • Kludas, J., Bruno, E., & Marchand-Maillet, S. (2008). Can Feature Information Interaction Help for Information Fusion in Multimedia Problems? To appear in Multimedia Tools and Applications, special issue on “Metadata Mining for Image Understanding”.
  • Bruno, E., Kludas, J., & Marchand-Maillet, S. (2007). Combining Multimodal Preferences for Multimedia Information Retrieval. In Proc. of International Workshop on Multimedia Information Retrieval, Augsburg, Germany.
