The PinView consortium combines pioneering application expertise with a solid machine learning background in content-based information retrieval. We will develop new information retrieval principles needed to replace or complement explicit search queries.
The research will produce a prototype of a proactive personal information navigator that retrieves multimodal information (still images, text, video) from the web and from various databases. During browsing and searching with a task-dependent interface, the user's goals will be inferred from explicit and implicit feedback signals and interaction (eye movements, pointer traces and clicks, speech), complemented with social filtering.
The rich multimodal responses collected from the user are processed with new, advanced machine learning methods to infer the implicit topic of the user's interest, as well as the sense in which it is interesting in the current context.
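As a rough illustration of the idea, the sketch below combines implicit feedback signals into a per-item relevance score and ranks items by it. The signal names, weights, and linear scoring rule here are hypothetical simplifications for illustration only; PinView's actual machine learning methods are far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    """Hypothetical implicit feedback signals for one retrieved item."""
    gaze_seconds: float   # total time the user's gaze rested on the item
    clicks: int           # explicit clicks on the item
    hover_seconds: float  # time the pointer hovered over the item

def implicit_relevance(fb: Feedback,
                       w_gaze: float = 0.5,
                       w_click: float = 2.0,
                       w_hover: float = 0.3) -> float:
    """Toy weighted combination of implicit feedback signals.

    The weights are arbitrary illustrative values, not learned parameters.
    """
    return (w_gaze * fb.gaze_seconds
            + w_click * fb.clicks
            + w_hover * fb.hover_seconds)

def rank_items(feedback: dict) -> list:
    """Return item ids sorted by inferred relevance, highest first."""
    return sorted(feedback,
                  key=lambda item: implicit_relevance(feedback[item]),
                  reverse=True)

# Example: three images with recorded (hypothetical) feedback.
feedback = {
    "img_017": Feedback(gaze_seconds=4.2, clicks=1, hover_seconds=2.0),
    "img_042": Feedback(gaze_seconds=0.8, clicks=0, hover_seconds=0.3),
    "img_005": Feedback(gaze_seconds=6.5, clicks=2, hover_seconds=3.1),
}
print(rank_items(feedback))  # → ['img_005', 'img_017', 'img_042']
```

A real system would replace the fixed weights with a model learned from interaction data, and would also account for the context-dependent sense of interest mentioned above.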
For more information see the project web page and the group page.
Last updated on 28 May 2008 by Antti Ajanki - Page created on 20 May 2008 by Antti Ajanki