CLEF promotes the systematic evaluation of information access systems, primarily through experimentation on shared tasks.
Eight labs are offered at CLEF 2012.
Seven labs will follow a "campaign-style" evaluation practice
for specific information access problems, in the tradition of past CLEF campaign tracks:
- CHiC (Cultural Heritage in CLEF): a benchmarking activity to investigate systematic and large-scale evaluation of cultural heritage digital libraries and information access systems
- CLEF-IP: a benchmarking activity to investigate IR techniques in the patent domain
- ImageCLEF: a benchmarking activity on the experimental evaluation of image classification and retrieval, focusing on the combination of textual and visual evidence
- INEX: a benchmarking activity on the evaluation of XML retrieval
- PAN: a benchmarking activity on uncovering plagiarism, authorship, and social software misuse
- QA4MRE: a benchmarking activity on the evaluation of Machine Reading systems through Question Answering and Reading Comprehension Tests
- RepLab: a benchmarking activity on reputation management technologies
One lab will be run as a workshop, organized as a speaking and discussion session, to explore issues of evaluation methodology, metrics, and processes in
information access and closely related fields:
- CLEFeHealth 2012: a workshop on Cross-Language Evaluation of Methods, Applications, and Resources for eHealth Document Analysis