Challenges of Reliable, Realistic and Comparable Active Learning Evaluation

by Daniel Kottke, Adrian Calma, Denis Huseljic, Georg Krempl and Bernhard Sick

Active learning has the potential to save costs through the intelligent use of resources in the form of an expert's knowledge. Nevertheless, these methods are still not established in real-world applications, as they cannot be evaluated properly in a specific scenario when evaluation data is missing. In this article, we summarize different evaluation methodologies and discuss whether they are reproducible, comparable, and realistic. A pilot study comparing the results of different exhaustive evaluations suggests that many articles use too few repetitions. Furthermore, we aim to start a discussion on a gold-standard evaluation setup for active learning that ensures comparability without reimplementing algorithms.
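To illustrate why repetitions matter in active learning evaluation, the following sketch repeats a complete toy experiment (uncertainty sampling vs. random sampling with a nearest-centroid classifier on synthetic Gaussian data) over several random splits and aggregates the results. The data, classifier, and query strategies here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def repeated_al_evaluation(n_repeats=20, budget=10, seed=0):
    """Repeat a whole active-learning experiment on fresh random splits.

    A single run can easily favor either strategy by chance; only the
    aggregate over repetitions gives a reliable comparison. All names and
    data below are toy assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    results = {"uncertainty": [], "random": []}
    for _ in range(n_repeats):
        # Synthetic two-class Gaussian data, freshly drawn each repetition.
        X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
                       rng.normal(1.0, 1.0, (100, 2))])
        y = np.repeat([0, 1], 100)
        perm = rng.permutation(200)
        X_pool, y_pool = X[perm[:100]], y[perm[:100]]
        X_test, y_test = X[perm[100:]], y[perm[100:]]
        for strategy in results:
            # Seed set: one labeled example per class.
            labeled = [int(np.flatnonzero(y_pool == c)[0]) for c in (0, 1)]
            for _ in range(budget):
                centroids = np.array(
                    [X_pool[labeled][y_pool[labeled] == c].mean(axis=0)
                     for c in (0, 1)])
                dists = np.linalg.norm(
                    X_pool[:, None, :] - centroids[None], axis=2)
                unlabeled = np.setdiff1d(np.arange(100), labeled)
                if strategy == "uncertainty":
                    # Query the pool point closest to the decision boundary.
                    margin = np.abs(dists[:, 0] - dists[:, 1])
                    query = unlabeled[np.argmin(margin[unlabeled])]
                else:
                    # Baseline: query a random unlabeled point.
                    query = rng.choice(unlabeled)
                labeled.append(int(query))
            # Final classifier, evaluated on the held-out test split.
            centroids = np.array(
                [X_pool[labeled][y_pool[labeled] == c].mean(axis=0)
                 for c in (0, 1)])
            preds = np.argmin(np.linalg.norm(
                X_test[:, None, :] - centroids[None], axis=2), axis=1)
            results[strategy].append(float(np.mean(preds == y_test)))
    # Report mean and standard deviation across repetitions per strategy.
    return {k: (float(np.mean(v)), float(np.std(v)))
            for k, v in results.items()}
```

Reporting the standard deviation across repetitions, rather than a single learning curve, is exactly the kind of practice the article argues is often missing.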

Published at the Interactive Adaptive Learning Workshop 2017, co-located with ECML PKDD in Skopje, Macedonia.

Paper: [PDF]

Poster: [PDF]

Video: [Link]