Is one annotation enough? A data-centric image classification benchmark for noisy and ambiguous label estimation.

Schmarje, Lars, Grossmann, Vasco, Zelenka, Claudius, Dippel, Sabine, Kiko, Rainer, Oszust, Mariusz, Pastell, Matti, Stracke, Jenny, Valros, Anna, Volkmann, Nina and Koch, Reinhard (Submitted) Is one annotation enough? A data-centric image classification benchmark for noisy and ambiguous label estimation. Open Access arXiv e-prints. DOI 10.48550/arXiv.2207.06214.

Full text: 2207.06214.pdf (Submitted Version, 19 MB), available under a Creative Commons Attribution 4.0 license.

Abstract

High-quality data is necessary for modern machine learning. However, acquiring such data is difficult because human annotations are noisy and ambiguous, and aggregating these annotations to determine the label of an image lowers data quality. We propose a data-centric image classification benchmark with ten real-world datasets and multiple annotations per image that allows researchers to investigate and quantify the impact of such data quality issues. With this benchmark, we study the impact of annotation costs and (semi-)supervised methods on data quality for image classification by applying a novel methodology to a range of different algorithms and diverse datasets. Our benchmark uses a two-phase approach: a data label improvement method in the first phase and a fixed evaluation model in the second phase. Thereby, we measure the relation between the input labeling effort and the performance of (semi-)supervised algorithms, enabling deeper insight into how labels should be created for effective model training. Across thousands of experiments, we show that one annotation is not enough and that including multiple annotations allows for a better approximation of the real underlying class distribution. We identify that hard labels cannot capture the ambiguity of the data, which might lead to the common issue of overconfident models. Based on the presented datasets, benchmarked methods, and analysis, we create multiple research opportunities for the future, directed at improving label noise estimation approaches, data annotation schemes, realistic (semi-)supervised learning, and more reliable image collection.
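The abstract's central claim, that soft labels aggregated from multiple annotations approximate the underlying class distribution better than a single hard label, can be illustrated with a minimal Python sketch. This is not the authors' implementation; the function name, the input format, and the example votes are hypothetical.

import numpy as np

def aggregate_annotations(annotations, num_classes):
    """Turn multiple per-image annotations into a hard and a soft label.

    annotations: list of integer class indices assigned by different
                 annotators to the same image (hypothetical input format).
    Returns (hard_label, soft_label), where the soft label is the
    empirical class distribution over the annotators.
    """
    counts = np.bincount(annotations, minlength=num_classes)
    soft_label = counts / counts.sum()   # approximates the underlying class distribution
    hard_label = int(np.argmax(counts))  # majority vote collapses the ambiguity
    return hard_label, soft_label

# Example: 10 annotators disagree on an ambiguous image with 3 classes.
votes = [0, 0, 0, 0, 0, 0, 1, 1, 1, 2]
hard, soft = aggregate_annotations(votes, num_classes=3)
print(hard)  # 0 -- the hard label hides that 40% of annotators disagreed
print(soft)  # [0.6 0.3 0.1] -- the soft label preserves the ambiguity

Training on the soft label penalizes a model for being overconfident on genuinely ambiguous images, whereas the majority-vote hard label rewards exactly that overconfidence.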

Document Type: Article
Additional Information: Accepted at NeurIPS 2022
Keywords: Computer Vision and Pattern Recognition (cs.CV)
Refereed: No
Open Access Journal?: Yes
Publisher: Cornell University
Date Deposited: 17 Jan 2023 08:05
Last Modified: 17 Jan 2023 08:05
URI: https://oceanrep.geomar.de/id/eprint/57775
