Learning from noisy labels with deep neural networks: A survey

H Song, M Kim, D Park, Y Shin… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep learning has achieved remarkable success in numerous domains with the help of large
amounts of data. However, the quality of data labels is a concern because of the lack of …

A survey of uncertainty in deep neural networks

J Gawlikowski, CRN Tassi, M Ali, J Lee, M Humt… - Artificial Intelligence …, 2023 - Springer
Over the last decade, neural networks have reached almost every field of science and
become a crucial part of various real-world applications. Due to the increasing spread …

Part-based pseudo label refinement for unsupervised person re-identification

Y Cho, WJ Kim, S Hong… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Unsupervised person re-identification (re-ID) aims at learning discriminative representations
for person retrieval from unlabeled data. Recent techniques accomplish this task by using …
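
The pseudo-label pipeline referenced in this snippet typically clusters extracted person features and treats cluster ids as identity labels for the next training round. Below is a minimal, generic sketch of that step; the feature dimensions and DBSCAN parameters are illustrative placeholders, and this does not reproduce the paper's part-based refinement.

```python
# Generic pseudo-label generation for unsupervised re-ID: cluster L2-normalized
# person features and use cluster ids as identities for the next training round.
# Illustrative sketch only, not this paper's part-based refinement.
import numpy as np
from sklearn.cluster import DBSCAN

features = np.random.randn(500, 128)                       # stand-in for extracted features
features /= np.linalg.norm(features, axis=1, keepdims=True)

pseudo_ids = DBSCAN(eps=0.5, min_samples=4, metric="cosine").fit_predict(features)
# samples labeled -1 are outliers and are usually discarded before re-training
```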

Generating training data with language models: Towards zero-shot language understanding

Y Meng, J Huang, Y Zhang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Pretrained language models (PLMs) have demonstrated remarkable performance in various
natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their …
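
The general recipe the title points at is to prompt a unidirectional PLM with a label-conditioned prefix, collect the generations as synthetic labeled examples, and train a task classifier on them. The sketch below assumes a Hugging Face text-generation pipeline; the model name and prompts are placeholders, not the paper's actual setup.

```python
# Sketch: synthesize class-conditioned training data with a generative PLM,
# then use it in place of human-labeled data for a downstream classifier.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")      # placeholder model

label_prompts = {
    "positive": "Write a positive movie review:",
    "negative": "Write a negative movie review:",
}

synthetic_data = []
for label, prompt in label_prompts.items():
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=3, do_sample=True)
    for out in outputs:
        text = out["generated_text"][len(prompt):].strip()
        synthetic_data.append((text, label))
# `synthetic_data` can now be used to train a classifier without human annotation.
```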

Robust training under label noise by over-parameterization

S Liu, Z Zhu, Q Qu, C You - International Conference on …, 2022 - proceedings.mlr.press
Recently, over-parameterized deep networks, with increasingly more network parameters
than training samples, have dominated the performance of modern machine learning …
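
One way over-parameterization can help under label noise is to give every training example its own learnable correction term added to the model output, so that noisy labels are absorbed by these extra parameters rather than by the network weights. The sketch below is an illustrative variant of that idea, not necessarily the paper's exact formulation.

```python
# Illustrative sketch: per-sample correction parameters absorb label noise.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseCorrectedLoss(nn.Module):
    def __init__(self, num_samples, num_classes):
        super().__init__()
        # one extra learnable vector per training example
        self.correction = nn.Parameter(torch.zeros(num_samples, num_classes))

    def forward(self, logits, targets, indices):
        corrected = logits + self.correction[indices]   # per-sample noise absorption
        return F.cross_entropy(corrected, targets)

criterion = NoiseCorrectedLoss(num_samples=50000, num_classes=10)
# the optimizer should include criterion.parameters() alongside the model's weights
```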

SPICE: Semantic pseudo-labeling for image clustering

C Niu, H Shan, G Wang - IEEE Transactions on Image …, 2022 - ieeexplore.ieee.org
The similarity among samples and the discrepancy among clusters are two crucial aspects
of image clustering. However, current deep clustering methods suffer from inaccurate …
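
A common form of semantic pseudo-labeling for clustering is to assign confident samples to their nearest cluster prototype in embedding space and train a classification head on those assignments. The sketch below is a generic version of that idea, with illustrative shapes and a fixed top-k confidence rule; it is not SPICE's exact algorithm.

```python
# Prototype-based pseudo-labeling sketch for deep clustering.
import numpy as np

def pseudo_label(embeddings, prototypes, top_k=100):
    """Return (sample indices, cluster labels) for the top_k most similar samples per prototype."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = e @ p.T                                   # (num_samples, num_clusters) cosine similarity
    idx, labels = [], []
    for c in range(p.shape[0]):
        top = np.argsort(-sims[:, c])[:top_k]        # most confident samples for cluster c
        idx.append(top)
        labels.append(np.full(top_k, c))
    return np.concatenate(idx), np.concatenate(labels)
```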

Comparing Kullback-Leibler divergence and mean squared error loss in knowledge distillation

T Kim, J Oh, NY Kim, S Cho, SY Yun - arXiv preprint arXiv:2105.08919, 2021 - arxiv.org
Knowledge distillation (KD), transferring knowledge from a cumbersome teacher model to a
lightweight student model, has been investigated to design efficient neural architectures …
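
The two objectives being compared are the temperature-softened KL divergence of standard knowledge distillation and a mean squared error taken directly on the logits. The sketch below writes both over raw teacher/student logits in the usual Hinton-style formulation; the temperature value is illustrative.

```python
# The two distillation losses under comparison, written over raw logits.
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, tau=4.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    # tau**2 rescaling keeps gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * tau ** 2

def kd_mse_loss(student_logits, teacher_logits):
    """Mean squared error directly on the logits (no temperature)."""
    return F.mse_loss(student_logits, teacher_logits)

s, t = torch.randn(8, 10), torch.randn(8, 10)
print(kd_kl_loss(s, t).item(), kd_mse_loss(s, t).item())
```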

SCARF: Self-supervised contrastive learning using random feature corruption

D Bahri, H Jiang, Y Tay, D Metzler - arXiv preprint arXiv:2106.15147, 2021 - arxiv.org
Self-supervised contrastive representation learning has proved incredibly successful in the
vision and natural language domains, enabling state-of-the-art performance with orders of …
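
The random feature corruption named in the title produces a second view of each tabular row by replacing a random subset of feature columns with values drawn from those columns' empirical marginals; the two views are then used for contrastive pretraining. The sketch below covers only the corruption step, with an illustrative corruption rate, and omits the encoder and contrastive loss.

```python
# Random-feature-corruption view generation for contrastive pretraining on tabular data.
import numpy as np

def corrupt_features(x_batch, x_train, corruption_rate=0.6, rng=None):
    """Return a corrupted view of x_batch by resampling a random subset of features
    column-wise from the training marginals."""
    rng = rng or np.random.default_rng()
    n, d = x_batch.shape
    mask = rng.random((n, d)) < corruption_rate            # which entries to corrupt
    random_rows = rng.integers(0, x_train.shape[0], size=(n, d))
    replacements = x_train[random_rows, np.arange(d)]      # per-column marginal samples
    return np.where(mask, replacements, x_batch)

x_train = np.random.randn(1000, 16)
view = corrupt_features(x_train[:32], x_train)
```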

Deep learning with label differential privacy

B Ghazi, N Golowich, R Kumar… - Advances in neural …, 2021 - proceedings.neurips.cc
The Randomized Response (RR) algorithm is a classical technique to improve
robustness in survey aggregation, and has been widely adopted in applications with …
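
For context, the classical k-ary Randomized Response mechanism keeps the true label with probability e^ε / (e^ε + k - 1) and otherwise reports a uniformly random other class, which satisfies ε-differential privacy for the label. The sketch below is the textbook mechanism, not the paper's specific training procedure.

```python
# k-ary Randomized Response applied to class labels (epsilon-label-DP).
import math
import random

def randomized_response(label, num_classes, epsilon):
    """Keep `label` w.p. e^eps / (e^eps + k - 1); otherwise return a random other class."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if random.random() < p_keep:
        return label
    # flip to one of the remaining k - 1 classes uniformly at random
    other = random.randrange(num_classes - 1)
    return other if other < label else other + 1

# Example: privatize labels before training with epsilon = 1.
noisy_labels = [randomized_response(y, num_classes=10, epsilon=1.0) for y in [3, 7, 0]]
```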

Boundary smoothing for named entity recognition

E Zhu, J Li - arXiv preprint arXiv:2204.12031, 2022 - arxiv.org
Neural named entity recognition (NER) models may easily encounter the over-confidence
issue, which degrades the performance and calibration. Inspired by label smoothing and …
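
The label smoothing that inspires this work replaces one-hot targets with softened distributions; boundary smoothing, per the paper's framing, applies a similar redistribution to candidate spans near annotated entity boundaries. The sketch below shows only standard label smoothing, with an illustrative smoothing factor, and does not reproduce the boundary-aware variant.

```python
# Standard label smoothing: 1 - eps on the gold class, eps spread uniformly over all classes.
import torch
import torch.nn.functional as F

def smooth_targets(targets, num_classes, eps=0.1):
    one_hot = F.one_hot(targets, num_classes).float()
    return one_hot * (1.0 - eps) + eps / num_classes

soft = smooth_targets(torch.tensor([2, 0]), num_classes=5, eps=0.1)
# soft[0] -> [0.02, 0.02, 0.92, 0.02, 0.02]
```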