2021 | ML | | Pt | MetaLabelNet: Learning to Generate Soft-Labels from Noisy-Labels |
2020 | ML | ICPR | Pt | Meta Soft Label Generation for Noisy Labels |
2020 | RL | | | Learning Adaptive Loss for Robust Learning with Noisy Labels |
2020 | LNC | | | ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks |
2020 | DP | | | Identifying Mislabeled Data using the Area Under the Margin Ranking |
2020 | R | | | Limited Gradient Descent: Learning With Noisy Labels |
2020 | NC | | | Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning |
2020 | LNC | | | Temporal Calibrated Regularization for Robust Noisy Label Learning |
2020 | NC | | | Part-dependent Label Noise: Towards Instance-dependent Label Noise
2020 | NC | | | Class2Simi: A New Perspective on Learning with Label Noise |
2020 | LNC | | | Learning from Noisy Labels with Noise Modeling Network |
2020 | LNC | | | ExpertNet: Adversarial Learning and Recovery Against Noisy Labels |
2020 | R | | Pt | Early-Learning Regularization Prevents Memorization of Noisy Labels |
2020 | SC | CVPR | Pt | Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization |
2020 | SIW | CVPR | Tf | Distilling Effective Supervision from Severe Label Noise |
2020 | NC | CVPR | | Training Noise-Robust Deep Neural Networks via Meta-Learning |
2020 | LNC | CVPR | | Global-Local GCN: Large-Scale Label Noise Cleansing for Face Recognition |
2020 | SIW | ECCV | | Graph convolutional networks for learning with few clean and many noisy labels |
2020 | SIW | ECCV | | NoiseRank: Unsupervised Label Noise Reduction with Dependence Models |
2020 | R | ICLR | | Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee |
2020 | R | ICLR | | Can Gradient Clipping Mitigate Label Noise? |
2020 | SSL | ICLR | Pt | DivideMix: Learning with Noisy Labels as Semi-supervised Learning |
2020 | SC | AAAI | | Self-Paced Robust Learning for Leveraging Clean Labels in Noisy Data |
2020 | LNC | IJCAI | | Learning with Noise: Improving Distantly-Supervised Fine-grained Entity Typing via Automatic Relabeling |
2020 | SIW | IJCAI | | Label Distribution for Learning with Noisy Labels |
2020 | RL | IJCAI | | Can Cross Entropy Loss Be Robust to Label Noise? |
2020 | SC | WACV | | Learning from noisy labels via discrepant collaborative training |
2020 | LNC | WACV | | A novel self-supervised re-labeling approach for training with noisy labels |
2020 | SC | ICML | | Searching to Exploit Memorization Effect in Learning from Corrupted Labels |
2020 | ML | ICML | | SIGUA: Forgetting May Make Learning with Noisy Labels More Robust |
2020 | R | ICML | Pt | Improving Generalization by Controlling Label-Noise Information in Neural Network Weights |
2020 | RL | ICML | | Normalized Loss Functions for Deep Learning with Noisy Labels |
2020 | RL | ICML | | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates |
2020 | SC | ICML | | Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels |
2020 | O | ICML | | Deep k-NN for Noisy Labels |
2020 | LNC | ICML | | Error-Bounded Correction of Noisy Labels |
2020 | O | ICML | | Does label smoothing mitigate label noise? |
2020 | DP | ICML | | Learning with Bounded Instance- and Label-dependent Label Noise |
2020 | O | ICML | | Training Binary Neural Networks through Learning with Noisy Supervision |
2019 | SIW | NIPS | Pt | Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting |
2019 | RL | ICML | | On Symmetric Losses for Learning from Corrupted Labels |
2019 | O | ICLR | Pt | SOSELETO: A Unified Approach to Transfer Learning and Training with Noisy Labels |
2019 | LNC | ICLR | | An Energy-Based Framework for Arbitrary Label Noise Correction |
2019 | NC | NIPS | Pt | Are Anchor Points Really Indispensable in Label-Noise Learning? |
2019 | O | NIPS | Pt | Combinatorial Inference against Label Noise |
2019 | RL | NIPS | Pt | L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise
2019 | O | CVPR | | MetaCleaner: Learning to Hallucinate Clean Representations for Noisy-Labeled Visual Recognition |
2019 | LNC | ICCV | | O2U-Net: A Simple Noisy Label Detection Approach for Deep Neural Networks |
2019 | SC | ICCV | * | Co-Mining: Deep Face Recognition with Noisy Labels |
2019 | O | | | NLNL: Negative Learning for Noisy Labels |
2019 | R | | Pt | Using Pre-Training Can Improve Model Robustness and Uncertainty |
2019 | SSL | | | Robust Learning Under Label Noise With Iterative Noise-Filtering |
2019 | ML | CVPR | Pt | Learning to Learn from Noisy Labeled Data |
2019 | ML | | | Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels |
2019 | RL | | Keras | Symmetric Cross Entropy for Robust Learning with Noisy Labels |
2019 | RL | | Caffe | Improved Mean Absolute Error for Learning Meaningful Patterns from Abnormal Training Data |
2019 | LQA | CVPR | | Learning From Noisy Labels By Regularized Estimation Of Annotator Confusion |
2019 | SIW | CVPR | Caffe | Noise-Tolerant Paradigm for Training Face Recognition CNNs |
2019 | SIW | ICML | Pt | Combating Label Noise in Deep Learning Using Abstention |
2019 | SIW | | | Robust Learning at Noisy Labeled Medical Images: Applied to Skin Lesion Classification |
2019 | SC | ICML | Keras | Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels |
2019 | SC | ICML | Pt | How does Disagreement Help Generalization against Label Corruption? |
2019 | SC | CVPR | | Learning a Deep ConvNet for Multi-label Classification with Partial Labels |
2019 | SC | | | Curriculum Loss: Robust Learning and Generalization against Label Corruption |
2019 | SC | | | SELF: Learning to Filter Noisy Labels with Self-Ensembling |
2019 | LNC | CVPR | Pt | Graph Convolutional Label Noise Cleaner: Train a Plug-and-play Action Classifier for Anomaly Detection |
2019 | LNC | ICCV | | Photometric Transformer Networks and Label Adjustment for Breast Density Prediction |
2019 | LNC | CVPR | Pt | Probabilistic End-to-end Noise Correction for Learning with Noisy Labels |
2019 | LNC | TGRS | Matlab | Hyperspectral image classification in the presence of noisy labels |
2019 | LNC | ICCV | | Deep Self-Learning From Noisy Labels |
2019 | NC | AAAI | Tf | Safeguarded Dynamic Label Regression for Noisy Supervision |
2019 | NC | ICML | Pt | Unsupervised Label Noise Modeling and Loss Correction |
2018 | O | ECCV | | Learning with Biased Complementary Labels |
2018 | O | | | Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks |
2018 | R | ICML | Keras | Dimensionality-Driven Learning with Noisy Labels
2018 | R | ECCV | | Deep bilevel learning |
2018 | SSL | WACV | | A semi-supervised two-stage approach to learning from noisy labels |
2018 | ML | | | Improving Multi-Person Pose Estimation using Label Correction |
2018 | RL | NIPS | | Generalized cross entropy loss for training deep neural networks with noisy labels |
2018 | LQA | ICLR | Repo | Learning From Noisy Singly-Labeled Data |
2018 | LQA | AAAI | | Deep learning from crowds |
2018 | SIW | CVPR | Repo | Iterative Learning With Open-Set Noisy Labels |
2018 | SIW | | Tf | Learning to Reweight Examples for Robust Deep Learning |
2018 | SIW | CVPR | Tf | CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise
2018 | SIW | | | ChoiceNet: Robust Learning by Revealing Output Correlations |
2018 | SIW | IEEE | | Multiclass Learning with Partially Corrupted Labels |
2018 | SC | NIPS | Pt | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels |
2018 | SC | IEEE | | Progressive Stochastic Learning for Noisy Labels |
2018 | SC | ECCV | Sklearn | CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images
2018 | LNC | CVPR | Chainer | Joint Optimization Framework for Learning with Noisy Labels |
2018 | LNC | TIFS | Pt, Caffe, Tf | A light CNN for deep face representation with noisy labels |
2018 | LNC | WACV | | Iterative cross learning on noisy labels |
2018 | NC | NIPS | Pt | Using trusted data to train deep networks on labels corrupted by severe noise |
2018 | NC | ISBI | | Training a neural network based on unreliable human annotation of medical images |
2018 | NC | IEEE | | Deep learning from noisy image labels with quality embedding |
2018 | NC | NIPS | Tf | Masking: A new perspective of noisy supervision |
2017 | O | | | Learning with Auxiliary Less-Noisy Labels |
2017 | R | | | Regularizing neural networks by penalizing confident output distributions |
2017 | R | | Pt | mixup: Beyond Empirical Risk Minimization |
2017 | MIL | CVPR | | Attend in groups: a weakly-supervised deep learning framework for learning from web data |
2017 | ML | ICCV | | Learning from Noisy Labels with Distillation |
2017 | ML | | | Avoiding your teacher's mistakes: Training neural networks with controlled weak supervision |
2017 | ML | | | Learning to Learn from Weak Supervision by Full Supervision |
2017 | RL | AAAI | | Robust Loss Functions under Label Noise for Deep Neural Networks
2017 | LQA | ICLR | | Who Said What: Modeling Individual Labelers Improves Classification |
2017 | LQA | CVPR | | Lean crowdsourcing: Combining humans and machines in an online system |
2017 | SC | NIPS | Tf | Decoupling "when to update" from "how to update"
2017 | SC | NIPS | Tf* | Active bias: Training more accurate neural networks by emphasizing high variance samples |
2017 | SC | | Tf | MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels |
2017 | SC | | Sklearn | Learning with confident examples: Rank pruning for robust classification with noisy labels |
2017 | SC | NIPS | | Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks |
2017 | LNC | IEEE | | Self-Error-Correcting Convolutional Neural Network for Learning with Noisy Labels |
2017 | LNC | IEEE | | Improving crowdsourced label quality using noise correction |
2017 | LNC | | | Fidelity-weighted learning |
2017 | LNC | CVPR | | Learning From Noisy Large-Scale Datasets With Minimal Supervision |
2017 | NC | CVPR | Keras | Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach |
2017 | NC | ICLR | Keras | Training Deep Neural-Networks Using a Noise Adaptation Layer |
2016 | EM | KBS | | A robust multi-class AdaBoost algorithm for mislabeled noisy data |
2016 | R | CVPR | | Rethinking the inception architecture for computer vision |
2016 | SSL | AAAI | | Robust semi-supervised learning through label aggregation |
2016 | ML | NC | | Noise detection in the Meta-Learning Level |
2016 | RL | | | On the convergence of a family of robust losses for stochastic gradient descent |
2016 | RL | ICML | | Loss factorization, weakly supervised learning and label noise robustness |
2016 | SIW | ICLR | Matlab | Auxiliary image regularization for deep cnns with noisy labels |
2016 | SIW | CVPR | Caffe | Seeing Through the Human Reporting Bias: Visual Classifiers From Noisy Human-Centric Labels |
2016 | SC | ECCV | Repo | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition |
2016 | NC | ICDM | Matlab | Learning deep networks from noisy labels with dropout regularization |
2016 | NC | ICASSP | Keras | Training deep neural-networks based on unreliable labels
2015 | O | | | Learning discriminative reconstructions for unsupervised outlier removal |
2015 | EM | | | Rboost: label noise-robust boosting algorithm based on a nonconvex loss function and the numerically stable base learners |
2015 | MIL | CVPR | | Visual recognition by learning from web data: A weakly supervised domain generalization approach |
2015 | RL | NIPS | | Learning with symmetric label noise: The importance of being unhinged
2015 | RL | NC | | Making risk minimization tolerant to label noise |
2015 | LQA | | | Deep classifiers from image tags in the wild |
2015 | SIW | TPAMI | Pt | Classification with noisy labels by importance reweighting |
2015 | SC | ICCV | Website | Webly supervised learning of convolutional networks |
2015 | NC | CVPR | Caffe | Learning From Massive Noisy Labeled Data for Image Classification |
2015 | NC | ICLR | | Training Convolutional Networks with Noisy Labels |
2014 | R | | | Explaining and harnessing adversarial examples |
2014 | R | JMLR | | Dropout: a simple way to prevent neural networks from overfitting |
2014 | SC | | Keras | Training Deep Neural Networks on Noisy Labels with Bootstrapping |
2014 | NC | | | Learning from Noisy Labels with Deep Neural Networks |
2014 | LQA | | | Learning from multiple annotators with varying expertise |
2013 | EM | | | Boosting in the presence of label noise |
2013 | RL | NIPS | | Learning with Noisy Labels |
2013 | RL | IEEE | | Noise tolerance under risk minimization |
2012 | EM | | | A noise-detection based AdaBoost algorithm for mislabeled data |
2012 | RL | ICML | | Learning to Label Aerial Images from Noisy Data |
2011 | EM | | | An empirical comparison of two boosting algorithms on real data sets with artificial class noise |
2009 | LQA | | | Supervised learning from multiple experts: whom to trust when everyone lies a bit |
2008 | LQA | NIPS | | Whose vote should count more: Optimal integration of labels from labelers of unknown expertise |
2006 | RL | JASA | | Convexity, classification, and risk bounds |
2000 | EM | | | An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization |