# Awesome 'Less Than One'-Shot (LO-Shot) Learning

Papers related to 'Less Than One'-Shot (LO-Shot) learning.
## Papers found in this repo
### Paper 1 - 'Less Than One'-Shot Learning: Learning N Classes from M<N Samples

- **Preprint** - https://arxiv.org/abs/2009.08449
- **Published** - In AAAI 2021 Proceedings
- **Code and appendix** - `Paper1` directory
- **TL;DR** - Explore the decision landscapes generated by soft-label k-Nearest Neighbors classifiers in the 'less than one'-shot setting, where a model must learn N classes from only M<N samples (a minimal sketch of such a classifier follows below).
- **Press coverage** - LO-Shot Learning has received significant press coverage.
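
The following is a minimal sketch of a distance-weighted soft-label kNN classifier in the spirit of the classifiers the paper analyzes. The array names, the inverse-distance weighting, and the `eps` smoothing term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_label_knn_predict(X_proto, Y_soft, x, k=2, eps=1e-12):
    """Predict a class for point x from soft-label prototypes.

    X_proto : (m, d) array of prototype locations
    Y_soft  : (m, c) array; row i is prototype i's soft-label distribution
    """
    dist = np.linalg.norm(X_proto - x, axis=1)   # distance to each prototype
    nn = np.argsort(dist)[:k]                    # indices of the k nearest prototypes
    w = 1.0 / (dist[nn] + eps)                   # inverse-distance weights
    vote = w @ Y_soft[nn]                        # weighted sum of soft labels
    return int(np.argmax(vote))                  # harden to a single class
```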
### Paper 2 - Optimal 1-NN Prototypes for Pathological Geometries

- **Preprint** - https://arxiv.org/abs/2011.00228
- **Published** - In PeerJ Computer Science
- **Code** - `Paper2` directory
- **TL;DR** - Design optimal 1-NN prototypes even in pathological geometries where most prototype methods fail (a toy example of such a geometry follows below).
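
To see why pathological geometries defeat naive prototype methods, here is a small, self-contained illustration on synthetic data (not the paper's algorithm): when one class encircles another, the two class means nearly coincide, so nearest-mean prototypes classify at roughly chance level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pathological geometry: class 0 is a ring, class 1 a blob at its center,
# so both class means land near the origin and mean prototypes are useless.
theta = rng.uniform(0, 2 * np.pi, 500)
ring = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((500, 2))
blob = 0.1 * rng.standard_normal((500, 2))
X = np.vstack([ring, blob])
y = np.r_[np.zeros(500), np.ones(500)].astype(int)

protos = np.stack([ring.mean(axis=0), blob.mean(axis=0)])  # one mean per class
pred = np.argmin(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)
print("1-NN accuracy with class-mean prototypes:", (pred == y).mean())  # near chance
```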
### Paper 3 - One Line to Rule Them All: Generating LO-Shot Soft-Label Prototypes

- **Preprint** - https://arxiv.org/abs/2102.07834
- **Published** - In IJCNN 2021 Proceedings
- **Code** - `Paper3` directory
- **TL;DR** - Represent your training dataset with fewer prototypes than even the number of classes found in the data (a worked toy example follows below).
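
As a worked toy example of this counterintuitive result, using the same assumed distance-weighted soft-label kNN rule sketched under Paper 1 and hand-picked label vectors: two soft-label prototypes on the unit interval induce three class regions.

```python
import numpy as np

# Two prototypes at the ends of the unit interval, three classes (0, 1, 2).
# The soft-label vectors are hand-picked so the middle class wins between them.
X_proto = np.array([[0.0], [1.0]])
Y_soft = np.array([[0.6, 0.4, 0.0],
                   [0.0, 0.4, 0.6]])

def predict(x, eps=1e-12):
    dist = np.abs(X_proto[:, 0] - x)   # distances to both prototypes
    w = 1.0 / (dist + eps)             # inverse-distance weights
    return int(np.argmax(w @ Y_soft))  # weighted soft-label vote, hardened

for x in (0.1, 0.5, 0.9):
    print(x, "->", predict(x))         # prints 0, 1, 2: three classes from two prototypes
```

With inverse-distance weighting, the predicted label vector interpolates linearly between the two prototypes, so with these label vectors the middle class wins on the middle third of the interval.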
### Paper 4 - Can humans do less-than-one-shot learning?

- **Preprint** - https://arxiv.org/abs/2202.04670
- **Published** - In CogSci 2022 Proceedings
- **Code** - `LOSLP` directory
- **TL;DR** - Humans can also do LO-shot learning.
### Paper 5 - Using Compositionality to Learn Many Categories from Few Examples

- **Preprint** - https://osf.io/preprints/psyarxiv/upn8e
- **Published** - In CogSci 2024 Proceedings
- **Code** - TBA
- **TL;DR** - Humans are better at LO-shot learning when they can use techniques like compositional generalization.
## Papers found in other repos
### Paper - Soft-Label Dataset Distillation and Text Dataset Distillation

- **Preprint** - https://arxiv.org/abs/1910.02551v3
- **Code** - https://github.com/ilia10000/dataset-distillation
- **TL;DR** - Soft-label dataset distillation generates small synthetic datasets that train neural networks to roughly the same performance as training on the original data; experiments with it provided the first evidence of LO-Shot Learning in neural networks (a simplified sketch of the underlying bilevel optimization follows below).
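
For intuition, here is a heavily simplified sketch of the bilevel optimization behind soft-label dataset distillation: both the synthetic inputs and their soft labels are learnable, the inner loop takes a single differentiable SGD step of a linear model on the distilled data, and the outer loop asks the resulting model to fit real data. The shapes, hyperparameters, linear model, and stand-in random "real" batch are all assumptions; the actual algorithm unrolls many training steps of a neural network.

```python
import torch
import torch.nn.functional as F

d, c, m = 64, 10, 5   # assumed feature dim, class count, distilled points (m < c allowed)
inner_lr = 0.1        # assumed learning rate for the inner (model) step

# Learnable synthetic inputs AND learnable soft labels (parameterized as logits).
distilled_x = torch.randn(m, d, requires_grad=True)
distilled_y_logits = torch.zeros(m, c, requires_grad=True)
opt = torch.optim.Adam([distilled_x, distilled_y_logits], lr=1e-2)

def soft_ce(logits, soft_targets):
    # Cross-entropy against a soft-label distribution.
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

for step in range(1000):
    # Stand-in for a real training batch; a real run would load actual data here.
    real_x, real_y = torch.randn(128, d), torch.randint(0, c, (128,))

    # Inner step: train a fresh linear model on the distilled data, differentiably.
    W0 = torch.zeros(d, c, requires_grad=True)
    inner_loss = soft_ce(distilled_x @ W0, F.softmax(distilled_y_logits, dim=1))
    (grad_W,) = torch.autograd.grad(inner_loss, W0, create_graph=True)
    W1 = W0 - inner_lr * grad_W  # model weights after one step on the distilled data

    # Outer step: the trained model should fit real data; gradients flow back
    # through the inner update into the distilled inputs and soft labels.
    outer_loss = F.cross_entropy(real_x @ W1, real_y)
    opt.zero_grad()
    outer_loss.backward()
    opt.step()
```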