MAML-TensorFlow

An elegant and efficient TensorFlow implementation of the ICML 2017 paper: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

Highlights

How-To

  1. Download mini-Imagenet from here and extract it as:
	miniimagenet/
	├── images
	│   ├── n0210891500001298.jpg
	│   ├── n0287152500001298.jpg
	│   └── ...
	├── test.csv
	├── val.csv
	├── train.csv
	└── proc_images.py


Then replace the default paths with your actual dataset paths in data_generator.py:

		metatrain_folder = config.get('metatrain_folder', '/hdd1/liangqu/datasets/miniimagenet/train')
		# evaluates on the test split by default; change to False to use the val split
		if True:
			metaval_folder = config.get('metaval_folder', '/hdd1/liangqu/datasets/miniimagenet/test')
		else:
			metaval_folder = config.get('metaval_folder', '/hdd1/liangqu/datasets/miniimagenet/val')
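Before training, it can save time to verify that the extracted dataset matches the layout above. The following helper is a hypothetical sanity check, not part of the repo:

```python
import os

def check_miniimagenet(root):
    """Return the expected entries that are missing under `root`.

    Checks only the top-level layout shown above; an empty list
    means the directory looks right.
    """
    expected = ['images', 'train.csv', 'val.csv', 'test.csv', 'proc_images.py']
    return [name for name in expected
            if not os.path.exists(os.path.join(root, name))]
```

Calling `check_miniimagenet('/hdd1/liangqu/datasets/miniimagenet')` (with your own path) before step 2 catches a misplaced `images/` folder or CSV early.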
  2. Resize all raw images to 84x84 by running:

	python proc_images.py
  3. Train:

	python main.py

Since tf.summary is time-consuming, it is turned off by default. Uncomment these two lines to turn it on:

	# write graph to tensorboard
	# tb = tf.summary.FileWriter(os.path.join('logs', 'mini'), sess.graph)

	...
	# summ_op
	# tb.add_summary(result[1], iteration)

and then monitor the training process with:

	tensorboard --logdir logs
  4. Test:

	python main.py --test
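Each training iteration above performs one MAML meta-update: adapt a copy of the weights on a task's support set, then update the original weights from the query-set loss. The sketch below illustrates this with a first-order approximation on a toy 1-D linear model; all names are illustrative and none of this is the repo's actual API:

```python
# Toy model: y = w * x, squared loss. The repo's real model is a
# conv net on mini-Imagenet; this only shows the update structure.

def grad(w, xs, ys):
    # d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def maml_step(w, tasks, alpha=0.01, beta=0.01):
    """One meta-update over a batch of (support, query) tasks.

    First-order approximation (FOMAML): the meta-gradient is taken
    at the adapted weights w_i without differentiating through the
    inner gradient step itself.
    """
    meta_grad = 0.0
    for (sx, sy), (qx, qy) in tasks:
        w_i = w - alpha * grad(w, sx, sy)   # inner loop: task adaptation
        meta_grad += grad(w_i, qx, qy)      # outer loop: query-set gradient
    return w - beta * meta_grad / len(tasks)
```

Running `maml_step` repeatedly on tasks drawn from the same distribution moves `w` toward an initialization that adapts well in one inner step, which is the core idea of the paper.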

MAML needs to generate 200,000 train/eval episodes before training, which usually takes 6~8 minutes. The episodes are therefore dumped to a cache file, filelist.pkl, on the first run; subsequent runs load from this cache, which takes only a few seconds.

	generating episodes: 100%|█████████████████████████████████████████| 200000/200000 [04:38<00:00, 717.27it/s]
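The caching scheme described above amounts to a generate-once, load-thereafter pattern. Here is a minimal sketch of it; the function name and the shape of the episode list are assumptions, not the repo's actual code:

```python
import os
import pickle

def load_or_generate_episodes(cache_path, generate_fn):
    """Load episodes from `cache_path` if it exists; otherwise
    generate them with `generate_fn`, cache them, and return them."""
    if os.path.exists(cache_path):
        with open(cache_path, 'rb') as f:
            return pickle.load(f)    # fast path: seconds
    episodes = generate_fn()         # slow path: ~6-8 minutes in the repo
    with open(cache_path, 'wb') as f:
        pickle.dump(episodes, f)
    return episodes
```

Note that a stale cache is a common pitfall with this pattern: if you change the episode-generation parameters (e.g. n-way or k-shot), delete filelist.pkl so it is regenerated.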