Today I went through one of the fastai notebooks to review parts of the fastai library.
It was a great review and a perfect notebook for beginners. There wasn't a ton of implementation, but there were some notable sections; I particularly liked "Image Recognizers Can Tackle Non-Image Tasks". An inspiring line from Jeremy Howard:
You shouldn't think of approaches like the ones described here as "hacky workarounds," because actually they often (as here) beat previously state-of-the-art results. These really are the right ways to think about these problem domains.
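To make that concrete for myself: the trick is simply to render non-image data as images. Here is a minimal sketch of the idea (my own, not from the notebook; the signal_to_spectrogram_image helper and the fake 440 Hz tone are made up for illustration) that turns a 1-D signal into a spectrogram PNG an image classifier could consume:

import numpy as np
import matplotlib.pyplot as plt

def signal_to_spectrogram_image(signal, sample_rate, out_path):
    # Render a time-frequency plot of the signal and save it as an image
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)  # roughly 224x224 pixels
    ax.specgram(signal, Fs=sample_rate)
    ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)

# Hypothetical example: a noisy 440 Hz tone; real data would be recorded audio clips
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)
signal_to_spectrogram_image(tone, sr, "tone.png")

Label a folder of such images per class and the same fastai image learners shown below apply unchanged.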
Most fastai code (at least at the beginner level) looks like this:
from fastai.vision.all import *  # provides untar_data, SegmentationDataLoaders, unet_learner, np, etc.

path = untar_data(URLs.CAMVID_TINY)  # download a tiny version of the CamVid segmentation dataset
dls = SegmentationDataLoaders.from_label_func(
    path, bs=8, fnames=get_image_files(path/"images"),
    label_func=lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',  # map each image to its mask file
    codes=np.loadtxt(path/'codes.txt', dtype=str),  # class name for each pixel value
)
learn = unet_learner(dls, resnet34)  # U-Net learner with a pretrained ResNet-34 backbone
learn.fine_tune(8)  # fine-tune the pretrained weights for 8 epochs
The code above trains an image segmentation model. I like how small it is, though it leans heavily on shorthand; some might argue it's a little unreadable, but I do like how quickly you can bootstrap a model.
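For comparison, the image classification version of the same bootstrap has the same shape. I'm reconstructing this from memory of the fastai quickstart (the PETS dataset, the is_cat labeling rule, and vision_learner all come from there; in older fastai versions the learner was cnn_learner), so treat the exact arguments as approximate:

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'  # Oxford-IIIT Pet images
def is_cat(x): return x[0].isupper()   # cat breeds are capitalized in the filenames
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate)  # pretrained ResNet-34 classifier
learn.fine_tune(1)

Either way it's the same three steps: build DataLoaders, wrap them in a learner, call fine_tune.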
I noticed that this is part of the intro to fastai, one of the beginner tutorials. See here.
Overall, a great review of the basics (overfitting, validation vs. test sets, etc.). There was also some great history to brush up on and learn (I had no idea Walter Pitts was homeless!).
Excited to implement the MNIST basics tomorrow!