More Sophisticated TF Example

Started last weekend, continuing now. This example is a two-layer ReLU network.

Took another day to finish it. The final construct is still running (20,000 iterations, on iteration 3,200 after 10 minutes or so, so maybe another hour to go), but it’s performing as advertised. The deeper example forced me to think about some of the constructs and additional topics such as pooling and dropout.

I wasn’t sure at first what drives the dimensionality of the outputs from each layer. This example used two layers. The first one creates 32 features on the full 28 x 28 image; pooling then picks the maximum value from each 2 x 2 sub-array, reducing the output to 14 x 14 (I think I’m interpreting that right). The second layer creates 64 features on that 14 x 14 output, and pooling reduces it again to a 7 x 7 array. A densely connected layer then creates 1,024 features from the flattened result, and those are used to create a 10-feature readout layer. I admit to confusion about the role of the densely connected layer (the one that creates the 1,024 final features). But mechanically, the package works very nicely.
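To convince myself the shape arithmetic above works out, here is a minimal sketch in plain Python (no TensorFlow; the function name and numbers are just my own illustration of the tutorial’s two-layer layout): a toy 2 x 2 max-pool, plus the walk-through of how 28 x 28 shrinks to 7 x 7 before the densely connected layer.

```python
def max_pool_2x2(image):
    """Downsample a 2-D list-of-lists by keeping the max of each 2x2 block."""
    h, w = len(image), len(image[0])
    return [
        [max(image[r][c], image[r][c + 1],
             image[r + 1][c], image[r + 1][c + 1])
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

# Shape walk-through for the two-layer example:
# 28x28 input -> layer 1 (32 features) -> 2x2 max-pool -> 14x14 x 32
# 14x14 x 32  -> layer 2 (64 features) -> 2x2 max-pool -> 7x7 x 64
# 7x7 x 64 flattened -> densely connected layer of 1,024 -> 10-feature readout
side = 28
side //= 2                      # after first pool: 14
side //= 2                      # after second pool: 7
flattened = side * side * 64    # inputs to the dense layer: 3136
print(side, flattened)
```

Each pooling step halves both spatial dimensions, which is why the feature maps go 28 → 14 → 7 while the feature count per position grows 32 → 64.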
