TF

Continuing to play with TensorFlow. The more complex NN I ran, based on the TF tutorial/example, got to 99.05% accuracy. It’s clear I need a somewhat deeper understanding of the pieces, but it’s coming together a bit. It’s also clear that these examples depend heavily on the convolution kernels, which provide the numeric input to the nodes in each layer. Sliding a kernel across the image gives a global sense of what that kernel says about the image. So, for example, if a kernel is sensitive to a dot surrounded by blank pixels, and the image consists of a bunch of dots, I’d expect the kernel to respond equally at every dot, and the resulting activations to correctly label the image. Each kernel corresponds to a different feature being assessed across the image, and the resulting weights also capture where in the image that kernel’s feature is found.
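As a sanity check on that intuition, here is a minimal sketch (plain NumPy, not the TF tutorial code) of sliding a hypothetical "dot detector" kernel across an image of isolated dots. The feature map peaks exactly where the dots sit:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel across the image (no padding) and
    return the resulting feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A "dot detector": responds strongly to a bright pixel
# surrounded by blank pixels.
dot_kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

# A 7x7 image containing two isolated dots.
image = np.zeros((7, 7))
image[2, 2] = 1.0
image[4, 5] = 1.0

fmap = conv2d_valid(image, dot_kernel)
# The feature map has equal peaks of 8 at both dot locations,
# so the kernel "finds" the feature everywhere it occurs.
print(fmap.max())
```

In a real convolutional layer, TF learns many such kernels at once and stacks their feature maps; this loop is just the single-kernel, single-channel case written out by hand.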

My plan is to go on to the next piece of the TF tutorial, which focuses on mechanics.

Thinking ahead to applying this to graph data.
