Below are 3 new images from the dataset, as shown in the ‘preprocess the data’ section:
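As a rough sketch of what that preprocessing involves (assuming the standard Keras Fashion MNIST loader used in the tutorial this report follows; the variable names are illustrative), the pixel values are scaled from the 0 to 255 range down to 0 to 1 before training:

import tensorflow as tf

# Load the Fashion MNIST training and test sets.
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()

# Pixel values arrive as integers in [0, 255]; scale them to [0, 1] so the
# network trains on values in a small, consistent range.
train_images = train_images / 255.0
test_images = test_images / 255.0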
The array below, from the ‘make predictions’ section, represents a list of probabilities corresponding to each of the 10 image classes we specified. The softmax function transforms an array of real numbers into probabilities between 0 and 1 that sum to 1. In this case the second value in the array shows that the model is nearly 100% confident that the image belongs to class 1, which corresponds to a trouser. The argmax function for this image also produces the value 1, again corresponding to the trouser class.
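As a small illustration of how softmax and argmax work together (the logits below are made up for the example, not the actual output from this run):

import numpy as np
import tensorflow as tf

# Hypothetical raw model outputs (logits) for one image across the 10 classes.
logits = np.array([1.2, 9.8, 0.3, 0.1, 0.5, 0.2, 1.0, 0.4, 0.6, 0.7])

# Softmax rescales the logits into probabilities between 0 and 1 that sum to 1.
probabilities = tf.nn.softmax(logits).numpy()
print(probabilities.sum())       # approximately 1.0
print(np.argmax(probabilities))  # 1, i.e. the 'Trouser' class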
Below are 2 new images from the dataset, as shown in the ‘verify predictions’ section:
The image below, from the ‘train model’ section, shows that the model predicted the image was an ankle boot. This matches the label ‘9’ produced by the argmax function. We did not display the softmax probabilities in this case because we simply wanted the label from the different classes, which run sequentially from 0 to 9, rather than the probability that the image belongs to each class. Those probabilities were calculated by the following code:
predictions_single = probability_model.predict(img)
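For context, here is a sketch of how probability_model and img are typically set up in the tutorial this report follows; it assumes model is the trained classifier and test_images is the preprocessed test set from earlier, so those names are assumptions rather than code from this report:

import numpy as np
import tensorflow as tf

# Append a Softmax layer so predict() returns probabilities rather than raw logits.
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])

# Keras models predict on batches, so wrap the single image in a batch of one.
img = np.expand_dims(test_images[0], 0)  # shape (1, 28, 28)

predictions_single = probability_model.predict(img)
print(np.argmax(predictions_single[0]))  # predicted class index, e.g. 9 for 'Ankle boot'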
Below is a plot of 25 handwritten digits from the new MNIST dataset with their labels indicated below each image. The accuracies on the training and test datasets are also included.
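A sketch of how such a plot and the accuracy figures can be produced, assuming the tutorial code is re-run with tf.keras.datasets.mnist in place of fashion_mnist (the variable names are illustrative):

import matplotlib.pyplot as plt
import tensorflow as tf

# Load the handwritten-digit dataset and rescale the pixels to [0, 1].
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# Plot the first 25 digits with their labels underneath, as in the figure.
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(str(train_labels[i]))
plt.show()

# After training the model the same way as for the fashion data, the reported
# test accuracy would come from something like:
# test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)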
Below are 2 images of digits, along with graphs of their predicted labels.
The model seems to be more accurate on the MNIST dataset than on the Fashion MNIST dataset. I think this is because the digits are simpler images, and therefore more discernible to the model and easier to sort into classes.