Unsupervised segmentation (unlabeled regions of interest, ROIs) and autoencoder (AE)-based classification were used to distinguish cavitation patterns in knees and digits from the stained images (n = 20-30 images/group). Each image was divided into 256 × 256-pixel patches, and a convolutional neural network (CNN)-based unsupervised segmentation was applied to identify ROIs. The ROI patches were then fed into a CNN-based AE whose latent-space layer was connected to a classifier for patch-level classification. The AE was trained on the ROIs identified by the unsupervised segmentation, and the image-level class labels were used to train the classifier. Whole-image classifications were determined by majority voting over the patch-level predictions and evaluated by classification accuracy.
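The patch-tiling and majority-voting steps of this pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN segmentation network and the AE/classifier are omitted (any patch-level classifier could stand in), and the function names `extract_patches` and `majority_vote` are hypothetical.

```python
import numpy as np

def extract_patches(image, patch=256):
    """Tile an H x W (x C) image into non-overlapping patch x patch
    blocks, discarding any partial patches at the borders."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)

def majority_vote(patch_labels):
    """Whole-image class = most frequent patch-level prediction."""
    values, counts = np.unique(patch_labels, return_counts=True)
    return values[np.argmax(counts)]

# Toy example: a 512 x 768 grayscale image yields 2 x 3 = 6 patches.
img = np.zeros((512, 768))
patches = extract_patches(img)
print(patches.shape)                       # (6, 256, 256)
# Aggregate (hypothetical) patch-level predictions to an image label.
print(majority_vote([1, 0, 1, 1, 0, 1]))   # 1
```

In the study's pipeline, each patch's label would come from the classifier attached to the AE latent space, and the vote above would produce the whole-image classification that is scored for accuracy.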