Biowulf High Performance Computing at the NIH
YOLO3: a Keras implementation of the "you only look once" algorithm

YOLO is a deep learning-based approach to object detection. Prior work on object detection repurposed classifiers to perform detection. Instead, YOLO frames object detection as a regression problem to spatially separated bounding boxes and associated class probabilities.
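To illustrate the regression framing (this sketch is not part of the module's code), the following function decodes one YOLO-style grid-cell prediction into a bounding box, assuming the standard YOLOv2/v3 parameterization: sigmoid offsets locate the box center within its grid cell, and exponential scaling of an anchor prior gives its width and height.

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, grid_size, image_size):
    """Map raw network outputs (tx, ty, tw, th) for the grid cell at
    (cx, cy) to a box (x_center, y_center, w, h) in image pixels."""
    sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
    stride = image_size / grid_size          # pixels per grid cell
    x = (cx + sigmoid(tx)) * stride          # box center x, in pixels
    y = (cy + sigmoid(ty)) * stride          # box center y, in pixels
    w = anchor_w * math.exp(tw) * stride     # width scaled from the anchor prior
    h = anchor_h * math.exp(th) * stride     # height scaled from the anchor prior
    return x, y, w, h

# An all-zero prediction lands in the middle of cell (6, 6) at the anchor's size:
print(decode_box(0, 0, 0, 0, 6, 6, 2.0, 3.0, grid_size=13, image_size=416))
```

The class probabilities are predicted per cell alongside these four coordinates, which is what makes the whole detection a single regression pass over the image.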

References:

Documentation
Important Notes

Interactive job
Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.

Allocate an interactive session and run the program. Sample session:

[user@biowulf]$ sinteractive --mem=16g --gres=gpu:p100:1,lscratch:10 -c4
[user@cn3316 ~]$ module load YOLO 
Copy a sample ("raccoon") dataset, a configuration file config.json, and a pretrained weights file raccoon.h5 to your current folder:

[user@cn3316 ~]$ cp -r $YOLO_DATA/* . 
Generate anchors for your dataset (optional):
[user@cn3316 ~]$ gen_anchors.py -c config.json
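Anchor generation for YOLO is typically done by k-means clustering over the widths and heights of the training boxes, using 1 − IoU (rather than Euclidean distance) so that clusters group boxes of similar shape. A hedged, self-contained sketch of that idea (not the module's own gen_anchors.py implementation; box sizes here are made-up grid-unit values):

```python
import random

def iou_wh(box, cluster):
    """IoU of two boxes aligned at a common corner, compared by width/height only."""
    w = min(box[0], cluster[0])
    h = min(box[1], cluster[1])
    inter = w * h
    return inter / (box[0] * box[1] + cluster[0] * cluster[1] - inter)

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (width, height) pairs into k anchors using 1 - IoU distance."""
    random.seed(seed)
    centers = random.sample(boxes, k)
    for _ in range(iters):
        # Assign each box to the cluster with the highest IoU (lowest 1 - IoU).
        groups = [[] for _ in range(k)]
        for b in boxes:
            i = max(range(k), key=lambda j: iou_wh(b, centers[j]))
            groups[i].append(b)
        # Recompute each cluster center as the mean width/height of its members.
        for i, g in enumerate(groups):
            if g:
                centers[i] = (sum(b[0] for b in g) / len(g),
                              sum(b[1] for b in g) / len(g))
    return sorted(centers)

boxes = [(1, 2), (1.2, 2.1), (4, 4), (4.5, 3.8)]  # hypothetical (w, h) pairs
print(kmeans_anchors(boxes, k=2))
```

The resulting anchors are what you would paste into the "anchors" field of config.json, so the network starts training from box priors that match your dataset's typical object shapes.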
Start the training process:
[user@cn3316 ~]$ train.py -c config.json
...
Seen labels:    {'raccoon': 217}

Given labels:   ['raccoon']

Training on:    ['raccoon']
...
Loading pretrained weights.
...
resizing:  352 352
resizing:  384 384
Epoch 1/50
resizing:  352 352
resizing:  448 448
...
Epoch 00001: loss improved from inf to 9.20953, saving model to raccoon.h5
Epoch 2/50
resizing:  416 416
resizing:  352 352
...

Prediction:
[user@cn3144 ~]$ predict.py -c config.json -i train_image_folder/raccoon-1.jpg
...
The result will be stored in the output folder:
[user@cn3144 ~]$ show_image.py output/raccoon-1.jpg




Evaluation:
[user@cn3144 ~]$ evaluate.py -c config.json
...
raccoon: 1.0000
mAP: 1.0000
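The mAP value reported above is the mean, over classes, of the average precision; a predicted box counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5). A hedged sketch of the IoU computation itself (not the module's own evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted by 10 px against a 100x100 ground-truth box
# still overlaps well enough to count as a true positive at IoU >= 0.5:
print(iou((0, 0, 100, 100), (10, 10, 110, 110)))
```

A perfect score of 1.0000, as in the raccoon example, simply means every ground-truth box was matched by a detection above the threshold with no false positives; on a single-class toy dataset this is not unusual.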
This module also includes a YOLO annotation tool, which allows a user to prepare/label their own data for training. To run the annotation tool, first download the sample data:
[user@cn3144 ~]$ cp -r $YOLO_DATA2/* .
This command will copy two folders, "Images" and "Labels", together with a text file, classes.txt, to your current folder. The data are a set of slides of cancer tissue images with two types of staining: "pink" and "blue". The data in the "Images" folder are formatted according to the requirements of the annotation tool; the "Labels" folder will be used to store the results of the annotation. Annotating the image data means manually specifying a bounding box that includes the object(s) on each image and then saving the coordinates of this box. To start the annotation GUI, type:
[user@cn3144 ~]$ annotation_tool.py 


Type "pinkblue" into the Image Dir window, and then click on "Load". The first of the images will appear. Using the slider, mark the positions of the upper left and lower right corners of the bounding box of an object by clicking on the appropriate locations in the image; then click the "Next >>" button to proceed to the next image, and so on.
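The annotation step above boils down to recording the two clicked corners as a box. As a hedged sketch (the function name and the exact label-file format used by the tool are not specified here), two clicked points can be normalized into a sorted, image-clamped bounding box like this:

```python
def corners_to_box(p1, p2, img_w, img_h):
    """Turn two clicked corner points (x, y) into (x_min, y_min, x_max, y_max),
    sorted so min <= max and clamped to the image boundaries."""
    clamp = lambda v, hi: max(0, min(v, hi))
    x1, x2 = sorted((clamp(p1[0], img_w), clamp(p2[0], img_w)))
    y1, y2 = sorted((clamp(p1[1], img_h), clamp(p2[1], img_h)))
    return x1, y1, x2, y2

# Clicks may arrive in either order and can fall slightly outside the image:
print(corners_to_box((250, 180), (40, -5), img_w=200, img_h=200))
```

Whatever the on-disk format, the saved coordinates in the "Labels" folder are what train.py later consumes as ground-truth boxes.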




Exit the interactive session:
[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$