Danny Malter

Data Science Manager - Accenture
M.S. in Predictive Analytics - DePaul University

Using YOLO for Object Detection

This post walks through the steps for using YOLO (You Only Look Once) for video object detection. The demonstration was run on a g3.8xlarge Deep Learning AMI (Ubuntu) Version 6.0 (ami-bc09d9c1). The weights for this particular YOLO model were trained on the COCO dataset, which consists of 182 different labels.

Requirements

GitHub Repo
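
If you are setting up a fresh instance, the rough sketch below covers cloning the repo and installing dependencies. It assumes the standard thtrieu/darkflow repository linked above, and the package list (Cython, numpy, OpenCV, TensorFlow) is an assumption; check the repo's README for the versions that match your environment.

# Clone darkflow and install the Python packages it depends on (assumed list)
git clone https://github.com/thtrieu/darkflow.git
pip install Cython numpy opencv-python tensorflow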

Build the Library

cd darkflow
python setup.py build_ext --inplace
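
If the Cython build gives you trouble, the darkflow README also documents installing the package with pip from the repository root instead, for example in editable mode:

pip install -e .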

Processing a Video

The code below assumes a video called video1.mp4 has been uploaded to the data folder. You will also need to make sure that you have the proper cfg and weights files in their respective folders. In this case, the model I am using is yolo.cfg with yolov2.weights.

If you are using a CPU, you can get rid of the --gpu flag, and the threshold can be changed or removed. For a 1-minute video on a g3.8xlarge machine, it took roughly 10 minutes for the full video to be processed at a rate of 3.04 frames per second.

python flow --model cfg/yolo.cfg --load bin/yolov2.weights --demo data/video1.mp4 --threshold 0.25 --gpu 1.0 --saveVideo
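
If you would rather call the model from Python instead of the command line, darkflow also exposes a TFNet class. The sketch below mirrors the model, weights, and threshold from the command above and runs detection on a single frame read with OpenCV; the frame path is a placeholder, and for a full video you would loop over frames with cv2.VideoCapture.

from darkflow.net.build import TFNet
import cv2

# Same model, weights, threshold, and GPU setting as the command-line call above
options = {"model": "cfg/yolo.cfg",
           "load": "bin/yolov2.weights",
           "threshold": 0.25,
           "gpu": 1.0}
tfnet = TFNet(options)

# Run detection on one frame; each result is a dict with a label,
# a confidence score, and bounding box corners
frame = cv2.imread("data/sample_frame.jpg")  # placeholder path
results = tfnet.return_predict(frame)
print(results)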

Results

YOLO Example

Video from MLB.com. Full video can be seen here.


Walking down Michigan Avenue in Chicago.

