Google investigates neural networks to better detect pedestrians in real time

Caltech has developed a benchmark to measure the effectiveness of real-time pedestrian detection systems. Systems must pass a difficult test: "watch" 10 hours of video recorded from a car, roughly 250,000 frames in which some 2,300 different pedestrians appear. It is a hard test, where failure rates are usually in the 50% range, a fact that shows there is still much work to do before these systems are truly effective.
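As a rough illustration of what that failure rate means, the sketch below computes a miss rate: the fraction of annotated pedestrians a detector fails to find. The 2,300-pedestrian count and the roughly 50% figure come from the article; the detection count is made up for the example, and real benchmarks use more elaborate per-frame matching rules.

```python
def miss_rate(total_pedestrians: int, detected: int) -> float:
    """Fraction of labeled pedestrians the system failed to detect."""
    return (total_pedestrians - detected) / total_pedestrians

# A typical system missing about half of 2,300 labeled pedestrians
# (the detected count here is hypothetical):
print(f"{miss_rate(2300, 1150):.1%}")  # 50.0%
```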

Google, with its autonomous car, is one of the companies making the most progress in research on automatic detection of pedestrians on the road. This week it presented a paper explaining how, using neural networks, it has been able to reduce the failure rate to 26.2%.

More accurate, but still not enough

One of the common problems in real-time detection is the tension between two variables: speed versus accuracy. For example, there are systems that can detect people at 100 frames per second, but their hit rate is very low. Google, in this case, has reduced failures considerably at a slower but still sufficient rate: 15 fps.
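To put those frame rates in perspective, a minimal sketch of the arithmetic: the per-frame time budget each rate implies. The 100 fps and 15 fps figures are from the article; the comparison itself is our own illustration, not something from Google's paper.

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to process a single frame at a given rate."""
    return 1000.0 / fps

# A fast but inaccurate detector vs. Google's reported rate:
print(f"100 fps detector: {frame_budget_ms(100):.1f} ms/frame")  # 10.0 ms
print(f" 15 fps detector: {frame_budget_ms(15):.1f} ms/frame")   # 66.7 ms
```

At 15 fps the network gets roughly six times more time per frame, which is where the extra accuracy can be spent.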



As Google explains, achieving both values at once is the great challenge for this technology, and neural networks have improved efficiency substantially. To make this possible, they used an Nvidia Tesla K20 GPU: a high-performance component, but one that, in the context where it is usually deployed (supercomputers), sits two models below the top of the range, the Nvidia K80.

There is also an intermediate model between the two, the K40. In any case, it is interesting that this new algorithm, using a powerful GPU but not the best in its category, has cut failures in half. This makes us wonder whether the figure could be improved further with the higher-end models.

The research presented is a step forward for automatic detection, and those working on it believe there is still room for improvement in the algorithm they created. Tests on the Caltech benchmark should also serve to analyze the failures, see where the system went wrong, and find ways to increase its accuracy.