
Predicting Traffic Signs with Neural Networks

This is a brief presentation of my analysis; click here for my full iPython notebook.

Introduction

There's nothing more important than safety when you're driving a vehicle. Have you ever commuted to work, school, or another familiar place, only to arrive with no recollection of what happened during your drive? Do you feel safe? I trust myself, so yes. But not always. Our ability to do this comes from our brain using past experience to guide our driving while it keeps learning from our surroundings. We get used to things like weather, traffic signs, motion, and other objects that we (hopefully) react to. Our brain uses all of that to predict what we're likely to do next, which could be as simple as driving to work while avoiding obstacles. But what if one day we weren't entirely safe, because fatigue or some other factor kept our brain from reacting fast enough?

This is where autonomous vehicles come in. An autonomous vehicle can sense its environment and navigate successfully without human input. Giving vehicles the ability to think and to recall past events so they can learn how to drive could drastically change the world. To do this, we can use neural networks to give a car abilities such as image recognition, so it can learn to label objects much faster and more accurately than a human brain ever could.

Methods

To teach our machine to make predictions with a neural network, we are going to use deep learning with TensorFlow. Deep learning is a field of machine learning that uses algorithms inspired by how neurons function in the human brain. TensorFlow is a machine learning framework that Google created to design, build, and train deep learning models. The name "TensorFlow" comes from how neural networks perform operations on multidimensional data arrays, or tensors. It's a flow of tensors, just like the flow of signals between neurons in the human brain!
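To make that concrete, here is a minimal sketch of the kind of model we will build, written with TensorFlow's Keras API. The 32x32 grayscale input size and the single dense layer are illustrative assumptions rather than the exact architecture from my notebook; the 62 outputs correspond to the 62 sign labels in the dataset described below.

```python
import tensorflow as tf

# Minimal sketch (not the exact model from the notebook): a single
# fully connected layer mapping 32x32 grayscale images to 62 sign classes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32)),    # flatten each image into a 1-D vector
    tf.keras.layers.Dense(62, activation="softmax"),  # one output per traffic-sign label
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # labels are integer class ids (0..61)
    metrics=["accuracy"],
)
```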

Data

We will be using photos of traffic signs taken in Belgium to train and test our neural network. Click here for the source. Our training set contains 4575 images spread across 62 different labels, and a separate test set will let us measure prediction accuracy.
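To show how the data comes in, here is a hedged loading sketch. It assumes the dataset is unpacked into one subdirectory per label (00000, 00001, ...) full of .ppm files, and the directory path at the bottom is a placeholder; point it at wherever you unzip the download.

```python
import os

from skimage import io  # scikit-image reads the dataset's .ppm files


def load_data(data_dir):
    """Load images and integer labels, assuming one subdirectory per class."""
    images, labels = [], []
    for class_dir in sorted(os.listdir(data_dir)):
        class_path = os.path.join(data_dir, class_dir)
        if not os.path.isdir(class_path):
            continue
        for file_name in os.listdir(class_path):
            if file_name.endswith(".ppm"):
                images.append(io.imread(os.path.join(class_path, file_name)))
                labels.append(int(class_dir))  # the directory name doubles as the label
    return images, labels


# Placeholder path; adjust to your local copy of the training archive.
train_images, train_labels = load_data("BelgiumTSC_Training/Training")
```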

Notice in the sample images above how each one has different lighting, shape, angle, and size? Are these features relevant for us to understand what each sign means? For most of us, yes; they give us extra certainty when fatigue or stress dulls our judgment. But for a neural network? Probably not. We need to do some feature extraction to help our machine make faster predictions. A few things we can do are resize, recolor, and flatten (a rough sketch follows the examples below).

Before Feature Extraction (What we like)

After Feature Extraction (What computers like)
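Here is a rough sketch of that feature extraction using scikit-image. The 32x32 target size and the grayscale conversion are assumptions standing in for "resize" and "recolor"; the flattening step is handled by the Flatten layer in the model sketch above.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize

# Bring every sign to the same 32x32 shape, drop colour, and stack into arrays.
# (The exact target size is an assumption, not taken from the notebook.)
processed = [rgb2gray(resize(img, (32, 32))) for img in train_images]

X_train = np.array(processed)     # shape: (4575, 32, 32)
y_train = np.array(train_labels)  # integer labels 0..61
```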

Evaluating our Neural Network

Here, we take unlabeled images from our test set, feed them to the network trained on our 4575 labeled training images, and see how it performs. Let's take 10 random photos and see how it does.
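A sketch of that spot check, continuing from the earlier snippets: it assumes a test set loaded and preprocessed exactly like the training data into X_test and y_test, and the epoch count is a placeholder rather than the value from my notebook.

```python
import random

import numpy as np

# Train on the processed training set (epoch count is a guess, not from the notebook).
model.fit(X_train, y_train, epochs=10)

# Pick 10 random test images and compare predictions against the true labels.
sample_idx = random.sample(range(len(X_test)), 10)
sample_images = X_test[sample_idx]
sample_labels = y_test[sample_idx]

predictions = np.argmax(model.predict(sample_images), axis=1)

for actual, guess in zip(sample_labels, predictions):
    print(f"actual: {actual}  predicted: {guess}")
print(f"{int(np.sum(predictions == sample_labels))} / 10 correct")
```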

Out of 10 photos, it got 9 right! Nice. At least it didn't miss a stop sign, or that would have been a real problem.

Conclusion

With the rise of virtual reality, the near-limitless availability of data, and rapidly developing artificial intelligence, we should use everything we can to improve our lives. I feel that safety is essential to anything that moves and comes into contact with human beings. Neural networks can help us build better A.I. for cars. What could be next?
