Giving Robots an Infant’s Sense of Vision

Laksh Bhambhani
3 min read · Nov 29, 2020


After the first part of this mini series, which covered detecting human emotions through audio features, the natural next step is to teach the robot to detect emotion through image features. For the second part of my three-part mini series on AI & robotics, I am demonstrating an experimental Face Emotion Recognition project to explore its potential.

Here’s a quick overview of the project:

Project Description: Using a cascade classifier to detect faces in a frame and a custom sequential model to classify the emotion of each detected face in real time.

Data Set: Here is the link to the data

  • Includes male and female faces
  • Contains data split into train and validation sets, each further divided into angry, disgust, fear, happy, neutral, sad, and surprise

Furthermore, let’s divide the project into two parts:

  • Model: loading the data and building, training, and testing the model
  • Detecting faces: finding regions of interest (ROIs), i.e. faces, in a frame and running real-time emotion detection on them

Model

To prepare the data for our model, we need to create train and validation generators and read in each image from the data folder. We also rescale the images so their pixel values are easier to work with. We can use this code sample:
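Since the original code sample was embedded as an image, here is a minimal sketch of what those generators could look like; the folder paths, target size (48×48 grayscale, as in the FER-2013 layout), and batch size are assumptions rather than the exact values from the original post:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values from [0, 255] to [0, 1] so the network trains more easily
train_datagen = ImageDataGenerator(rescale=1.0 / 255)
validation_datagen = ImageDataGenerator(rescale=1.0 / 255)

# Read each image from the data folders; one subfolder per emotion class
train_generator = train_datagen.flow_from_directory(
    "data/train",
    target_size=(48, 48),
    color_mode="grayscale",
    batch_size=64,
    class_mode="categorical",
)

validation_generator = validation_datagen.flow_from_directory(
    "data/validation",
    target_size=(48, 48),
    color_mode="grayscale",
    batch_size=64,
    class_mode="categorical",
)
```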

Now, onto building the model:
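As a sketch only: a small sequential CNN for the seven emotion classes. The specific layers and filter counts below are illustrative assumptions, not the original architecture:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    # Convolutional blocks extract facial features from the 48x48 grayscale input
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),

    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),

    # Dense layers map the features to the seven emotion classes
    Flatten(),
    Dense(1024, activation="relu"),
    Dropout(0.5),
    Dense(7, activation="softmax"),  # one output per emotion
])
```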

Finally, if we are running this code for the first time, or the model needs to be retrained, this code sample should be used:
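A sketch of that training and saving step, assuming the generators and model defined above; the epoch count and weights filename (emotion_model.h5) are placeholders:

```python
# Compile with the Adam optimizer and categorical crossentropy loss
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Train on the generators built earlier
model.fit(
    train_generator,
    epochs=50,
    validation_data=validation_generator,
)

# Save the learned weights so later runs can load them instead of retraining
model.save_weights("emotion_model.h5")
```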

It uses the Adam optimizer and categorical crossentropy as its loss function. Finally, it stores the generated weights in a .h5 file so they can be loaded quickly on later runs.

Detecting Faces

Before we analyze emotions from faces in a frame, we need to start a video source, prepare the image, and find faces (regions of interest, or ROIs) in it. Sometimes we also just need to load the saved model instead of retraining it. We can use this code sample for all of that:
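A rough sketch of that setup, assuming the sequential model from the previous section and the Haar cascade file that ships with OpenCV; the weights filename matches the training sketch above:

```python
import cv2
import numpy as np

# Load the saved weights into the model built in the Model section
model.load_weights("emotion_model.h5")

# Haar cascade for frontal face detection (file bundled with OpenCV)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Start the default video source (webcam / Pi camera) and grab a frame
cap = cv2.VideoCapture(0)
ret, frame = cap.read()

# Convert to grayscale and find face regions of interest
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
```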

Finally, for every face found in the original frame, let’s predict its emotion and draw a labeled rectangle around it in the output frame.
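A sketch of that final step, reusing frame, gray, faces, and model from the previous snippet; the label order below is a hypothetical mapping and should match train_generator.class_indices:

```python
# Hypothetical label order; verify it against the training generator's class indices
emotions = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

for (x, y, w, h) in faces:
    # Crop the face ROI, resize it to the model's input size, and normalize it
    roi = gray[y:y + h, x:x + w]
    roi = cv2.resize(roi, (48, 48)).astype("float32") / 255.0
    roi = roi.reshape(1, 48, 48, 1)

    # Predict the emotion and draw the result on the output frame
    prediction = model.predict(roi)
    label = emotions[int(np.argmax(prediction))]

    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, label, (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

# Show the annotated frame (press any key to close the window)
cv2.imshow("Emotion Detection", frame)
cv2.waitKey(0)
```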

Conclusion

Similar to how an infant starts crying when you look at them with an angry face, this model lets robots recognize and respond to our emotions as they look at us. It also complements the voice-based emotion recognition from the previous part of this mini series.

This code was tested on Shelbot, a Raspberry Pi 4-powered humanoid and one of my ongoing projects.

Full code for this project can be found here.

The next part of this mini series will focus on developing another of an infant’s five senses.

About Me

Github: https://github.com/LakshBhambhani

LinkedIn: https://www.linkedin.com/in/lakshbhambhani/

Laksh Bhambhani is a Student Ambassador in the Inspirit AI Student Ambassadors Program. Inspirit AI is a pre-collegiate enrichment program that exposes curious high school students globally to AI through live online classes. Learn more at https://www.inspiritai.com/.
