Helping Robots Know and Recognize their Loved Ones

Laksh Bhambhani
3 min read · Dec 12, 2020

After the first and second parts of this mini-series on infant robots detecting human emotions through audio and facial features, we want the robot to know and recognize the people it talks to, just like a human infant does. For the third and final part of my mini-series on AI & robotics, I am demonstrating an experimental face recognition project to explore its potential.

Here’s a quick overview of the project:

Project Description: Using the face-recognition library to detect face locations and encodings in a frame, and compare them to known encodings to recognize people in real time.

Data Set: Since this project focuses on recognizing specific people, there isn’t a fixed data set to use. Instead, you provide images of the people you want the robot to recognize.

Let’s divide the project into two parts:

  • Creating face encodings: Finding face locations and encodings using images of known people, downloaded from Google or stored locally
  • Real time face recognition: Comparing faces found in the frame to known encodings and finding a match based on that

Creating Face Encodings

Before we can match faces found in the frames, we need to find and store face encodings in an object since this library uses those to compare and match faces.

“For face recognition, the algorithm notes certain important measurements on the face — like the color and size and slant of eyes, the gap between eyebrows, etc. All these put together define the face encoding — the information obtained out of the image — that is used to identify the particular face.”

We can use this code sample:

Alternatively, you can download images in bulk or manually and loop through them to build encodings for the people in those photos (depending on what you download, these will mostly be celebrities).

Real Time Face Recognition

For recognizing people in real time, we open a camera source to continuously capture frames and find all faces in them. Those faces are then converted to encodings, which are matched against the known encodings by finding the ‘distance’, or difference, between them. Finally, you get the person’s name using the index returned by the comparison. Use this code sample:

Conclusion

Similar to how an infant starts to know his/her parents within a year, this code can be used by a robot to know and recognize its ‘loved ones’.

This code was tested on Shelbot, a Raspberry Pi 4 powered humanoid and one of my ongoing projects.

Full code for this project can be found here.

Laksh Bhambhani is a Student Ambassador in the Inspirit AI Student Ambassadors Program. Inspirit AI is a pre-collegiate enrichment program that exposes curious high school students globally to AI through live online classes. Learn more at https://www.inspiritai.com/.