Face Recognition based Authentication System using IoT

May 14, 2020 | Projects

Pi Camera → face recognized → device unlocked for 10 sec

Introduction:

In this project, the goal is to grant access to our target device only to those people whose faces have been added as authorized users in our system.

  • If a known face is detected, the target device is unlocked for the next few seconds and auto-locks after that period.
  • The photo of the last authorized person who accessed the device is also stored on the io.adafruit.com server for manual verification, if required.
  • If an unknown face is detected, the target device remains locked.
  • The photo of the last unauthorized person who tried to access the device is saved on the local server, as well as uploaded to io.adafruit.com for manual verification.

This system can be installed at any security checkpoint. For example:

  • A door lock which opens for authorized persons only

Code:

You can get the entire code from the below Github Repository link: https://github.com/htgdokania/Face_Recognition_based_Security_check

Steps to follow:

  • STEP 1: Send images from the Raspberry Pi to a local server (in my case, an Ubuntu Desktop).
  • STEP 2: Recognize faces in the frame and set authentication accordingly.
  • STEP 3: Send the detected face along with authentication to io.adafruit.com.
  • STEP 4: Read the updated values from io.adafruit.com and turn the target device on/off. Also, auto-lock the device after an interval of 10 sec for added security.
  • STEP 5: Add a Manual Assistance button to turn the device on/off.

STEP 1: Send images from the Raspberry Pi to a server (in my case, an Ubuntu Desktop)

  • First, set up the Raspberry Pi with the Raspbian operating system. Refer here.
  • Also, connect the Pi camera to the Raspberry Pi and make sure it is enabled (refer here). A quick sanity check is sketched below.
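As a quick sanity check (assuming a Raspbian image with the legacy camera stack that picamera requires), you can enable the camera and grab a test still from a terminal on the Pi:

$ sudo raspi-config        # Interfacing Options > Camera > Enable, then reboot
$ pip3 install picamera
$ raspistill -o test.jpg   # a test image appears if the camera works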

Now, we need two scripts:

  • A server (on a reasonably fast machine; in my case, an Ubuntu Desktop) that listens for a connection from the Raspberry Pi, and
  • A client that runs on the Raspberry Pi and sends a continuous stream of images to the server.

For reference, read the picamera documentation (here).

NOTE: Always run the server first, before running the client code on the Raspberry Pi.

Code for client.py (run on the Raspberry Pi):

import io
import socket
import struct
import time
import picamera

# create a socket and connect to the server
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(('192.168.31.7', 8000))  # replace with the server IP address
connection = client_socket.makefile('wb')

try:
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)      # Pi camera resolution
        camera.framerate = 15               # 15 frames/sec
        time.sleep(2)                       # give the camera 2 secs to initialize
        start = time.time()
        stream = io.BytesIO()

        # send a JPEG video stream: each frame is prefixed with its
        # length as a 4-byte little-endian integer
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()
            stream.seek(0)
            connection.write(stream.read())
            stream.seek(0)
            stream.truncate()
    # a zero length signals the end of the stream
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
Run client.py from a terminal on the Raspberry Pi.
  • Code for the server part (run on the Ubuntu Desktop): for the server part we have written main.py, which reads frames over the network stream and processes them for further action (explained in the steps below).

STEP 2: Recognize faces in the frame (if any) and grant authentication accordingly (server-side main.py)

First, install the required modules on your server (Ubuntu Desktop). Refer here and here.

For this simply open a terminal and run these commands:

$ pip3 install opencv-python
$ pip3 install numpy
$ pip3 install face_recognition
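Note: pip builds face_recognition's dlib dependency from source, which can fail if the build tools are missing. On Ubuntu, installing them first usually helps (a general fix, not specific to this project):

$ sudo apt-get install build-essential cmake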

Import the required modules:

import numpy as np
import cv2
import socket
import face_recognition
import os
import time
from data_feed import send_image, authorize  # written/explained later

Next, we define a class named SecurityCheck(). The required functions are defined within it.

  • The first step is to Initialize the required variables for the face recognition part as well as the image streaming part within __init__().
  • Also, call load_known_faces() to load authorized face data.
  • Finally, call the streaming() function to start reading the frames.
class SecurityCheck(object):
    def __init__(self, host, port):
        self.start=time.time()
        
        # Initialize some variables for face Recognition part
        self.process_this_frame=0
        self.face_locations = []
        self.face_encodings = []
        self.face_names = []
        # Network streaming  part
        self.server_socket = socket.socket()
        self.server_socket.bind((host, port))
        self.server_socket.listen(0)
        self.connection, self.client_address = self.server_socket.accept()
        self.connection = self.connection.makefile('rb')
        self.host_name = socket.gethostname()
        self.host_ip = socket.gethostbyname(self.host_name)

        #Call the function to load known face data 
        self.load_known_faces()
        #start streaming
        self.streaming()

Next, we define the load_known_faces() function, which loads every face image inside the folder and registers it as an authorized face.

    def load_known_faces(self):
        ## Load the known_faces images data from the folder
        folder="known_faces"
        self.known_face_names=[]
        self.known_face_encodings=[]

        for filename in os.listdir(folder):
            (file, ext) = os.path.splitext(filename)
            self.known_face_names.append(file)
            image=face_recognition.load_image_file(folder+'/'+filename)
            print(folder+'/'+filename)
            # face_encodings() returns a list; [0] assumes exactly one face per image
            known_face_encoding = face_recognition.face_encodings(image)[0]
            self.known_face_encodings.append(known_face_encoding)

For this to work, we need to add a single image of each authorized person to a folder named “known_faces”. The filename should be the name of the person in the image, for example harsh.png.

known_faces folder contents
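Since face_encodings() returns an empty list when no face is detectable, a bad photo in this folder would crash load_known_faces() with an IndexError. A minimal sketch to validate the folder beforehand (check_faces.py is a hypothetical helper, not part of the repository):

import os
import face_recognition

folder = "known_faces"
for filename in os.listdir(folder):
    image = face_recognition.load_image_file(os.path.join(folder, filename))
    encodings = face_recognition.face_encodings(image)
    if len(encodings) != 1:
        # 0 means no detectable face; more than 1 means the wrong face might get enrolled
        print(filename, ": expected 1 face, found", len(encodings))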

Next, let's define the streaming() function to start reading frames from the Raspberry Pi camera. Each frame is then processed to look for faces.

  • In the code below, once cv2.imdecode() returns, we have successfully loaded a frame on the server from the Pi camera. (Note that the server does not use the length prefix sent by the client; it simply scans the byte stream for the JPEG start marker 0xFFD8 and end marker 0xFFD9.)
  • Next, we call process_frame() to look for authorized faces and update the “face_names” list.
  • Finally, we call the display_frame() function and send data to the server (io.adafruit.com).
    def streaming(self):
        try:
            print("Host: ", self.host_name + ' ' + self.host_ip)
            print("Connection from: ", self.client_address)
            print("Streaming...")
            print("Press 'q' to exit")
            
            # accumulate raw bytes from the network stream
            stream_bytes = b''
            while True:
                stream_bytes += self.connection.read(1024)
                first = stream_bytes.find(b'\xff\xd8')
                last = stream_bytes.find(b'\xff\xd9')

                if first != -1 and last != -1:
                    jpg = stream_bytes[first:last + 2]
                    stream_bytes = stream_bytes[last + 2:]
                    self.frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                    self.frame = cv2.flip(self.frame, 0)  # correct camera orientation (in my case)
                    # at this point we have our image frame;
                    # call the function to recognize faces
                    self.process_frame()
                    #display
                    self.display_frame()

                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        break                    
        finally:
            self.connection.close()
            self.server_socket.close()

Next, define the process_frame() function to detect and recognize faces.

  • The part below updates the face_names list along with the face_locations list.
  • Unauthorized faces will be named “Unknown”, while authorized faces will be labeled with their known names.
    def process_frame(self):
        speed = 3  # run face recognition on every third frame only, to save CPU
        small_frame = cv2.resize(self.frame, (0, 0), fx=0.25, fy=0.25)  # 1/4-size frame
        rgb_small_frame = small_frame[:, :, ::-1]  # convert BGR to RGB

        if (self.process_this_frame % speed) == 0:
            # Find all the faces and face encodings in the current frame of video
            self.face_locations = face_recognition.face_locations(rgb_small_frame)
            self.face_encodings = face_recognition.face_encodings(rgb_small_frame, self.face_locations)
            self.face_names = []
            for face_encoding in self.face_encodings:
                # See if the face is a match for the known face(s)
                matches = face_recognition.compare_faces(self.known_face_encodings, face_encoding)
                name = "Unknown"
                face_distances = face_recognition.face_distance(self.known_face_encodings, face_encoding)
                best_match_index = np.argmin(face_distances)
                if matches[best_match_index]:
                    name = self.known_face_names[best_match_index]  # rename to the matched name
                self.face_names.append(name)

        # advance the frame counter (wraps back to 0 after `speed` frames)
        self.process_this_frame = (self.process_this_frame + 1) % speed
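If you run into false positives, note that compare_faces() accepts an optional tolerance parameter (the library's default is 0.6; lower is stricter), so tightening the match is a one-line change:

                # stricter matching: only accept faces within distance 0.5
                matches = face_recognition.compare_faces(
                    self.known_face_encodings, face_encoding, tolerance=0.5)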

  • Next, the display_frame() function modifies the frame, drawing a box and a name label around each recognized face.
  • From within this function we also call send_adafruit() with the name of the detected face. This sends the photo/current frame, along with the authentication status, to the server at io.adafruit.com.
    def display_frame(self):
        # Display the results
        for (top, right, bottom, left), name in zip(self.face_locations, self.face_names):
            # send the name and frame to the Adafruit IO server
            self.send_adafruit(name)
            # Scale back up face locations since the frame we detected in was scaled to 1/4 size
            top *= 4
            right *= 4
            bottom *= 4
            left *= 4
            # Draw a box around the face
            cv2.rectangle(self.frame, (left, top), (right, bottom), (0, 0, 255), 2)
            # Draw a label with a name below the face
            cv2.rectangle(self.frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(self.frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
        # Display the resulting image
        cv2.imshow('Video', self.frame)

STEP 3: Send the detected face along with authentication to io.adafruit.com

  • Inside the send_adafruit() function, we first check whether the last call was at least 5 secs ago.
  • If it was, the photo along with the authorization status is sent to the server, for both known and unknown faces.
    def send_adafruit(self, name):
        check = time.time() - self.start
        print("check=", check)
        if check > 5:  # send at most once every 5 secs
            if name == 'Unknown':
                authorize(0)
                send_image(self.frame, name)
                cv2.imwrite('unknown_faces/unknown' + '{}.png'.format(int(time.time())), self.frame)
            else:
                authorize(1)
                send_image(self.frame, name)
            self.start = time.time()  # reset the timer

When an unauthorized/unknown person is detected, we also save the frame on our local server (Ubuntu Desktop) within the “unknown_faces” folder, along with a timestamp (shown below).

Unknown faces are also saved on the local server (Desktop).

if __name__ == '__main__':
    # host, port
    h, p = '', 8000  # '' means listen on all network interfaces
    SecurityCheck(h, p)

cv2.destroyAllWindows()  # finally, close all windows
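To recap the run order from the note in STEP 1, start the server before the client:

$ python3 main.py      # first, on the Ubuntu Desktop (server)
$ python3 client.py    # then, on the Raspberry Pi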

The functions “authorize” and “send_image” are defined in data_feed.py, which we imported at the beginning.

Now, we will write the code for data_feed.py
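data_feed.py uses the official Adafruit IO Python client, which the earlier installs did not cover; if it is missing:

$ pip3 install adafruit-io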

  • For this part, we first need to register a free account at io.adafruit.com (click here).
  • Next, create three feeds, namely “known”, “unknown”, and “lock”, and complete the setup by following the steps below:

Go to Dashboards and click Actions > Create a New Dashboard, fill in the required information, and click on Create.

Next, create a new block to display the received image and associate it with the feed name ‘known’:

  • Select the Image block by clicking on it.
  • Select the feed name “known” to be associated with this block (you can create a new feed by typing a new name and clicking Create). Then click on Next step.
  • Type in the block title “Last known_face”.

Similarly, create another image block and associate it with the feed name ‘unknown’:

  • Select the feed name “unknown” to be associated with this block (you can create a new feed by typing a new name and clicking Create). Then click on Next step.
  • Finally, click on Create/update block.

Next, we need one indicator block that shows the status of our device (On/Off):

  • Select the Indicator block by clicking on it.
  • Select the feed name “lock” to be associated with this block (you can create a new feed by typing a new name and clicking Create). Then click on Next step.
  • Set the condition: here, if the value of the ‘lock’ feed is > 0, the indicator is on; otherwise it is off. So at 1 it is on, and at 0 it is off. Finally, click on Create block.

This is how the screen looks after creating the above three blocks and sending the data accordingly.
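Alternatively, if you prefer creating the three feeds from code rather than through the dashboard, the Adafruit IO Python client can do it. A minimal sketch (assuming your credentials are filled in, as in data_feed.py below); the dashboard blocks themselves still have to be added through the web UI:

from Adafruit_IO import Client, Feed, RequestError

aio = Client('USERNAME', 'YOUR_AIO_KEY')

# create the three feeds used in this project, skipping any that already exist
for name in ('known', 'unknown', 'lock'):
    try:
        aio.feeds(name)                  # raises RequestError if the feed does not exist
    except RequestError:
        aio.create_feed(Feed(name=name))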
  • The part below is called from within the send_adafruit() function mentioned above. It sends the image and the status value to unlock the device.

The code for “data_feed.py” is written below:

import time
import base64
import os
import cv2
from Adafruit_IO import Client, Feed, RequestError

ADAFRUIT_IO_KEY = 'YOUR_AIO_KEY'
ADAFRUIT_IO_USERNAME = 'USERNAME'
aio = Client(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)

def send_image(frame, name):
  frame = cv2.resize(frame, (300, 300))
  cv2.imwrite(name + '.jpg', frame)
  print('Camera: SNAP! Sending photo...')
  cam_feed = aio.feeds('known')
  if name == 'Unknown':
    cam_feed = aio.feeds('unknown')

  with open(name + '.jpg', "rb") as imageFile:
      image = base64.b64encode(imageFile.read())  # encode the b64 bytearray as a string for adafruit-io
      image_string = image.decode("utf-8")
      try:
        aio.send(cam_feed.key, image_string)
        print('Picture sent to Adafruit IO')
      except Exception:
        print('Sending to Adafruit IO Failed...')
  time.sleep(2)  # throttle uploads so the Adafruit IO rate limit is not hit

def authorize(status):
  print('Camera: SNAP! Sending info...')
  lock_feed = aio.feeds('lock')
  try:
    aio.send(lock_feed.key, status)
    print('Authorization sent to Adafruit IO')
  except Exception:
    print('Sending to Adafruit IO Failed...')
  time.sleep(2)  # throttle updates so the Adafruit IO rate limit is not hit

Replace the username and AIO key in the above code with your credentials.
You can get them from here.
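To avoid hardcoding credentials in the scripts, one common alternative (not part of the original code) is to read them from environment variables:

import os

ADAFRUIT_IO_KEY = os.environ['ADAFRUIT_IO_KEY']          # export these before running
ADAFRUIT_IO_USERNAME = os.environ['ADAFRUIT_IO_USERNAME']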

STEP 4: Read updated values from io.adafruit.com

  • Turn the target device on/off based on the values read from the server.
  • Also, auto-lock the device 10 sec after it is unlocked, for added security.

For now, we have connected green and red LEDs through 220 Ω resistors to the Raspberry Pi's GPIO pins to represent the device status. (A quick wiring test is sketched below.)
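Before running the main loop, a quick blink test (assuming the same BCM pins 26 and 19 that lock_unlock.py below uses) confirms the LEDs are wired correctly:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
for pin in (26, 19):              # green LED on BCM 26, red LED on BCM 19
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.HIGH)   # LED on
    time.sleep(1)
    GPIO.output(pin, GPIO.LOW)    # LED off
GPIO.cleanup()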

  • The code below, “lock_unlock.py”, should run on the Raspberry Pi.
import time
import base64
import os
from Adafruit_IO import Client, Feed, RequestError
import RPi.GPIO as GPIO

ADAFRUIT_IO_KEY = 'YOUR_AIO_KEY'
ADAFRUIT_IO_USERNAME = 'USERNAME'

aio = Client(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
LED_Green=26
LED_Red=19
GPIO.setup(LED_Green,GPIO.OUT)
GPIO.setup(LED_Red,GPIO.OUT)
lock_feed=aio.feeds('lock')

while True:
    try:
        print('processing..')
        # read the most recent value of the 'lock' feed
        status = aio.receive(lock_feed.key).value
        print(status)
        if status == '1':
            GPIO.output(LED_Green, GPIO.HIGH)
            GPIO.output(LED_Red, GPIO.LOW)
            time.sleep(10)
            # auto-lock 10 sec after unlocking, by writing 0 back to the feed
            aio.send(lock_feed.key, 0)
        else:
            GPIO.output(LED_Green, GPIO.LOW)
            GPIO.output(LED_Red, GPIO.HIGH)

    except Exception:
        print('Failed...')

    time.sleep(4)
Note: use a separate terminal to run this code.
Replace the username and AIO key in the above code with your credentials.

The video below demonstrates: face recognition → device ON → 10 sec interval → device OFF.

Green represents the device unlocked when an authorized face is detected.
Red represents the locked device (auto-locks after 10 secs).

STEP 5: Add a Manual Assistance button to turn the device ON/OFF

There might be situations when we need to grant access to an unknown person.

  • This can be done manually with a simple toggle button on the server (io.adafruit.com).
  • Toggling this button fires a trigger that changes the value of the ‘lock’ feed, which in turn updates the indicator block.
  • The lock_unlock.py code running on the Pi then updates the LED status. (A quick way to test this from code is sketched below.)
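As a quick test (independent of the dashboard button), writing to the ‘lock’ feed directly from any machine with your credentials has the same effect:

from Adafruit_IO import Client

aio = Client('USERNAME', 'YOUR_AIO_KEY')
aio.send('lock', 1)   # unlock; lock_unlock.py on the Pi will auto-lock after 10 sec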

First, we need to add a new toggle button block by following the steps below:

  • Select the Toggle button block by clicking on it.
  • Select the new feed name “lockdevice” to be associated with this block (you can create a new feed by typing a new name and clicking Create). Then click on Next step.
  • Enter a block title as shown and click on Create block.

This is how the dashboard looks finally.

Now, trigger the ‘lock’ feed when the Manual Assistance button is toggled.

For this, we first need to create a new trigger, as shown below, to set the ‘lock’ feed value to 1 when the button is set to ON:

  • Click on Actions > Create a New Trigger.
  • Select Reactive Trigger.
  • Fill in the conditions and body (as shown here).

Similarly, create another trigger to set the ‘lock’ feed value to 0 when the button is set to OFF:

  • Fill in the conditions and body (as shown here).

These are the two triggers we need for the toggle button's ON and OFF actions. When the Manual Assistance button is set to ON, the indicator status changes to green (as the ‘lock’ feed becomes 1); when it is set to OFF, the indicator changes to red (as the ‘lock’ feed becomes 0).

THAT'S IT!! The system is ready to be deployed at the desired location.
