Introduction:
In this project, our aim is to grant access to the target device only to those persons whose faces have been added as authorized users in our system.
- If a known face is detected, the target device is unlocked for the next few seconds and auto-locks after that period.
- The photo of the last authorized person who accessed the device is also stored on the server (io.adafruit.com) for manual verification, if required.
- If an unknown face is detected, the target device remains locked.
- The photo of the last unauthorized person who tried to access the device is also saved on the local server, as well as updated at io.adafruit.com for manual verification.
This system can be installed at any security checkpoint, for example:
- A door lock which opens for authorized persons only
Code :
You can get the entire code from the GitHub repository: https://github.com/htgdokania/Face_Recognition_based_Security_check
Steps to follow:
- STEP 1: Send images from the Raspberry Pi to a local server (in my case, an Ubuntu desktop).
- STEP 2: Recognize faces in the frame and set authentication accordingly.
- STEP 3: Send the detected face along with the authentication status to io.adafruit.com.
- STEP 4: Read the updated values from io.adafruit.com and turn the target device on/off. Also, auto-lock the device after an interval of 10 secs for added security.
- STEP 5: Add a Manual Assistance button to turn the device on/off.
STEP 1: Send Image from Raspberry Pi to a Server (in my case, an Ubuntu desktop)
- First, set up the Raspberry Pi with the Raspbian operating system. Refer here.
- Also, install the Pi camera on the Raspberry Pi and make sure it is enabled. Refer here.
Now, we need two scripts:
- A server (presumably on a fast machine; in my case, an Ubuntu desktop) which listens for a connection from the Raspberry Pi, and
- A client that runs on the Raspberry Pi and sends a continuous stream of images to the server.
For reference, read the picamera documentation (HERE).
NOTE: Always run the server first, before running the client code on the Raspberry Pi.
Code for client.py (run on the Raspberry Pi):
import io
import socket
import struct
import time
import picamera

# Create a socket and connect to the server
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(('192.168.31.7', 8000))  # replace with the server IP address
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)  # Pi camera resolution
        camera.framerate = 15  # 15 frames/sec
        time.sleep(2)  # give the camera 2 secs to initialize
        start = time.time()
        stream = io.BytesIO()
        # send a JPEG-format video stream
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # write the frame length first, then the JPEG data
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()
            stream.seek(0)
            connection.write(stream.read())
            stream.seek(0)
            stream.truncate()
    # a zero length signals the end of the stream
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
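The client frames each JPEG with a 4-byte little-endian length header (struct format `'<L'`), and a zero length marks the end of the stream. A minimal sketch of parsing that wire format (`read_frame` is a hypothetical helper, not part of the project code; the payload bytes here are fake):

```python
import io
import struct

def read_frame(stream):
    """Read one length-prefixed JPEG from a file-like object.
    Returns None when the zero-length end-of-stream marker is seen."""
    header = stream.read(struct.calcsize('<L'))  # 4 bytes
    image_len = struct.unpack('<L', header)[0]
    if image_len == 0:
        return None
    return stream.read(image_len)

# Simulate the wire format the client produces
buf = io.BytesIO()
payload = b'\xff\xd8fake jpeg data\xff\xd9'
buf.write(struct.pack('<L', len(payload)))  # length header
buf.write(payload)                          # JPEG bytes
buf.write(struct.pack('<L', 0))             # end-of-stream marker
buf.seek(0)

assert read_frame(buf) == payload
assert read_frame(buf) is None
```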
- Code for the server part (run on the Ubuntu desktop): for the server part we have written main.py, in which we read frames over the network stream and process them for further actions (explained in the next steps).
STEP 2: Recognize faces in the frame (if any) and grant authentication accordingly (server-side main.py)
First, install the required modules on your server (Ubuntu desktop). Refer here and here.
For this, simply open a terminal and run these commands:
$ pip3 install opencv-python
$ pip3 install numpy
$ pip3 install face_recognition
Import the required modules:
import numpy as np
import cv2
import socket
import face_recognition
import os
import time
from data_feed import send_image, authorize  # written/explained later
Next, we define a class named SecurityCheck(). The required functions are defined within it.
- The first step is to initialize the required variables for the face recognition part, as well as for the image streaming part, within __init__().
- Also, call load_known_faces() to load the authorized face data.
- Finally, call the streaming() function to start reading frames.
class SecurityCheck(object):
    def __init__(self, host, port):
        self.start = time.time()
        # Initialize some variables for the face recognition part
        self.process_this_frame = 0
        self.face_locations = []
        self.face_encodings = []
        self.face_names = []
        # Network streaming part
        self.server_socket = socket.socket()
        self.server_socket.bind((host, port))
        self.server_socket.listen(0)
        self.connection, self.client_address = self.server_socket.accept()
        self.connection = self.connection.makefile('rb')
        self.host_name = socket.gethostname()
        self.host_ip = socket.gethostbyname(self.host_name)
        # Call the function to load known face data
        self.load_known_faces()
        # Start streaming
        self.streaming()
Next, we define the load_known_faces() function, which loads the data for all the faces present inside the folder and marks them as authorized faces.
    def load_known_faces(self):
        # Load the known_faces image data from the folder
        folder = "known_faces"
        self.known_face_names = []
        self.known_face_encodings = []
        for filename in os.listdir(folder):
            (file, ext) = os.path.splitext(filename)
            self.known_face_names.append(file)
            image = face_recognition.load_image_file(folder + '/' + filename)
            print(folder + '/' + filename)
            known_face_encoding = face_recognition.face_encodings(image)[0]
            self.known_face_encodings.append(known_face_encoding)
For this to work, we need to add a single image of each authorized person to a folder named "known_faces". The filename should be the name of the person in the image, for example harsh.png.
Next, let's define the streaming() function to start reading frames from the Raspberry Pi camera.
Each frame is further processed to look for faces.
- In the code below, once cv2.imdecode() returns, we have successfully loaded a frame on the server from the Pi camera.
- Next, we call process_frame() to look for authorized faces and update the face_names list.
- Finally, we call the display_frame() function and send data to the server (io.adafruit.com).
    def streaming(self):
        try:
            print("Host: ", self.host_name + ' ' + self.host_ip)
            print("Connection from: ", self.client_address)
            print("Streaming...")
            print("Press 'q' to exit")
            # need bytes here
            stream_bytes = b' '
            while True:
                stream_bytes += self.connection.read(1024)
                first = stream_bytes.find(b'\xff\xd8')  # JPEG start marker
                last = stream_bytes.find(b'\xff\xd9')   # JPEG end marker
                if first != -1 and last != -1:
                    jpg = stream_bytes[first:last + 2]
                    stream_bytes = stream_bytes[last + 2:]
                    self.frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                    self.frame = cv2.flip(self.frame, 0)  # correct the camera orientation (in my case)
                    # At this point we have our image frame;
                    # call the function to recognize faces
                    self.process_frame()
                    # Display the frame
                    self.display_frame()
                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        break
        finally:
            self.connection.close()
            self.server_socket.close()
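Note that main.py ignores the client's length header and instead scans the byte stream for the JPEG start (0xFF 0xD8) and end (0xFF 0xD9) markers. That scanning step can be exercised on its own (a standalone sketch with made-up bytes, not project code):

```python
def extract_jpeg(stream_bytes):
    """Return (jpeg, remaining_bytes) if a complete JPEG is present
    in the buffer, else (None, stream_bytes) to wait for more data."""
    first = stream_bytes.find(b'\xff\xd8')  # start-of-image marker
    last = stream_bytes.find(b'\xff\xd9')   # end-of-image marker
    if first != -1 and last != -1:
        return stream_bytes[first:last + 2], stream_bytes[last + 2:]
    return None, stream_bytes

# Buffer holding one complete (fake) JPEG plus the start of the next
buf = b'noise' + b'\xff\xd8' + b'payload' + b'\xff\xd9' + b'\xff\xd8next'
jpg, rest = extract_jpeg(buf)
assert jpg == b'\xff\xd8payload\xff\xd9'   # complete frame extracted
assert rest == b'\xff\xd8next'             # leftover kept for the next pass
assert extract_jpeg(b'partial\xff\xd8only')[0] is None  # incomplete frame
```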
Next, define the process_frame() function to detect and recognize faces.
- The part below updates the face_names list along with the face_locations list.
- Unauthorized faces are named "Unknown", while authorized faces get their known names.
    def process_frame(self):
        speed = 3  # process every few frames only, to improve speed
        small_frame = cv2.resize(self.frame, (0, 0), fx=0.25, fy=0.25)  # 1/4-size frame
        rgb_small_frame = small_frame[:, :, ::-1]  # convert BGR to RGB
        if (self.process_this_frame % speed) == 0:
            # Find all the faces and face encodings in the current frame of video
            self.face_locations = face_recognition.face_locations(rgb_small_frame)
            self.face_encodings = face_recognition.face_encodings(rgb_small_frame, self.face_locations)
            self.face_names = []
            for face_encoding in self.face_encodings:
                # See if the face is a match for the known face(s)
                matches = face_recognition.compare_faces(self.known_face_encodings, face_encoding)
                name = "Unknown"
                face_distances = face_recognition.face_distance(self.known_face_encodings, face_encoding)
                best_match_index = np.argmin(face_distances)
                if matches[best_match_index]:
                    name = self.known_face_names[best_match_index]  # if a match is found, use the matched name
                self.face_names.append(name)
        if self.process_this_frame == speed:
            self.process_this_frame = 0
        else:
            self.process_this_frame += 1
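The matching rule above (take the known face with the smallest encoding distance, and accept it only if compare_faces() also reports a match, i.e. the distance is within face_recognition's default 0.6 tolerance) can be illustrated with invented distances using NumPy alone:

```python
import numpy as np

tolerance = 0.6  # face_recognition's default match threshold
known_face_names = ['harsh', 'alice']

# Pretend face_distance() returned these distances for one detected face
# (the numbers are made up for this example)
face_distances = np.array([0.35, 0.72])
matches = list(face_distances <= tolerance)  # what compare_faces() computes

name = 'Unknown'
best_match_index = np.argmin(face_distances)
if matches[best_match_index]:
    name = known_face_names[best_match_index]
assert name == 'harsh'  # 0.35 is both the minimum and within tolerance

# A face far from every known encoding stays "Unknown"
face_distances = np.array([0.81, 0.90])
matches = list(face_distances <= tolerance)
name = 'Unknown'
if matches[np.argmin(face_distances)]:
    name = known_face_names[np.argmin(face_distances)]
assert name == 'Unknown'
```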
- Next, we call the display_frame() function to modify the frame, drawing a box around each recognized face and writing its name.
- From within this function, we also call send_adafruit() with the name of the detected face. This sends the photo/current frame, along with the authentication status, to the server at io.adafruit.com.
    def display_frame(self):
        # Display the results
        for (top, right, bottom, left), name in zip(self.face_locations, self.face_names):
            # Call the function to send info to the server
            self.send_adafruit(name)
            # Scale back up the face locations, since the frame we detected in was scaled to 1/4 size
            top *= 4
            right *= 4
            bottom *= 4
            left *= 4
            # Draw a box around the face
            cv2.rectangle(self.frame, (left, top), (right, bottom), (0, 0, 255), 2)
            # Draw a label with the name below the face
            cv2.rectangle(self.frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(self.frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
        # Display the resulting image
        cv2.imshow('Video', self.frame)
STEP 3: Send the detected face along with authentication to io.adafruit.com
- Inside the send_adafruit() function, we first check whether the last call was at least 5 secs ago.
- If so, the photo along with the authorization status is sent to the server, for both known and unknown faces.
    def send_adafruit(self, name):
        check = time.time() - self.start
        print("check=", check)
        if check > 5:  # check if 5 secs have gone by since the last authentication was sent
            if name == 'Unknown':
                authorize(0)
                send_image(self.frame, name)
                cv2.imwrite('unknown_faces/unknown' + '{}.png'.format(int(time.time())), self.frame)
            else:
                authorize(1)
                send_image(self.frame, name)
            self.start = time.time()  # reset the start value
When an unauthorized/unknown person is detected, we also save the frame on our local server (Ubuntu desktop), in the "unknown_faces" folder, with a timestamp in the filename (shown below).
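The 5-second rate limit in send_adafruit() is just an elapsed-time check against self.start. The same pattern, factored into a reusable helper (a hypothetical Throttle class, not part of the project code; unlike send_adafruit(), this variant allows the very first call immediately):

```python
import time

class Throttle:
    """Allow an action at most once per `interval` seconds."""
    def __init__(self, interval):
        self.interval = interval
        # Pre-date the start time so the first call is allowed
        self.start = time.time() - interval - 1

    def ready(self):
        if time.time() - self.start > self.interval:
            self.start = time.time()  # reset, as send_adafruit() does
            return True
        return False

gate = Throttle(5)
assert gate.ready() is True    # first call goes through
assert gate.ready() is False   # immediately after, it is blocked
```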
if __name__ == '__main__':
    # host, port ('' means listen on all interfaces)
    h, p = '', 8000
    SecurityCheck(h, p)
    cv2.destroyAllWindows()  # finally, close the windows
The functions authorize() and send_image() are written in data_feed.py, which is imported at the beginning.
Now, we will write the code for data_feed.py:
- For this part, we first need to register a free account at io.adafruit.com (click here).
- Next, create three feeds named "known", "unknown", and "lock", and complete the setup by following the steps (as shown below):
Next, create a new block to display the received image, and associate it with the feed named "known".
Similarly, create another image block and associate it with the feed named "unknown".
Next, we need one indicator block that indicates the status of our device (on/off).
- The part below is called from within the send_adafruit() function mentioned above. It sends the image and the status value to unlock the device.
The code for data_feed.py is written below:
import time
import base64
import os
import cv2
from Adafruit_IO import Client, Feed, RequestError

ADAFRUIT_IO_KEY = 'YOUR_AIO_KEY'
ADAFRUIT_IO_USERNAME = 'USERNAME'
aio = Client(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)

def send_image(frame, name):
    frame = cv2.resize(frame, (300, 300))
    cv2.imwrite(name + '.jpg', frame)
    print('Camera: SNAP! Sending photo....')
    cam_feed = aio.feeds('known')
    if name == 'Unknown':
        cam_feed = aio.feeds('unknown')
    with open(name + '.jpg', "rb") as imageFile:
        image = base64.b64encode(imageFile.read())
    image_string = image.decode("utf-8")  # encode the b64 bytes as a string for adafruit-io
    try:
        aio.send(cam_feed.key, image_string)
        print('Picture sent to Adafruit IO')
    except RequestError:
        print('Sending to Adafruit IO failed...')
    time.sleep(2)  # camera capture interval, in seconds

def authorize(status):
    print('Camera: SNAP! Sending info...')
    lock_feed = aio.feeds('lock')
    try:
        aio.send(lock_feed.key, status)
        print('Authorization sent to Adafruit IO')
    except RequestError:
        print('Sending to Adafruit IO failed...')
    time.sleep(2)  # camera capture interval, in seconds
Replace username and AIO key with your credentials in the above code.
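As a safer alternative to hardcoding the key in the script, the credentials can be read from environment variables (a sketch; the ADAFRUIT_IO_USERNAME and ADAFRUIT_IO_KEY variable names are my assumption, and the Client call is commented out so the snippet runs without real credentials):

```python
import os

# Fall back to placeholders when the environment variables are unset
ADAFRUIT_IO_USERNAME = os.environ.get('ADAFRUIT_IO_USERNAME', 'USERNAME')
ADAFRUIT_IO_KEY = os.environ.get('ADAFRUIT_IO_KEY', 'YOUR_AIO_KEY')

# from Adafruit_IO import Client
# aio = Client(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)
print('Using account:', ADAFRUIT_IO_USERNAME)
```

This keeps the key out of version control, e.g. `export ADAFRUIT_IO_KEY=...` before running the scripts.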
STEP 4: Read updated values from io.adafruit.com
- Turn the target device on/off based on the values read from the server.
- Also, auto-lock the device after an interval of 10 secs for added security, once it is unlocked.
For now, we have connected green and red LEDs through 220 Ω resistors to the Raspberry Pi's GPIO pins to represent the device status.
- The code below, lock_unlock.py, should run on the Raspberry Pi.
import time
import base64
import os
from Adafruit_IO import Client, Feed, RequestError
import RPi.GPIO as GPIO

ADAFRUIT_IO_KEY = 'YOUR_AIO_KEY'
ADAFRUIT_IO_USERNAME = 'USERNAME'
aio = Client(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
LED_Green = 26
LED_Red = 19
GPIO.setup(LED_Green, GPIO.OUT)
GPIO.setup(LED_Red, GPIO.OUT)

lock_feed = aio.feeds('lock')
while 1:
    try:
        print('processing..')
        data = aio.data('lock')
        for d in data:
            status = d.value
            print(status)
            if status == '1':
                GPIO.output(LED_Green, GPIO.HIGH)
                GPIO.output(LED_Red, GPIO.LOW)
                time.sleep(10)
                # Auto-lock after 10 secs if unlocked
                aio.send(lock_feed.key, 0)
            else:
                GPIO.output(LED_Green, GPIO.LOW)
                GPIO.output(LED_Red, GPIO.HIGH)
    except RequestError:
        print('Failed...')
    time.sleep(4)
Note: use a separate terminal to run this code.
Replace username and AIO key with your credentials in the above code.
The video below demonstrates: face recognition > device ON > 10-sec interval > device OFF.
STEP 5: Add a Manual Assistance button to turn the device ON/OFF
There might be situations when we need to grant authorization to an unknown person.
- This can be done manually with a simple toggle button on the server (io.adafruit.com).
- Toggling this button changes the value of the "lock" feed, which in turn updates the indicator block.
- Also, the lock_unlock.py code running on the Pi updates the LED status.
First, we need to add a new toggle button block by following the steps below.
Now, trigger the "lock" feed when the Manual Assistance button is toggled.
For this, we first need to create a new trigger, as shown below, to set the "lock" feed value to 1 when the button is set to ON.
Similarly, create another trigger to set the "lock" feed value to 0 when the button is set to OFF.