According to one study, lighting accounts for about 19% of the world's electricity consumption, and street lights alone consume almost 21 TWh of energy. We do need street lights, but their usage can certainly be reduced. A traditional photosensor could be used here, but in this tutorial we will implement something different: a street-light energy-conservation system that uses a Raspberry Pi and a camera to estimate the visuality and population of a place and then decide the intensity of light. Visuality is determined using the HSV image format, and population by object detection with YOLO.
Visuality
What do you understand by the visuality of an image? It can be understood as how distinctly every object can be seen in the image, or how much light is required for the image to be clear. For example, an image captured during the day is much more visible than one captured at night. So if it's only about night and day, why not use a photosensor? A photosensor simply turns the lights on as it gets dark, but that is not what we need. Not every place is dark at night: between ads, restaurants, and the headlights of passing cars, almost every urbanized street today is lit up even without street lights. So why do we need them at full power? We can often avoid them and save power.
1. HSV
To understand the HSV (Hue-Saturation-Value) format, we will start from the basics. The digital image we usually see is in RGB (Red-Green-Blue) format. In this format, the image is represented by a 3-dimensional matrix, where two dimensions are the height and width of the image and the third holds the red, green, and blue channels. Take the example of a small 3-pixel image.
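To make the representation concrete, here is a minimal sketch of such a matrix, assuming a hypothetical 1x3-pixel image (one red, one green, one blue pixel):
import numpy as np

# A 1x3-pixel RGB image: shape is (height, width, channels)
img_rgb = np.array([[[255, 0, 0],    # red pixel
                     [0, 255, 0],    # green pixel
                     [0, 0, 255]]],  # blue pixel
                   dtype=np.uint8)
print(img_rgb.shape)  # (1, 3, 3)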
Many such formats exist, and one of particular importance is the HSV format. It has three channels, Hue, Saturation, and Value, and it helps perceive an image from a human-eye point of view. It also has a big advantage over other formats, since it separates the chroma from the luma.
- Hue – The chroma, or color value, of the image; it tells us which color appears where.
- Saturation – The "colorfulness of a stimulus relative to its own brightness", which simply means how much of the color is used.
- Value – The luma, or brightness, of the image. A color with 0 brightness is black, whatever its hue. A single-pixel conversion (sketched below) makes these channels concrete.
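Here is a minimal sketch of that conversion, assuming OpenCV is installed; note that OpenCV stores pixels in BGR order:
import cv2
import numpy as np

# A single pure-red pixel in BGR order
pixel = np.uint8([[[0, 0, 255]]])
print(cv2.cvtColor(pixel, cv2.COLOR_BGR2HSV))  # [[[0 255 255]]] -> hue 0, full saturation, full value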
2. Algorithm
Since we need to find the visuality, we will be using the HSV format. To start coding, install OpenCV on your system:
pip install opencv-python
Then import it in a new Python file:
import cv2
Capture an image using this library:
# Access the webcam and start the video
vid = cv2.VideoCapture(0)
capturing = True
while capturing:
    # Capture a single frame as an image
    ret, frame = vid.read()
    capturing = False
# Stop recording and release the camera
vid.release()
cv2.destroyAllWindows()
Now, by default, the frame captured by OpenCV is in BGR format, which needs to be converted:
img_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
Now, to develop a proper algorithm, or rather a formula, that can detect the visuality of an image, analyze the HSV values of dark and bright images. In OpenCV, the maximum values of the HSV channels are 179, 255, and 255 respectively. For visuality we will not need the chroma of the image, but mostly the intensity (value) and a small contribution from saturation.
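As a quick sanity check (a sketch assuming the img_hsv frame from above), you can split the channels and confirm these ranges:
# Split the HSV frame into its three channels and inspect their ranges
h, s, v = cv2.split(img_hsv)
print(h.max(), s.max(), v.max())  # at most 179, 255, 255 in OpenCV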
Download a relatively dark image and load it in a new file:
dark = cv2.imread("dark.jpg")
Convert it to HSV and plot the respective values:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
Now extract the pixel values into the plotting library and plot channel by channel. Note that cv2.imread returns BGR, while matplotlib expects RGB values in the range [0, 1]:
array = np.array(dark[..., ::-1], dtype=float) / 255.0  # BGR -> RGB, scaled to [0, 1]
img_hsv = matplotlib.colors.rgb_to_hsv(array)
# pull out just the hue channel
lu = img_hsv[..., 0].flatten()
plt.hist(lu, 256)
plt.show()
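To compare brightness rather than hue, plot the value channel (index 2) in the same way:
# pull out the value (brightness) channel instead
lu = img_hsv[..., 2].flatten()
plt.hist(lu, 256)
plt.show()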
Similarly, take a brighter image and compare its values:
bright = cv2.imread("bright.jpg")
As you can see, the Value parameter varies drastically between the images, and there is a difference in the saturation values as well. Concluding from this, we will use Value as the major parameter and Saturation as a minor one to determine whether the LED/street lights should stay ON or OFF.
3. Power
Using the above parameters we determine the intensity of light. With hit and trial, various formulae can be used to relate the intensity and the parameters; I have one which we will be coding today. It is a simple one, as the parameters are more or less inversely proportional to intensity. Start by finding the average values of Saturation and Value:
avg_saturation = np.mean(img_hsv[:, :, 1])
avg_value = np.mean(img_hsv[:, :, 2])
print(avg_saturation, avg_value)
Now the main formula:
led_val = 255 - (avg_value / 1.2) - (avg_saturation / 4)
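To see how the formula behaves, consider two hypothetical frames (the averages below are made-up illustrative numbers):
# Dark frame: avg_value = 40, avg_saturation = 60
# led_val = 255 - 40/1.2 - 60/4 = 255 - 33.3 - 15 ≈ 206.7 -> bright LEDs needed
# Bright frame: avg_value = 220, avg_saturation = 80
# led_val = 255 - 183.3 - 20 ≈ 51.7 -> LEDs can stay off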
Calculate the analog power that the lights have to output:
if led_val < 130:
    power = 0
else:
    power = ((led_val - 100) / 125) * 255
print(power)
Population
Now we come to the second part of the project. In this section, we will use object detection to conclude whether the street light at a place is necessary or not. The number of people, cars, bikes, and trucks present in the vicinity of the place will determine whether the street light is required.
1. Object Detection
There are many state-of-the-art object detection techniques, such as Haar cascades, SSD, and YOLO. In this tutorial YOLO will be used, since it offers a strong balance of speed and accuracy. Another advantage of YOLO is its detection capability even in darkness. All its weights and files can be found at this link: https://pjreddie.com/darknet/yolo/
2. YOLOV3
YOLO (You Only Look Once) is one of the best object detection techniques available, being both fast and highly accurate. It takes the image in question as input and divides it into a fixed grid of cells. Each cell predicts a set of bounding boxes that it thinks contain objects, along with class probabilities, and the final output is determined by the confidence level of those boxes, with non-maximum suppression merging overlapping ones. There have also been several versions over the years, each one trained better than the last, with better neural networks in play.
Before coding, three files must be downloaded into the current working directory: the weights ('yolov3.weights'), the labels ('coco.names'), and the 'yolov3.cfg' configuration file.
Start by importing some dependencies into a new Python file:
import numpy as np
import argparse
import time
import cv2
import os
Declare a new function which takes in an image and returns the object detection output:
def yolo(image):
    (H, W) = image.shape[:2]
    boxes = []
    confidences = []
    classIDs = []
In the same function, define the arguments required by YOLO:
    ap = argparse.ArgumentParser()
    # filters out all bounding boxes below a certain confidence
    ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections")
    # defines a threshold used when merging overlapping boxes
    ap.add_argument("-t", "--threshold", type=float, default=0.3, help="threshold when applying non-maxima suppression")
    args = vars(ap.parse_args())
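With these defaults the script can be run as-is, or the values can be overridden from the command line, e.g. python main.py -c 0.6 -t 0.4 (the file name here is hypothetical).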
YOLO is trained on the COCO dataset, so load its labels:
LABELS = open("coco.names").read().strip().split("\n")
For each object class that can be detected, assign a distinct color:
np.random.seed(42)
COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8")
3. Prediction
To predict the output from YOLO, load the network and its files, then use them to get the output.
Load the weights and '.cfg' file from the directory, and also get the output layer names:
print("[INFO] loading YOLO from disk")
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
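Note that in newer OpenCV releases getUnconnectedOutLayers() returns a flat array of indices; if the line above throws an IndexError, this version-agnostic form should work instead:
# For OpenCV versions that return a 1-D index array
ln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]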
Still inside the yolo() function, run the network on the image:
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    # providing the network with input
    net.setInput(blob)
    start = time.time()
    # getting the output
    layerOutputs = net.forward(ln)
    end = time.time()
    print("[INFO] YOLO took {:.6f} seconds".format(end - start))
Now use the output to extract the detections, and return the labels and class IDs:
    for output in layerOutputs:
        for detection in output:
            scores = detection[5:]
            classID = np.argmax(scores)
            confidence = scores[classID]
            if confidence > args["confidence"]:
                box = detection[0:4] * np.array([W, H, W, H])
                (centreX, centreY, width, height) = box.astype("int")
                x = int(centreX - (width / 2))
                y = int(centreY - (height / 2))
                boxes.append([x, y, int(width), int(height)])
                confidences.append(float(confidence))
                classIDs.append(classID)
    return LABELS, classIDs
For our purpose we will not be using the full algorithm. We do not need to draw bounding boxes or apply non-maximum suppression, since even if a single person is present the lights must be turned on.
4. Labels
Using the classes determined by the algorithm, we draw a conclusion on whether to keep the lights on or turn them off. If the image has people or vehicles in it, we need the lights, so turn them on; otherwise, turn them off. In the main Python file, call the YOLO method, passing the image captured.
power = led_power(frame)
LABELS, classIDs = yolo(frame)
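The led_power() call above is assumed to wrap the HSV power computation from the Visuality section into a helper; a minimal sketch of such a function:
def led_power(frame):
    # assumed helper: HSV analysis and power formula from the Visuality section
    img_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    avg_saturation = np.mean(img_hsv[:, :, 1])
    avg_value = np.mean(img_hsv[:, :, 2])
    led_val = 255 - (avg_value / 1.2) - (avg_saturation / 4)
    if led_val < 130:
        return 0
    return ((led_val - 100) / 125) * 255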
Using the class IDs and labels, append all the detected object names to a list:
objects = []
for i in range(len(classIDs)):
    objects.append(LABELS[classIDs[i]])
Define another list to compare against:
classes = ['person', 'bicycle', 'car', 'motorbike', 'bus', 'truck']
Define a new function that finds and returns whether any objects needing light are present in the area:
def common_member(a, b):
    a_set = set(a)
    b_set = set(b)
    if a_set & b_set:
        return True
    else:
        return False
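A quick hypothetical usage check:
# made-up detections from a single frame
objects = ['car', 'person', 'traffic light']
print(common_member(objects, classes))  # True -> the street light is needed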
Raspberry Pi
The Raspberry Pi is a small computer. It mostly uses Raspbian as its operating system but, as of today, can support almost anything. From home automation to self-driving cars, it is used in most hardware projects. We will use it to capture images from the camera, process them, and produce an output.
1. Requirements
To complete the whole project, some hardware components are a must:
- Raspberry Pi
- Camera
- LEDs (3-4)
- Resistors
2. Setup
Connect the camera to the Raspberry Pi camera port. Add 3-4 LEDs to the GPIO pins of the Raspberry Pi through the resistors. Follow the circuit diagram given below:
3. Integration
Finally, add the code to drive the LEDs through the GPIO pins. The LEDs act as a demo for our proof of concept. Create a new Python file and import the following:
import RPi.GPIO as GPIO
Create a new function to initialize and set the pins of the LEDs:
def init():
    GPIO.setmode(GPIO.BOARD)
    # voltage pins for the LEDs
    GPIO.setup(2, GPIO.OUT)
    GPIO.setup(3, GPIO.OUT)
    GPIO.setup(4, GPIO.OUT)
    # ground pin, held low
    GPIO.setup(5, GPIO.OUT)
    GPIO.output(5, GPIO.LOW)
    return [2, 3, 4, 5]
A new function to switch the LEDs on. Note that RPi.GPIO's PWM(pin, freq) takes a frequency, and the duty cycle (0-100) is set with start():
def ON(power, pins):
    for i in range(len(pins) - 1):
        # create a PWM instance at 100 Hz and map the 0-255 power value to a duty cycle
        pwm = GPIO.PWM(pins[i], 100)
        pwm.start(min(power / 255 * 100, 100))
Another function to switch the LEDs off:
def OFF(pins):
    for i in range(len(pins) - 1):
        GPIO.output(pins[i], GPIO.LOW)
Output
In the main Python file, call all the respective functions, print the output in the terminal, and show it by lighting the LEDs.
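A minimal sketch of how the pieces might fit together in the main file, assuming init(), ON(), OFF(), yolo(), common_member(), led_power(), and the classes list from above are all in scope:
# Capture one frame from the camera
vid = cv2.VideoCapture(0)
ret, frame = vid.read()
vid.release()

pins = init()
power = led_power(frame)
LABELS, classIDs = yolo(frame)
objects = [LABELS[c] for c in classIDs]

# Light the LEDs only when it is dark enough and someone is around
if power > 0 and common_member(objects, classes):
    ON(power, pins)
else:
    OFF(pins)
print(power, objects)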
With this, we come to the end of our tutorial on street-light energy conservation using a Raspberry Pi, a camera, and core Python libraries. I hope you learned something new today, including the YOLOv3 algorithm. If any doubts remain or errors pop up, try reading the respective documentation or comment below. The entire working code and original directory of this project can be found in this GitHub repository: https://github.com/Shaashwat05/Street_lights