How to Detect Faces in a Live Video Stream and Blur Them Using OpenCV

Rishabh Arya
3 min read · Aug 3, 2023

Problem Statement:

Detect human faces in a live video stream, find their positions, and blur the detected face regions while leaving the rest of the frame untouched.

We will use the OpenCV library to solve this problem.

What is Opencv?

OpenCV is a great tool for image processing and computer vision tasks. It is an open-source library that can be used for tasks like face detection, object tracking, landmark detection, and much more.

How to install OpenCV on your system?

First, open the command prompt and install the Python OpenCV library with the command below:

pip install opencv-python

We will also need a Haar cascade classifier file to detect faces in the video, so let's first look at what a Haar cascade classifier is.

What is the Haar Cascade Classifier?

Haar Cascade Classifier is an Object Detection Algorithm used to identify faces in an image or a real-time video. It employs a machine learning approach for visual object detection which is able to process images very quickly and achieve high detection rates.

Face Detection with Haar Cascade Classifier -

OpenCV uses Haar feature-based cascade classifiers and ships with pre-trained XML files for various Haar cascades. Training such a classifier requires a lot of positive and negative images; the models produced from this training are available in the OpenCV GitHub repository: https://github.com/opencv/opencv/tree/master/data/haarcascades.

Now, we will start writing the code.

Firstly, import the library and load the pre-trained frontal-face classifier, which we will refer to as model:

import cv2

model = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

Next, set the video source to the default webcam, which OpenCV can capture with VideoCapture:

cap = cv2.VideoCapture(0)

We capture the video in a loop. The read() function reads one frame from the video source, which in this example is the webcam. It returns:

  1. A return code (True if a frame was read successfully)
  2. The actual video frame (one frame on each loop)

while True:
    ret, img = cap.read()

Inside the loop, we use the detectMultiScale() function to detect the faces in each frame. The classifier works on grayscale images, so we convert the frame first:

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face = model.detectMultiScale(gray)

Now we use the detected coordinates to blur the faces in our video. detectMultiScale() returns one (x, y, w, h) rectangle per face, and indexing only face[0] would miss additional faces and crash when no face is present, so we loop over all detections:

for (x1, y1, w, h) in face:
    x2 = x1 + w
    y2 = y1 + h
    cimg = img[y1:y2, x1:x2]
    blur_img = cv2.blur(cimg, (50, 50))
    img[y1:y2, x1:x2] = blur_img

Now we show the frame using the imshow() function. cv2.waitKey() waits the given number of milliseconds for a key press; here we break out of the loop when you press Enter (key code 13) on the keyboard, and destroyAllWindows() then closes all the windows we created.

    cv2.imshow("Blurred Face", img)
    if cv2.waitKey(100) == 13:
        break

cap.release()  # free the webcam before closing the windows
cv2.destroyAllWindows()

Demo Video-

That's all! I hope you now have an idea of how to detect faces in a live video stream and blur them using OpenCV.

Thank you for reading this article.


Rishabh Arya

I am an active learner who likes to challenge every problem with a can-do mindset in order to make any idea a reality.