American Journal of Data Mining and Knowledge Discovery
Volume 2, Issue 1, March 2017, Pages: 8-14

Motion Detection of Some Geometric Shapes in Video Surveillance

Larbi Guezouli*, Hanane Belhani

LaSTIC Laboratory, Department of Computer Science, University of Batna, Batna, Algeria


*Corresponding author

To cite this article:

Larbi Guezouli, Hanane Belhani. Motion Detection of Some Geometric Shapes in Video Surveillance. American Journal of Data Mining and Knowledge Discovery. Vol. 2, No. 1, 2017, pp. 8-14. doi: 10.11648/j.ajdmkd.20170201.12

Received: December 21, 2016; Accepted: January 6, 2017; Published: January 30, 2017


Abstract: Motion detection is an active research topic. Moving objects are an important clue for smart video surveillance systems. In this work we address motion detection in video surveillance systems. The aim of our work is to propose solutions for the automatic detection of moving objects in real time with a surveillance camera. We are interested in objects that have certain geometric shapes (circle, ellipse, square, and rectangle). The proposed approaches are based on background subtraction and edge detection. The proposed algorithms mainly consist of three steps: edge detection, extraction of objects with certain geometric shapes, and motion detection of the extracted objects.

Keywords: Video Surveillance, Motion Detection, Real-Time System, Pattern Recognition


1. Introduction

Nowadays, video surveillance systems, called Closed Circuit TeleVision (CCTV), are useful in several fields of security. In this regard, we are working to develop smart video surveillance solutions that automatically detect moving objects with certain geometric shapes in real time.

There are many problems to be solved in this field of research. The first step in the motion detection process is the detection of the background. In this step, problems associated with extraction conditions cause poor detection of the background. Among these problems we cite lighting changes, repetitive movements, clutter and non-rigid moving objects appearing in the foreground.

In real-time detection, motion detection is affected by this poor detection of the background. Therefore, reliable detection of moving objects is facilitated by the detection of a good background image [1-5].

Several challenges may arise from the nature of video surveillance systems. These challenges are as follows [6]:

Brightness or illumination changes: A background model must adapt to gradual and sudden changes in the appearance of the environment, as well as progressive illumination changes such as the change in outdoor light intensity during the day (e.g., moving clouds).

Dynamic background: A natural scene usually contains dynamic objects, such as shaking trees, swaying curtains, undulating water surfaces, waving flags, etc.

Moving object: If a foreground object leaves the scene, it creates a ghost (a region detected as moving that does not correspond to a moving object); the background model must then absorb this region into the background. For example, if a parked car leaves the scene, the corresponding region should be accepted as part of the background.

Video noise: Video images may contain brightness or color variations, called noise. A background extraction model must cope with degraded videos affected by different types of noise, such as sensor noise or compression artifacts.

Existing approaches have several limitations, which motivate our proposals [7-10]. Among these problems:

Real-time operation.

Detection of moving objects with certain geometric shapes (square, rectangle, circle).

For this reason, the main goal of our research is to provide solutions that detect moving objects with certain geometric shapes in real time.

In our work, we seek to propose a method that reduces the effect of the challenges described above and runs in a fairly effective and efficient manner in terms of results and response time.

2. Motion Detection in Literature

Before reviewing existing methods, we first define some notions: motion detection, motion estimation, background modeling, background subtraction, and motion-based segmentation.

Motion detection methods

Algorithms of this kind try to find at which points of the image a movement took place [11].

Motion estimation methods

Motion estimation is motion detection with the added constraint that the provided result must be quantitative. These methods compute the displacement of objects in a video sequence from the correlation between two successive images in order to predict the change of position [12].

Background modeling methods

In this kind of method, a background model of the scene (without any moving object) is created [11].

Background subtraction methods

Background subtraction is the operation that logically follows background modeling in order to obtain motion detection: an absolute difference between the background model and the current image is computed [13].

Motion-based segmentation methods

These methods try to segment each image into regions that are homogeneous in motion, using optical flow or spatio-temporal derivatives of light intensity.

In the literature, various methods are proposed to decide which parts of images correspond to moving objects, such as: motion detection without background modeling, local modeling of the background, semi-local modeling of the background, and global modeling of the background.

2.1. Detection without Background Modeling

These methods detect motion by computing the importance of the visible motion of each pixel. This importance is computed using the temporal derivative of the light intensity, the spatio-temporal entropy of the image, or the norm of the optical flow [11].

2.2. Local Modeling of Background

Each point is associated with a value or a function modeling the appearance of the background at this point. This appearance model depends only on the observations made at this point; the other pixels do not intervene [11].

2.3. Semi-Local Modeling of Background

As in the previous family of methods, each point is associated with a value or a function modeling the appearance of the background at this point, but here the model depends on observations made in the vicinity of the point [11].

2.4. Global Modeling of Background

These methods use all the observations to build a model of the background [11].

3. Proposed Approaches

In this section we present the proposed approaches for finding moving objects with specific shapes in real time. We propose two different approaches.

3.1. Approach Based on Background Subtraction

In this approach, we seek to detect moving objects that have a specific geometric shape. The approach is decomposed into several steps:

The first step consists in the detection of the background, i.e. the image of the scene without movement (containing only stationary objects). The first acquired images can be used to build the model of the ideal background. The background is determined iteratively to solve occlusion problems caused by foreground objects. The Modified Moving Average (MMA) [14] is used to average the first K frames to generate the initial background model. For each pixel (x, y), the corresponding value of the current background model Bt(x, y) is calculated using the following formula:

Bt(x, y) = Bt-1(x, y) + (It(x, y) - Bt-1(x, y)) / t    (1)

where:

Bt-1 (x, y) is the previous background model.

It (x, y) is the current video image captured.

t is the index of the captured image.
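The per-pixel MMA update above can be sketched as follows. This is a minimal illustrative Python sketch using nested lists as grayscale images; the paper's actual implementation uses C++ with OpenCV, so all names here are our own.

```python
def mma_update(background, frame, t):
    """Update the running background model with the t-th frame (t >= 1),
    applying Bt = Bt-1 + (It - Bt-1) / t to each pixel."""
    return [
        [b + (i - b) / t for b, i in zip(b_row, i_row)]
        for b_row, i_row in zip(background, frame)
    ]

# Toy 2x2 grayscale frames: the model converges to the frame average.
bg = [[0.0, 0.0], [0.0, 0.0]]
frames = [[[10, 20], [30, 40]], [[20, 20], [30, 40]]]
for t, frame in enumerate(frames, start=1):
    bg = mma_update(bg, frame, t)
print(bg)  # [[15.0, 20.0], [30.0, 40.0]]
```

After K frames, `bg` holds the average of the frames seen so far, which serves as the initial background model.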

In the second step we preprocess the background image and subsequently acquired images. It is necessary to eliminate noise:

Color space conversion, e.g. to grayscale or YCrCb. These color spaces are expected to be more robust to shadows and brightness changes than RGB.

Application of smoothing (filtering or blurring) to eliminate noise.

The third step is dedicated to the subtraction of the background from the foreground, i.e. modeling the image of moving objects without the background. Several algorithms are available [15]. This step requires the background and the image acquired at time t. An absolute difference between the two images is computed to obtain the difference image. The resulting image is then segmented by thresholding:

Pixels of the difference image are replaced by black pixels (0) if they are pixels of the background.

Pixels of the difference image are replaced by white pixels (255) if they are pixels of the foreground.

Dt(x, y) = 255 if |It(x, y) - Bt(x, y)| > T, 0 otherwise    (2)

The threshold T is used to determine whether a pixel belongs to the background or to the foreground. It is obtained through a mathematical formula [14] as follows:

T = λ · (1 / (N · M)) · Σx,y |It(x, y) - B(x, y)|    (3)

where:

λ is an inhibition coefficient whose value varies depending on the environment; the reference value is 2, and in our case it is set to 1.

N × M is the size of the images.

It is the current captured image.

B is the image of the background created by the previous step.

This threshold reflects global changes in the scene: it takes a low value if there is little change in brightness in the image and increases if there are many changes.
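The adaptive threshold and the thresholding segmentation described above can be sketched as follows. This is an illustrative Python sketch over list-based grayscale images; variable names are our own, not the paper's.

```python
def adaptive_threshold(frame, background, lam=1.0):
    """Global threshold T = lam * mean(|It - B|) over the N x M image."""
    n = sum(len(row) for row in frame)
    total = sum(abs(i - b)
                for i_row, b_row in zip(frame, background)
                for i, b in zip(i_row, b_row))
    return lam * total / n

def segment(frame, background, threshold):
    """Foreground pixels (|It - B| > T) become white (255), background black (0)."""
    return [[255 if abs(i - b) > threshold else 0
             for i, b in zip(i_row, b_row)]
            for i_row, b_row in zip(frame, background)]

bg    = [[10, 10], [10, 10]]
frame = [[12, 10], [90, 11]]
T = adaptive_threshold(frame, bg)   # mean |diff| = (2 + 0 + 80 + 1) / 4 = 20.75
print(segment(frame, bg, T))        # [[0, 0], [255, 0]]
```

Only the pixel whose difference (80) exceeds the global mean difference is marked as foreground; small fluctuations stay below T.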

Figure 1. General scheme of proposed approach based on background subtraction.

Additional preprocessing is needed to improve the detection of foreground objects, since some residual noise remains in the segmented difference image. We apply morphological operations such as dilation, erosion, opening and closing [16].
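A minimal sketch of this morphological cleanup: 3x3 binary erosion and dilation composed into an opening, which removes isolated noise pixels from the segmented mask. This is an illustrative pure-Python version of operations that OpenCV provides directly (erode, dilate, morphologyEx).

```python
def erode(img):
    """3x3 binary erosion: a pixel stays foreground only if its whole
    3x3 neighbourhood is foreground (border pixels are cleared)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 255
    return out

def dilate(img):
    """3x3 binary dilation: a pixel becomes foreground if any neighbour is."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[ny][nx]
                   for ny in range(max(0, y - 1), min(h, y + 2))
                   for nx in range(max(0, x - 1), min(w, x + 2))):
                out[y][x] = 255
    return out

# Opening = erosion then dilation: the lone noise pixel at (1, 1)
# disappears while the solid 3x3 object block survives.
noisy = [[0, 0,   0,   0,   0],
         [0, 255, 0,   0,   0],
         [0, 0,   255, 255, 255],
         [0, 0,   255, 255, 255],
         [0, 0,   255, 255, 255]]
opened = dilate(erode(noisy))
```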

The fourth step represents the core of our work. It consists in the detection of moving objects that have a certain geometric shape, i.e. it is based on geometric and spatial characteristics of objects. The following processes are applied:

Edge detection using the Canny detector [17], which guarantees good detection (low error rate), good localization and clarity of response (no false positives).

Geometric information (contour convexity, the number of vertices or corners of the contour, the contour surface) is used to extract contours of objects with a specific geometric shape (circle, ellipse, square, rectangle).

The image of detected objects is built. This image contains only extracted objects.
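The shape extraction step can be illustrated by a simplified classifier. In an OpenCV implementation, the vertex count, convexity and bounding box would come from routines such as approxPolyDP, isContourConvex and boundingRect; the function below is a hypothetical sketch of the decision logic only, with an assumed aspect-ratio tolerance.

```python
def classify_shape(n_vertices, convex, bbox_w, bbox_h, tol=0.1):
    """Label a contour from its polygon approximation.

    n_vertices : vertex count of the approximated polygon
    convex     : whether the contour is convex
    bbox_w/h   : width and height of the contour's bounding box
    tol        : assumed aspect-ratio tolerance for 'square-like' boxes
    """
    if not convex:
        return "other"
    aspect_close = abs(bbox_w - bbox_h) / max(bbox_w, bbox_h) <= tol
    if n_vertices == 4:                       # quadrilateral
        return "square" if aspect_close else "rectangle"
    if n_vertices >= 8:                       # many vertices: smooth, round outline
        return "circle" if aspect_close else "ellipse"
    return "other"

print(classify_shape(4, True, 50, 50))    # square
print(classify_shape(4, True, 80, 40))    # rectangle
print(classify_shape(12, True, 60, 30))   # ellipse
```

Only contours labeled with one of the sought shapes are copied into the built image of detected objects.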

The last step consists in motion detection, using a similarity measure (SAD: Sum of Absolute Differences) between the built images of detected objects (the image at time t and the image at time t-1).

In this step we detect motion using the Sum of Absolute Differences (SAD) [18]. This measure is mainly used to assess the similarity between two images. SAD is given by the following equation:

SAD = Σx,y |It(x, y) - It-1(x, y)|    (4)

where:

It-1 is the built image at time (t-1).

It is the built image at time (t).

N × M is the size of the images.

A distance value greater than the threshold means that there is motion in the current image.
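The SAD-based motion decision can be sketched as follows. The threshold value here is an assumed placeholder; in practice it would be tuned per scene.

```python
def sad(prev, curr):
    """Sum of Absolute Differences between two images of equal size."""
    return sum(abs(p - c)
               for p_row, c_row in zip(prev, curr)
               for p, c in zip(p_row, c_row))

prev = [[0, 0], [0, 0]]
curr = [[0, 0], [255, 0]]      # one object pixel appeared between frames
THRESHOLD = 100                # assumed value, tuned per scene
moving = sad(prev, curr) > THRESHOLD
print(moving)  # True
```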

3.2. Approach Based on Outlines Detection

In this approach, we seek to detect moving objects that have a specific geometric shape by working directly on captured images.

This approach is decomposed into several steps:

The first step consists in preprocessing the acquired images, which is necessary to remove noise and other problems:

Color space conversion, e.g. to grayscale or YCrCb. These color spaces are expected to be more robust to shadows and brightness changes than RGB.

Application of smoothing to eliminate noise.

In the second step we detect outlines using the Canny detector [17]. We keep only convex and closed contours.

The third step is dedicated to the extraction of objects that have a specific geometric shape (square, rectangle, circle, and ellipse). The extraction is based on geometric characteristics of the contours (convexity, the number of contour vertices, computed angles, and the contour surface) to sense the shapes of objects. The image of detected objects is then built.

Figure 2. General scheme of proposed approach based on outlines detection.

The last step is the motion detection process, using the images constructed by the previous step (the image at time t and the image at time t-1). Motion detection is done as in the first approach.

4. Evaluation and Experiments

4.1. Work Tools

The programming language used is C++ under the Microsoft Visual Studio 2013 environment, with the OpenCV 2.4.10 library (Open Source Computer Vision Library).

4.2. Evaluation Tools and Tests

To evaluate our work we used the software and hardware tools shown in Table 1.

Table 1. Software and Hardware Evaluation Tools.

Software Tools
Operating system Microsoft Windows 8.1 Professional
Compiler C++ under Microsoft Visual Studio Ultimate 2013
Library OpenCV 2.4.10
Hardware Tools
Processor Intel(R) Core(TM) i3-380M (2.53 GHz)
RAM 6.00 GB
Camera 1.3M HD Webcam

4.3. Results

We present in this section the results obtained from different experiments on our system.

Figure 3 shows the implementation of the approach based on background subtraction. The goal is to detect moving objects with specific geometric shape (square, rectangle, circle, and ellipse).

Figure 3. Scheme of the approach based on background subtraction.

In Figure 4, we present the implementation of the approach based on outlines detection using captured images to detect edges used in the extraction process of moving objects with specific geometric shape.

Figure 4. Scheme of the approach based on outlines detection.

4.4. Evaluation

To assess the performance of our approaches, we conducted several experiments and evaluations.

4.4.1. Execution time and Complexity

Execution time is very important in our system because we need a real-time system. We present here the average execution time (in the sequential and parallel cases) of our system's modules.

Table 2. Mean Execution Time of Basic System Modules.

Modules Sequential time Parallel time
Background Detector 9.89 sec 3.36 sec
Foreground Detector 0.62 sec 0.15 sec
Motion Detector 0.70 sec 0.14 sec

Figure 5. Comparison between sequential and parallel execution time.

4.4.2. Detection Time

The detection time of a moving object depends on the detection of its contour, which in turn depends on other characteristics, among them the speed of the object. The speed has a great influence on how clearly the object appears, and hence on detecting its outline and shape. It can be calculated from the position of the object at two moments, the previous and the current time. The position of the object is extracted using spatial moments (Hu moments) [19].

V = |Pi - Pi-1| / (Ti - Ti-1)    (5)

where:

V: The speed of the object.

Pi-1 and Pi are the positions of the object's center in the two captured images where it appears (the current image i, where it is detected, and the previous one, i-1).

Ti and Ti-1: Times of images i and i-1.
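The speed computation above can be sketched as follows, assuming positions are already converted to meters via the drawing scale; math.dist gives the Euclidean distance between the two centers.

```python
import math

def object_speed(p_prev, p_curr, t_prev, t_curr):
    """Speed of the object: distance between its centre in two frames
    divided by the elapsed time (positions in metres, times in seconds)."""
    return math.dist(p_prev, p_curr) / (t_curr - t_prev)

# Centre moved 0.3 m right and 0.4 m up in half a second -> about 1.0 m/sec.
print(object_speed((1.0, 1.0), (1.3, 1.4), 0.0, 0.5))
```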

In our experiments, we determine the speed limit up to which the system can detect the edge of an object and its shape. We use a video sequence capturing an object in circular motion. Table 3 and Table 4 present the distance between two successive positions of the object, the elapsed time, and the speed. The distance is computed in pixels and then converted to meters (m) using the drawing scale. Time is given in seconds (sec) and speed in m/sec.

Table 3. Values of Distance Traveled, Time Consumed and Speed of a Moving object in a Video Sequence (Approach based on background subtraction).

Pi - Pi-1 (m)    0.2   0.48  0.36  0.68  0.50  0.80  0.96  0.8   No detection
Ti - Ti-1 (sec)  0.5   1.01  0.6   0.99  0.52  0.62  0.66  0.53
Speed (m/sec)    0.4   0.47  0.6   0.69  0.96  1.29  1.45  1.5

Table 4. Values of Distance Traveled, Time Consumed and Speed of a Moving object in a Video Sequence (Approach based on outlines detection).

Pi - Pi-1 (m)    0.004  0.02  0.025  0.36  0.06  0.6   0.68  0.8   No detection
Ti - Ti-1 (sec)  0.23   0.23  0.22   1.52  0.23  0.89  0.89  0.67
Speed (m/sec)    0.02   0.09  0.11   0.24  0.27  0.67  0.76  1.19

According to these results, the first approach cannot detect moving objects with a speed greater than 1.5 m/sec, and the second cannot detect moving objects with a speed greater than 1.19 m/sec.

4.4.3. Metric Evaluation

We used the recall r, precision p and F1 measures to evaluate the performance of our system.

Definitions of these measures and their formulas are given as follows:

Recall: The ratio between the number of correctly detected objects which have the desired geometric shape and the total number of existing objects in the scene which have the desired geometric shape.

r = TP / (TP + FN)    (6)

Precision: The ratio between the number of correctly detected objects which have desired geometric shape and the total number of detected objects (which have or have not the desired geometric shape).

p = TP / (TP + FP)    (7)

such as:

TP: The number of correctly detected objects which have desired geometric shape.

FN: The number of objects in the scene that are not detected and have the desired geometric shape.

FP: The number of detected objects in the scene which do not have the desired geometric shape.

F1-measure: This score can be interpreted as a weighted average of precision and recall.

F1 = 2 · p · r / (p + r)    (8)

F1 reaches its best value at 1 and its worst at 0.
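The three measures can be computed from the detection counts as follows; the sample values are taken from the first row of Table 5 (one object in the scene).

```python
def evaluate(tp, fp, fn):
    """Recall, precision and F1 from detection counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# First row of Table 5: 150 appearances, 124 TP, 1 FP, 26 FN.
r, p, f1 = evaluate(tp=124, fp=1, fn=26)
print(round(r, 2), round(p, 2), round(f1, 2))  # 0.83 0.99 0.9
```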

To compute recall and precision, we use videos captured in different rooms containing objects at different positions relative to the camera (close, far) and of different sizes (small, large). Different features are also varied: object color (light or dark), number of objects, background (one or more colors, light or dark) and object speed.

We present the results of the different tests in the following tables. We use video sequences, where each sequence is a set of captured images.

Table 5 and Table 6 present values of different measures (recall, precision and F1-measure) obtained from proposed approaches.

Table 5. Values of the Recall, Precision and F1-Measure Obtained With the Approach Based on Background Subtraction.

Scenes Number of appearances of sought objects TP FP FN R P F1
One object in the scene 150 124 1 26 0.83 0.99 0.90
Moving objects 287 271 28 16 0.94 0.91 0.92
Object with some sides 150 115 20 35 0.77 0.85 0.81
Brightness changing 150 112 30 38 0.75 0.79 0.77

By examining the results of Table 5, we note that:

The obtained values show that the proposed approach based on background subtraction allows the detection of objects with certain geometric shapes (circular or quadrilateral).

Despite the false positives and false negatives, F1 remains good and close to 1.

False positives appear due to occlusion by other objects, shadows, or brightness changes.

Table 6. Values of the Recall, Precision and F1-Measure Obtained With the Approach Based on Outlines Detection.

Scenes Number of appearances of objects sought TP FP FN R P F1
One moving object 150 120 9 30 0.80 0.93 0.86
Objects within other objects 350 290 20 60 0.83 0.93 0.88
Lots of selected contours 150 116 140 34 0.77 0.45 0.57
Object with some sides (e.g. cube) 150 60 35 90 0.40 0.63 0.49

By examining the results of Table 6, we note that:

The obtained values show that the proposed approach based on outline detection in captured images allows the detection of objects with a specific geometric shape (circular or quadrilateral). Its effect is especially apparent in the detection of objects that lie within other objects.

Precision is better in the first two cases because few false positives are detected (not many false contours).

In the third case, many contours are detected; the measurement values (recall, precision and F1 score) are low because of the resulting false positives.

Also, for objects with several sides (e.g. a cube), the obtained measurement values are low, which shows that the detection of these objects can be degraded by the non-detection of their contours during movement.

5. Conclusion

This work deals with the field of video surveillance systems. These systems can be used in many application areas: security of premises, detection of accidents and fires, robotics, object recognition, etc. Video is the medium processed in such systems. Among the most important steps in video surveillance systems is motion detection. This step involves the detection of moving objects in video sequences captured by the surveillance camera. Motion detection is among the most studied problems in the field of video analysis, and many research works focus on it.

Our work focuses on the problem of detecting moving objects with a specific geometric shape, in real time. As part of this work, we have proposed two approaches:

The first one relies on two basic steps: modeling the background to build the image of the scene (without moving objects) and subtracting the background from the foreground image, which yields the moving objects.

The second approach operates directly on captured images. The edge detection step is applied to extract only the edges of objects having the required shapes (circular or quadrilateral).

Each of the two approaches has some advantages and limitations.

With the aim of making our system a real-time one, we parallelized some modules of the system (in both proposed approaches).

A set of experiments was conducted to assess the performance of the proposed approaches in terms of detection of objects with common shapes (speed limit of objects, recall and precision). Our evaluations have shown that the proposed approaches are able to detect moving objects with the sought shapes despite the noted limitations.

According to our experiments, we noticed that our system has limits on object detection:

The problem of intersection of objects, which produces new shapes and complicates the detection of moving objects;

In the second approach, there is a problem in detecting moving 3D objects that have multiple faces that produce multiple crossing outlines;

Shadows affect the detection of the contours of objects;

Our system cannot detect moving objects that move at a speed greater than 1.5 m/sec for the first approach and 1.19 m/sec for the second approach.


References

  1. Ait Fares, W., Détection et suivi d'objets par vision fondés sur segmentation par contour actif basé région. 2013, Université Paul Sabatier - Toulouse III.
  2. K. Amaleswarao, G. Vijayadeep, and U. Shivaji, Improved Background Matching Framework for Motion Detection. International Journal of Computer Trends and Technology (IJCTT), 2013. 4 (8): p. 2873-2877.
  3. T. Deepika and P. S. Babu, Motion Detection In Real-Time Video Surveillance With Movement Frame Capture And Auto Record. International Journal of Innovative Research in Science, Engineering and Technology (IJIRSET), 2014. 3 (1): p. 146-149.
  4. Singh, B., et al. Motion detection for video surveillance. in Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on. 2014.
  5. Chen, B. H. and S. C. Huang, An Advanced Moving Object Detection Algorithm for Automatic Traffic Monitoring in Real-World Limited Bandwidth Networks. IEEE Transactions on Multimedia, 2014. 16 (3): p. 837-847.
  6. Brutzer, S., et al. Evaluation of background subtraction techniques for video surveillance. in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. 2011.
  7. Bari, N., N. Kamble, and P. Tamhankar, Android based object recognition and motion detection to aid visually impaired. International Journal of Advances in Computer Science and Technology, 2014. 3 (10): p. 462-466.
  8. Seo, J. W. and S. D. Kim, Recursive On-Line and Its Application to Long-Term Background Subtraction. IEEE Transactions on Multimedia, 2014. 16 (8): p. 2333-2344.
  9. Yun, K., et al. Motion Interaction Field for Accident Detection in Traffic Surveillance Video. in 22nd International Conference on Pattern Recognition (ICPR). 2014. Stockholm, Sweden.
  10. Foggia, P., A. Saggese, and M. Vento, Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts Based on Color, Shape, and Motion. IEEE Transactions on Circuits and Systems for Video Technology, 2015. 25 (9): p. 1545-1556.
  11. Nicolas, V., Suivi d'objets en mouvement dans une séquence vidéo. 2007, Centre universitaire des Saints-Pères.
  12. Boudjemma, A., Estimation du mouvement dans une séquence d'images par approche probabiliste. 2011, University of Mouloud MAMMERI, Tizi-Ouzou.
  13. BOUIROUGA, H., Reconnaissance des scènes vidéo pour adulte. 2012, Université Mohammed V, Agdal, Morocco.
  14. Balakrishnan, An improved motion detection and tracking of active blob for video surveillance, in Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT). 2013: Tiruchengode, India. p. 1-7.
  15. Benezeth, Y., et al. Review and evaluation of commonly-implemented background subtraction algorithms. in International Conference on Pattern Recognition. 2008. Tampa, United States: IEEE.
  16. RONSE, C. Opérations morphologiques de base: dilatation, érosion, ouverture et fermeture binaires. 2013 [cited 2016 9th of June 2016]; Available from: https://dpt-info.u-strasbg.fr/~cronse/TIDOC/MM/deof.html.
  17. Canny, J., A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986. 8 (6): p. 679-698.
  18. Chandana, S. Real time video surveillance system using motion detection. in 2011 Annual IEEE India Conference. 2011.
  19. Hu, M.-K., Visual pattern recognition by moment invariants. IEEE Transactions on Information Theory, 1962. 8 (2): p. 179-187.
