Open Access Paper
28 December 2022
Monitoring system based on virtual reality technology and video processing technology
Qinqin Liu, Changyong Zhu, Haifei Zhang
Proceedings Volume 12506, Third International Conference on Computer Science and Communication Technology (ICCSCT 2022); 1250659 (2022) https://doi.org/10.1117/12.2662655
Event: International Conference on Computer Science and Communication Technology (ICCSCT 2022), 2022, Beijing, China
Abstract
With the rapid development of society, video surveillance systems are used in ever more aspects of life, but most current systems use a PC as the platform and record continuously from cameras. This consumes considerable power, offers poor portability, and fills storage devices with enormous amounts of data; searching this mass of redundant footage for key information afterwards is extremely tedious. Based on virtual reality technology and video processing technology, this paper studies the design of a monitoring system and tests and analyzes the designed embedded system. Experiments show that all functional modules of the system work normally and that, compared with ordinary video surveillance systems, the output frame rate is good, reaching 25 FPS.

1.

INTRODUCTION

With the progress of the times and the improvement of living standards, people pay more and more attention to safety in life and work. The most widely used and most representative security system is the video surveillance system. Compared with other security methods, video surveillance is intuitive, comprehensive, and allows recorded footage to be reviewed later, so it is widely used in many fields. As virtual reality technology gradually matures, the brand-new immersive, interactive video experience it offers is becoming increasingly popular. Combining virtual reality technology with video surveillance gives users a new interactive way of viewing surveillance video, allowing them to grasp the situation in the monitored area more intuitively and accurately, with an immersive sense of presence [1, 2].

In the design research of monitoring systems, Aydemir et al. proposed a home quarantine violation monitoring solution based on Internet of Things (IoT) and cloud computing technology [3]. A deep learning convolutional neural network is trained in the cloud for face recognition, and the monitoring IoT node expects individuals to scan their faces regularly to verify that they have not left quarantine. Abdillah et al. proposed an advanced wide-area monitoring design using a machine learning approach called multi-output least squares support vector machines [4]. The kernel methods applied in multi-output least squares support vector machines provide higher accuracy and flexibility of the prediction results, and the intensive computation time can also be reduced.

Based on virtual reality technology and video processing technology, this paper studies the design of a monitoring system and runs a motion detection and marker tracking algorithm based on OpenCV. The system can drive a universal USB camera, monitor the target area, store video data only when an object moves, and mark and track the object's outline in the video. In this way, besides addressing power consumption and portability, the wear, occupation, and processing complexity of the storage device are also significantly reduced.

2.

DESIGN AND RESEARCH OF MONITORING SYSTEM

2.1

Design indicators

In order to realize the core functions of the intelligent video surveillance system based on virtual reality technology, this paper proposes the following functional indicators and performance indicators after investigation:

Functional indicators:

  • (1) The local monitoring client can collect VR panoramic monitoring video, real-time monitoring, and manage monitoring video, and can push the collected monitoring video to the streaming media server;

  • (2) The local monitoring client can detect moving targets in the monitoring area, and send an alarm by email when a suspicious target is found;

  • (3) The remote monitoring client can obtain the VR panoramic monitoring video from the streaming media server through a specified URL and play it normally. It supports split-screen playback, user motion tracking, and touch-screen interaction [5, 6].

Performance indicators:

  • (1) The delay of remote monitoring shall not exceed 2 seconds;

  • (2) The success rate of moving target detection is not less than 90%.

2.2

Real-time video analysis function requirements

This module provides analytical data for the augmented reality dynamic tagging function by analyzing and identifying the main information contained in the video. Its main functions include obtaining real-time video streams from surveillance cameras, analyzing and identifying the data on the server side, and storing the data [7, 8].

  • (1) Capture real-time video stream

    The video stream capture function captures the monitoring video from the front-end monitoring equipment. Each surveillance camera is identified by its network IP address, and the captured video is transmitted to the server for processing.

  • (2) Server analysis and identification

    For the collected video stream, the TensorFlow framework and the R-CNN target detection algorithm are used to classify and identify its content, and the processing results are output; main information such as vehicles and pedestrians is marked with dynamic labels for users to view. Pedestrians who stay in place for more than 180 s are also marked, prompting the responsible police officers to pay attention to targets with suspicious behavior.

  • (3) Data storage

    A distributed database stores the captured video for playback, while a MySQL database stores the analysis results; these data can be used for statistical analysis and report generation.
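The 180 s dwell rule in item (2) above can be sketched as follows. `DwellTracker`, the string IDs, and the explicit timestamps are illustrative assumptions, not part of the paper; in the real system the IDs and visibility sets would come from the R-CNN detector and tracker:

```python
import time

DWELL_LIMIT_S = 180  # requirement: flag pedestrians staying > 180 s

class DwellTracker:
    """Hypothetical helper: tracks how long each pedestrian (identified by
    a tracker-assigned ID, here supplied by the caller) has been visible,
    and flags those exceeding the dwell limit."""

    def __init__(self, limit_s=DWELL_LIMIT_S):
        self.limit_s = limit_s
        self.first_seen = {}  # pedestrian id -> first-seen timestamp (s)

    def update(self, visible_ids, now=None):
        now = time.time() if now is None else now
        for pid in visible_ids:                 # register new pedestrians
            self.first_seen.setdefault(pid, now)
        for pid in list(self.first_seen):       # forget those who left
            if pid not in visible_ids:
                del self.first_seen[pid]
        return [pid for pid, t0 in self.first_seen.items()
                if now - t0 > self.limit_s]     # IDs to mark as suspicious

tracker = DwellTracker()
tracker.update({"p1", "p2"}, now=0)
print(tracker.update({"p1"}, now=200))  # ['p1']: p2 left, p1 dwelt 200 s
```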

2.3

PC augmented reality presentation requirements

This module is the core function of the augmented reality intelligent video surveillance system and is closely related to security protection. Since video stream playback requires an SDK provided by a third party, and this SDK does not support browser calls, the augmented reality function module on the PC side comprises a dedicated program and web applications. The dedicated program mainly covers the augmented-reality-related functions and real-time video analysis, while the web applications mainly include the design and development of related functions such as the sky map. The specific requirements of the web application are described in detail in the functional requirements of the map service and other sections. The functional requirements of the client are as follows:

  • (1) Monitoring video playback control

    This function is realized by the dedicated program, which needs to call up the real-time monitoring video of the various monitoring devices in the system and control starting and stopping playback. Users can view real-time monitoring according to their needs, and can enable real-time video analysis to make the information in the video clear at a glance [9, 10].

  • (2) Label function

    This function mainly displays the corresponding preset information while the video is playing. The results of real-time video analysis can be displayed in the video as dynamic tags. The preset information can be managed uniformly through the label management function in the system security control module.

  • (3) Picture-in-picture function

    This function includes two parts. First, when the monitoring video on the main interface of the dedicated program is playing, we can view the real-time video of the selected camera in the form of picture-in-picture by clicking the label of the camera on the screen. There is also a picture-in-picture area in the lower right corner of the main interface, which is used to display a flat map near the monitoring device. This combination of the whole and the parts makes the layering of the picture more vivid, and presents the information required by the user in a three-dimensional manner.

  • (4) Interactive 3D function

    This function is mainly for viewing the 3D model of a building by clicking on the building's label on the screen while the video is playing. The 3D model has a strong sense of space: users can enter the interior of the model from a first-person perspective and intuitively observe the internal structure of the building. By clicking on a camera inside the building, its surveillance video can be called up for viewing. Viewing video in this form allows users to conduct investigations more easily in combination with the on-site environment. Users can also view the floor plan of each floor through the floor menu and view the corresponding monitoring through the device icons in the floor plan [11, 12].

2.4

Algorithm research

  • (1) Thresholding

    Thresholding is performed on a single-channel array using the basic thresholding function. When the binary thresholding type is used, equation (1) is satisfied:

    dst(x, y) = maxval, if src(x, y) > thresh;  dst(x, y) = 0, otherwise    (1)

    In the formula, src(x, y) is the Mat-type array before thresholding, dst(x, y) is the Mat-type array after thresholding, thresh is the threshold, and maxval is the maximum value assigned to a pixel.

  • (2) Erosion and dilation

    If the target area is A, the kernel (structuring element) is B, and the processed area is dst(x, y), then erosion can be expressed as:

    dst = A ⊖ B = { z | B_z ⊆ A }    (2)

    The mathematical expression for dilation is:

    dst = A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }    (3)

  • (3) Noise elimination

    A Gaussian filter is generally used for convolution-based noise reduction, which removes blur artifacts in the picture. The two-dimensional Gaussian distribution is described by equation (4):

    G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))    (4)

    The standard deviation σ is set to 0.1.
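A minimal pure-NumPy sketch of steps (1)-(3): binary thresholding per equation (1), erosion followed by dilation with a square structuring element, and the Gaussian of equation (4) evaluated with σ = 0.1. The 3×3 kernel size and the synthetic test frame are assumptions for illustration; the actual system uses the corresponding OpenCV functions:

```python
import math
import numpy as np

def threshold_binary(src, thresh, maxval):
    """Eq. (1): dst(x, y) = maxval where src(x, y) > thresh, else 0."""
    return np.where(src > thresh, maxval, 0).astype(np.uint8)

def morph(img, k, op):
    """Erosion (op=np.min) or dilation (op=np.max) with a k x k square
    structuring element, via a sliding min/max filter."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    return np.array([[op(p[i:i + k, j:j + k]) for j in range(w)]
                     for i in range(h)], dtype=np.uint8)

frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:6, 2:6] = 200   # bright moving object
frame[0, 0] = 255       # isolated noise pixel

binary = threshold_binary(frame, 127, 255)
opened = morph(morph(binary, 3, np.min), 3, np.max)  # erode, then dilate

# Eq. (4) at the origin with sigma = 0.1, as used for Gaussian smoothing.
g00 = 1.0 / (2.0 * math.pi * 0.1 ** 2)

print(binary[0, 0], opened[0, 0])  # 255 0: noise passes threshold, not erosion
```

Erosion removes the isolated noise pixel while dilation restores the extent of the genuine object, which is why the two are applied in that order.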

3.

EXPERIMENTAL RESEARCH ON MONITORING SYSTEM

3.1

Overall scheme design of the system

On the basis of in-depth study of embedded development and visual image processing technology, and after sufficient market research, a video surveillance system based on virtual reality technology and video processing technology is designed.

The system monitors the target area in real time, stores video data only when objects move or foreign objects intrude into the target area, and marks and tracks the moving objects in real time in the video. According to the analysis of the above requirements, the system should include the following functions:

  • (1) The system can drive a universal UVC USB camera as the video capture device and use a USB storage device as the local video storage device; both can be automatically identified, mounted, called, and hot-plugged during the entire working process of the system.

  • (2) The system monitors the target area in real time by running the OpenCV motion detection algorithm; it stores video data if and only if an object moves or a foreign object intrudes, otherwise discarding it, and overlays the current time on each stored video frame.

  • (3) In the acquired video data, moving objects are marked and tracked in real time, and the video is stored in segments at a specified time interval to keep recordings distinguishable.

  • (4) During the operation of the system, an external LCD touch screen device can be used for system control and video output.

  • (5) The development board can be connected with a PC to perform functions such as status printing and code update.
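The store-only-on-motion behavior of items (2) and (3) can be sketched with simple frame differencing. The thresholds and the synthetic frames are illustrative assumptions, not values from the paper, and plain NumPy stands in for the OpenCV calls to keep the sketch self-contained:

```python
import numpy as np

DIFF_THRESH = 30     # assumed: minimum per-pixel intensity change
MOTION_PIXELS = 50   # assumed: minimum changed pixels to count as motion

def has_motion(prev, cur):
    """Frame differencing: motion iff enough pixels changed noticeably."""
    diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
    return int((diff > DIFF_THRESH).sum()) > MOTION_PIXELS

def monitor(frames, store):
    """Store a frame only when it differs from its predecessor; otherwise
    discard it (the store-only-on-motion rule of items (2) and (3))."""
    prev = None
    for frame in frames:
        if prev is not None and has_motion(prev, frame):
            store(frame)  # real system: overlay timestamp, VideoWriter.write()
        prev = frame

static = np.zeros((60, 80), dtype=np.uint8)   # empty scene
moving = static.copy()
moving[10:30, 10:30] = 255                    # intruding object

stored = []
monitor([static, static, moving], stored.append)
print(len(stored))  # 1: only the frame containing motion is kept
```

Discarding unchanged frames is what yields the storage savings claimed in the introduction: a static scene produces no stored data at all.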

3.2

Function introduction

According to the hardware block diagram combined with the system requirement function analysis, the main module functions and data interaction between modules are as follows:

ARM core processor: the core of the system. Based on the embedded ARM architecture, it maintains the stable operation of the entire system, is the prerequisite for all software operation and the data processing center, and serves as the operating platform for the OpenCV motion detection and monitoring algorithms. It processes the video data in real time, passes it to the touch screen module for display, and uses the storage device module for video storage.

Camera: the video capture device. In this design, a universal driver-free camera with a USB interface is used as the image capture module; it supports hot swapping, monitors the target area in real time, and transmits the collected video data to the core processor through the V4L2 driver interface.

Touch screen display: A screen interaction device that receives video data processed by the core processor and displays it through an external HDMI/RGB/LVDS display. At the same time, it can send instructions to the core processor through the touch screen to control the system running state.

Storage device: the video storage and reading device. USB flash drives, hard disks, and other devices with a USB interface receive video data from the core processor and store it in segments at a specified time interval, which is convenient for later review.

PC: The host computer can communicate with the development board system through the onboard console serial port, which is used to debug the project, update the code and print the running status of the system.

3.3

Workflow

According to the above hardware architecture and module functions, fully considering the stability of the system and the independent interaction between modules, the working flow chart of the system is designed as shown in Figure 1.

Figure 1.

System work flow chart.


After the system starts, it reads the Linux kernel image and builds the overall framework according to the kernel configuration; after the kernel starts, it mounts the root file system and registers peripheral drivers such as V4L2 and the graphics driver. Once the whole system is running normally, the Qt application is executed: in the Qt program, the compiled third-party libraries are called to read the camera data in real time, and the designed OpenCV motion detection and tracking algorithm is run. After processing, moving objects in each video frame are identified and marked, then output to the screen display and saved to an external USB storage device. Throughout this process, the user can adjust the operating state of the system (start, pause, exit) at any time through the external touch display.

4.

EXPERIMENT ANALYSIS OF MONITORING SYSTEM

4.1

Video frame rate

The video frame rate is usually expressed in FPS (frames per second). Choosing the frame rate is important: frame rates on the market are generally between 24 and 30 frames per second. Because the limit of human eye recognition is about 24 frames/second, a rate below 24 frames/second makes the picture appear to stutter; but higher is not always better, because once the frame rate exceeds 30 frames/second the video files become too large, wasting storage space.

In this design, the frame rate of the stored video is specified in the VideoWriter class constructor, and the video sampling frame rate of the system is determined by the internal timer timer1, so when setting the video frame rate, these two parameters must be adjusted together. For an intuitive presentation, the set frame rate (FPS) is taken as the abscissa and the actual output frame rate (FPS) as the ordinate, and the relationship between them is drawn as a line graph. The specific data are shown in Table 1.

Table 1.

The relationship between the set frame rate and the actual output frame rate.

Set frame rate (FPS)       10    11    12    13    14    15    16    17    18
Actual output rate (FPS)   10    11    11.2  13.5  14    15    16    17    18.5

Set frame rate (FPS)       21    22    23    24    25    26    27    28    29
Actual output rate (FPS)   20.5  22    22.8  23.8  25    24.8  24.5  24.3  24.1

As can be seen from Figure 2, when the set frame rate is below 25 FPS, the actual output frame rate increases with the set frame rate. When the set frame rate exceeds 25 FPS, because the system is embedded and the algorithm has a certain computational complexity, the system may not process frames in time, so the frame rate is capped: the actual output rate stays the same or even decreases slightly. In summary, when using VideoWriter to set the frame rate, the parameter should be set to 25; correspondingly, the value of the internal timer timer1 can be set between 32 ms and 40 ms.
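The 32-40 ms timer range follows directly from the sampling relation period = 1000 / fps, assuming timer1 fires once per captured frame:

```python
def timer_period_ms(fps):
    """Period of a timer that fires once per captured frame."""
    return 1000.0 / fps

# 25 FPS corresponds to a 40 ms timer; a 32 ms timer samples at 31.25 FPS,
# giving a little capture headroom above the 25 FPS output target.
print(timer_period_ms(25))   # 40.0
print(1000.0 / 32)           # 31.25
```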

Figure 2.

Frame rate test line chart.


4.2

Performance optimization

Through the overall performance test of the augmented reality intelligent video surveillance system, the performance of the system can meet the basic performance requirements, but there are still parts that need to be optimized and improved.

For the playback of the surveillance video RTSP stream, the real-time analysis function, and the web application, the comparison results after performance optimization are shown in Table 2.

Table 2.

Comparison table before and after web application optimization.

No.  Feature                          Time before optimization (s)  Time after optimization (s)  Performance improvement rate (%)
1    Open the list                    8.12                          3.01                         170
2    Open permissions, user list      3.4                           1.1                          210
3    Level 1-16 map loading           1.6                           0.7                          129
4    Level 16 and above map loading   9.2                           1.5                          513

Average performance improvement: 256% (average of the rates above)

As can be seen from Figure 3, the performance improvement rates after optimization are 170%, 210%, 129% and 513% respectively, all of which have been greatly improved.
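The rates in Table 2 are consistent with the relative speed-up (t_before − t_after) / t_after, up to rounding of the published figures (209.1% vs the table's 210%, and an average of about 255% vs the reported 256%):

```python
rows = [
    ("Open the list", 8.12, 3.01),
    ("Open permissions, user list", 3.4, 1.1),
    ("Level 1-16 map loading", 1.6, 0.7),
    ("Level 16 and above map loading", 9.2, 1.5),
]

# Improvement rate = (time_before - time_after) / time_after, in percent.
rates = [round(100 * (before - after) / after) for _, before, after in rows]
print(rates)                           # [170, 209, 129, 513]
print(round(sum(rates) / len(rates)))  # 255, close to the reported 256%
```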

Figure 3.

Web application optimization analysis.


5.

CONCLUSIONS

In a society increasingly pursuing the intelligence of all things, research in computer vision, the "eye" of objects, has grown rapidly and found applications and achievements in more and more fields. Through full research and analysis of the market, this paper designs a video surveillance system based on virtual reality technology and video processing technology. Compared with traditional video surveillance systems, this design uses an embedded processor as the system platform, which overcomes the shortcomings of current PC-based surveillance systems such as high power consumption, large size, and poor portability. The intelligent video surveillance system based on virtual reality technology implemented in this paper is also more intelligent, intuitive, and accurate, and has no monitoring blind spots. At the same time, it gives users a new immersive interactive experience when viewing surveillance video, which has great practical application value.

ACKNOWLEDGEMENTS

Source: Young and Middle-aged Backbone Teacher Training Program of Nantong Institute of Technology; Ideological and Political Course and Curriculum Ideological and Political Special Course Construction Project Digital Video and Audio Processing (2020JKS034).

REFERENCES

[1] 

Atta, R. M., “Cost-effective vital signs monitoring system for COVID-19 patients in smart hospital,” Health and Technology, 12 (1), 239 –253 (2022). https://doi.org/10.1007/s12553-021-00621-y Google Scholar

[2] 

Zermane, H. and Drardja, A., “Development of an efficient cement production monitoring system based on the improved random forest algorithm,” The International Journal of Advanced Manufacturing Technology, 120 (3-4), 1853 –1866 (2022). https://doi.org/10.1007/s00170-022-08884-z Google Scholar

[3] 

Aydemir, F. and Cilkaya, E., “A system design for monitoring the violation of home quarantine,” IEEE Consumer Electronics Magazine, (99), 1 –1 (2021). Google Scholar

[4] 

Abdillah, M. and Setiadi, H., “Advanced wide-area monitoring system design for electrical power system,” International Review on Modelling and Simulations (IREMOS), 13 (6), 362 –372 (2020). https://doi.org/10.15866/iremos.v13i6.17734 Google Scholar

[5] 

Stewart, E., et al., “Data-driven approach for monitoring, protection, and control of distribution system assets using micro-PMU technology,” CIRED - Open Access Proceedings Journal, 2017 (1), 1011 –1014 (2017). https://doi.org/10.1049/oap-cired.2017.0416 Google Scholar

[6] 

Brando, M. P., Sa-Couto, P., Gomes, G., et al., “Description of an integrated e-health monitoring system in a Portuguese higher education institution: The e.cuidHaMUstm program,” Global Health Promotion, 29 (1), 65 –73 (2021). https://doi.org/10.1177/1757975920984222 Google Scholar

[7] 

Mach, V., Adamek, M., Sevcik, J., et al., “Design of an internet of things based real-time monitoring system for retired patients,” Bulletin of Electrical Engineering and Informatics, 10 (3), 1648 –1657 (2021). https://doi.org/10.11591/eei.v10i3 Google Scholar

[8] 

Marletta, V., “Design of an FBG based water leakage monitoring system, case of study: An FBG pressure sensor,” IEEE Instrumentation and Measurement Magazine, 24 (5), 75 –82 (2021). https://doi.org/10.1109/MIM.2021.9491010 Google Scholar

[9] 

Nugraha, A. T. and Priyambodo, D., “Design of a monitoring system for hydroganics based on Arduino Uno R3 to realize sustainable development goal’s number 2 Zero Hunger,” Journal of Electronics Electromedical Engineering and Medical Informatics, 3 (1), 50 –56 (2021). https://doi.org/10.35882/jeeemi.v3i1.8 Google Scholar

[10] 

Khan, R., Yousaf, S., Haseeb, A., et al., “Exploring a design of landslide monitoring system,” Complexity, 2021 (2), 1 –13 (2021). Google Scholar

[11] 

Tiberti, R., Caroni, R., Cannata, M., et al., “Automated high frequency monitoring of Lake Maggiore through in situ sensors: System design, field test and data quality control,” Journal of Limnology, 80 (2), 1 –19 (2021). https://doi.org/10.4081/jlimnol.2021.2011 Google Scholar

[12] 

Hassan, J. A. and Jasim, B. H., “Design and implementation of internet of things-based electrical monitoring system,” Bulletin of Electrical Engineering and Informatics, 10 (6), 3052 –3063 (2021). https://doi.org/10.11591/eei.v10i6 Google Scholar