Faculty of Computing, Engineering
and Technology



  Video Surveillance System

                Name of the Authors

             Mohammed Mohiuddin – mw 009216
             Sheikh Faiyaz Moorsalin – mw00914
             Md. Nadeem Chowdhury – cw009217
             Sohana Mahmud – mw009215




    Award Title: Individual modules UG framework
         FCET-ERASMUS overseas students



                  Name of the Supervisor

                     Sam O. Wane
                       October 2010
Abstract




Over the last few years, the use of video surveillance systems has increased. The
popularity of such systems has led to extensive research into introducing new
features. The most important feature of a video surveillance system is recording
video footage of the surveillance area. This is a project on a video surveillance
system that can detect the movement of an object, record video footage of that object
and transmit the recorded footage to a server over an Ad-Hoc network. In this
project, existing video surveillance systems are studied, the components required to
implement such a system are discussed, and suitable components for this video
surveillance system are selected. The features of the selected components are
described, along with the hardware implementation of the system. Software for
recording and transmitting video footage is developed and discussed. Results from
pilot work to evaluate the system performance are included. Based on the results of
the pilot work, methods of increasing system performance are proposed and the
scope of future work is described.




Acknowledgement



First and foremost, all praise and gratitude to Almighty Allah for giving us the
strength, patience and courage to complete this project.

We would like to convey our heartfelt gratitude to our supervisor, Mr. Sam O. Wane,
for his valuable guidance, inspiration and friendly attitude. We are thankful to him
for offering us this unique project, which has great value to industry. He also
promoted our project commercially to industry as well as to the United Nations
(UN) for remote disaster management. All of this was possible because of his vision
for this project and its potential. We are much indebted for all the opportunities he
has provided us during the completion of our group project.

We would also like to thank Professor Moofik Al-Tai for introducing this group
project module especially for Erasmus undergraduate students. The module creates
an opportunity to work in a group, gaining practical job experience, and also expands
the scope for sharing knowledge with each other.

Our special thanks go to the lab technicians, Mr. Dave and Mr. Paul, for their constant
technical support during the risk, safety and health assessment and also during the
project's implementation.

We would like to thank the members of the Bangladeshi community at Staffordshire
University, who always helped us to feel at home in a foreign country.

We would also like to express our deepest gratitude and love to our respective
families, especially our parents.

Last but not least, we are very grateful to the Staffordshire University authorities
for offering us this great opportunity to study at a British university and to
experience British customs and culture.



Table of Contents
Abstract .................................................................................................................................... i
Acknowledgement ................................................................................................................... ii
1 Introduction .......................................................................................................................... 1
   1.1 Background .................................................................................................................... 1
   1.2 Aims and Objectives ...................................................................................................... 2
       1.2.1 Objectives ............................................................................................................... 2
       1.2.2 Deliverables ............................................................................................................ 2
   1.3 Distribution of Work ...................................................................................................... 3
   1.4 Project Planning ............................................................................................................. 6
2 Literature Review ................................................................................................................. 9
   2.1 Video Surveillance System ............................................................................................ 9
       2.1.1 Introduction ........................................................................................................... 9
       2.1.2 Classification of video surveillance system .......................................................... 10
   2.2 Transmission of Video Footage ................................................................................... 11
       2.2.1 Wireless Networking ............................................................................................ 11
       2.2.2 Ad-hoc Network.................................................................................................... 12
       2.2.3 Bluetooth Transmission Technology .................................................................... 17
       2.2.4 Protocol issues ...................................................................................................... 18
       2.2.5 TCP/IP model ........................................................................................................ 19
       2.2.6 Live streaming....................................................................................................... 23
       2.2.7 Professional Video Surveillance Software ............................................................ 23
       2.2.8 Encryption............................................................................................................. 27
       2.2.9 Video Encryption .................................................................................................. 28
3 System Concept, Design and Implementation ................................................................... 30
   3.1 Requirement analysis .................................................................................................. 30
   3.2 Selection of Hardware ................................................................................................. 31
       3.2.1 Selection of sensor ............................................................................................... 31
       3.2.2 Selection of Video Camera ................................................................................... 32
       3.2.3 Selection of Motor ................................................................................................ 32
       3.2.4 Selection of PC ...................................................................................................... 32
       3.2.5 Selection of Microcontroller ................................................................................. 33
       3.2.6 Selection of Power Source .................................................................................... 33


3.3 Software design ........................................................................................................... 33
       3.3.1 Video Capturing Software Requirement .............................................................. 34
   3.4 Component selection and Features ............................................................................ 35
       3.4.1 Hardware Component Selection .......................................................................... 35
           3.4.1.1 PIR Sensor ...................................................................................................... 35
           3.4.1.2 Mbed Rapid Prototyping Board ..................................................................... 38
           3.4.1.3 Servo Motor ................................................................................................... 40
           3.4.1.4 FitPC2 ............................................................................................................. 42
           3.4.1.5 Logitech Webcam C120 ................................................................................. 44
           3.4.1.6 Power Supply ................................................................................................. 44
       3.4.2 Software Selection ................................................................................................ 45
           3.4.2.1 .Net Framework 4: ......................................................................................... 45
           3.4.2.2 Microsoft C# .................................................................................................. 46
           3.4.2.3 Direct Show API ............................................................................................. 46
           3.4.2.4 Video Capture Devices................................................................................... 47
   3.5 Cost Analysis ................................................................................................................ 47
   3.6 System Implementation .............................................................................................. 48
       3.6.1 Hardware Implementation .................................................................................... 48
       3.6.2 Software Development ......................................................................................... 56
           3.6.2.1 Video Capturing ............................................................................................. 56
           3.6.2.2 Client Server Transmission Software ............................................................. 63
       3.6.3 System Integration ............................................................................................... 83
4 Pilot Work ........................................................................................................................... 93
   4.1 The Nature of the work ............................................................................................... 93
   4.2 The objectives of the Pilot Work ................................................................................. 93
   4.3 Selection of Places ....................................................................................................... 93
       4.3.1 Pilot Study 1 .......................................................................................................... 94
       4.3.2 Pilot Study 2 .......................................................................................................... 95
       4.3.3 Pilot Study 3 .......................................................................................................... 95
   4.4 Results and Discussion................................................................................................. 96
5 Critical Analysis ................................................................................................................. 102
6 Parallel Development ....................................................................................................... 104
   6.1 Laser Range Finder URG-04LX-UG01 ......................................................................... 104
   6.2 Hokuyo URG-04LX LRF analysis ................................................................................. 106

6.3 Developing Program for Hokuyo: .............................................................................. 107
7 Future Work...................................................................................................................... 118
8 Conclusion ........................................................................................................................ 120
9 References ........................................................................................................................ 122
10 Appendices ..................................................................................................................... 125




List of Figures

Figure 1.1: Project Gantt chart part 1 ......................................................................................7
Figure 1.2: Project Gantt chart part 2 ......................................................................................8
Figure 2.1: Ad-hoc Network ...................................................................................................12
Figure 2.2: A multi-node ad-hoc network ..............................................................................13
Figure 2.3: The hidden station problem .................................................................................15
Figure 2.4: Eye line video software ........................................................................................24
Figure 2.5: Find and Play Recordings Window .......................................................................24
Figure 2.6: Video Recordings ..................................................................................................25
Figure 2.7: VideoLan Streaming Solution ...............................................................................26
Figure 3.1: Block diagram of video surveillance system ........................................................31
Figure 3.2: General system architecture of PIR sensor ..........................................................35
Figure 3.3: Parallax-PIR sensor ...............................................................................................36
Figure 3.4: Jumper Pin (H and L) Position ..............................................................................36
Figure 3.5: Waveform of PIR sensor output for retrigger mode of operation .......................37
Figure 3.6: Waveform of PIR sensor output for normal mode of operation .........................37
Figure 3.7: mbed Microcontroller ..........................................................................................39
Figure 3.8: Servo motor ..........................................................................................................40
Figure 3.9: Internal circuit of a servo motor ..........................................................................41
Figure 3.10: Some random pulse and its corresponding rotation of a servo shaft ................41
Figure 3.11: fitPC2 ..................................................................................................................43
Figure 3.12: Logitech C120 webcam ......................................................................................44
Figure 3.13: Operation of CLR in .Net Framework .................................................................45
Figure 3.14: Schematic diagram of input output test circuit .................................................49
Figure 3.15: Microcontroller and PIR sensor test circuit schematic diagram ........................50
Figure 3.16: Experimental Figure ...........................................................................................50
Figure 3.17: Three main pulse and their corresponding rotation ..........................................52
Figure 3.18: Connection diagram for serial communication .................................................53
Figure 3.19: TeraTerm terminal window ...............................................................................54
Figure 3.20: Schematic Diagram of power supply unit ..........................................................55
Figure 3.21: Power supply unit ..............................................................................................55
Figure 3.22: Software Diagram ..............................................................................................56
Figure 3.23: Video Surveillance System GUI ..........................................................................57
Figure 3.24: Communication in Local Network .....................................................................64

Figure 3.25: Communication through Internet ......................................................................65
Figure 3.26: Ad-hoc Communication between Two Systems ................................................65
Figure 3.27: Ad-hoc Communication using more nodes ........................................................66
Figure 3.28: Sending Video from Client to Server using Multi-node Ad-hoc Network ..........66
Figure 3.29: Log file's data .....................................................................................................69
Figure 3.30: Placing two flags on the frame edge ..................................................................83
Figure 3.31: Diagram to calculate camera coverage angle ....................................................84
Figure 3.32: Camera mounted on the servo shaft .................................................................85
Figure 3.33: 1st step of fixing rotation angle .........................................................................85
Figure 3.34: 2nd step of fixing rotation angle ........................................................................86
Figure 3.35: 1st attempt of reducing PIR coverage angle ......................................................87
Figure 3.36: Fresnel lens .........................................................................................................87
Figure 3.37: Solution of reducing PIR coverage angle ............................................................88
Figure 3.38: Calculating PIR cover length for any angle Ө .....................................................89
Figure 3.39: Identifying covered PIR angle .............................................................................90
Figure 3.40: Final connection diagram of the video surveillance system ..............................91
Figure 3.41: Camera and sensors mounted on the system ...................................................92
Figure 3.42: Stand alone system on a tripod .........................................................................92
Figure 4.1: Voltage and current of the system for continuous operation .............................97
Figure 4.2: Voltage and current of the system for discontinuous operation ........................97
Figure 4.3: Voltage and current of the system for idle condition .........................................97
Figure 4.4: Screen Shot of Recorded Video Footage in Over-Crowded Place .......................98
Figure 4.5: Screen Shot of Recorded Video Footage in Less-Crowded Place ........................98
Figure 6.1: Hokuyo URG-04LX Laser Rangefinder ................................................................104
Figure 6.2: Internal mechanism of Hokuyo LRF ...................................................................105
Figure 6.3: Range detection area of URG-04LX LRF .............................................................106
Figure 6.4: Range diagram of an empty room .....................................................................107
Figure 6.5: Range diagram of a room with 1 person ...........................................................107
Figure 6.6: Detection of movement in an empty room when a person passed ..................116
Figure 6.7: Detection of movement in busy room when people are frequently passing ...116




List of Tables



Table 3.1: Mode of operation of PIR sensor......................................................................... 37
Table 3.2: Price list of components...................................................................................... 47
Table 4.1: Data for Different Speed ......................................................................................99
Table 4.2: Data for Detection Range and Area of Each Sensor ...........................................100
Table 4.3: Experimental Data of Transmission Time...........................................................100
Table 4.3: Experimental Data of Transmission Time...........................................................101
Table 4.4: Experimental Data of Transmission Range.........................................................101
Table 4.5: Experimental Data of Robustness Test...............................................................101




Chapter 1


1 Introduction
1.1 Background


Over the last few years, due to globalization, major changes have occurred in
different sectors worldwide, such as business, security and health. One of the
sectors of worldwide concern is security. Because of the need to protect premises,
providing security has become one of the most important tasks, and video
surveillance systems were introduced for this purpose. A video surveillance system
is used to monitor the behaviour, activity or other information, generally of people,
in a specific area. The application of video surveillance is no longer limited to
providing security for an area; such systems are now implemented in other sectors,
for example in hospitals for monitoring patients, and in industry and process plants
to monitor the activity of the production line. Generally, a video surveillance system
consists of a video camera for capturing video footage and a monitor for viewing the
captured footage. Early models of such systems had some limitations, so research
has been conducted to improve them and more developed systems have arrived on
the market. Current systems available on the market offer different features such as
video capturing and recording.

A new feature introduced in this type of system over the last few years is the
transmission of video footage using wireless communication. This feature makes it
possible to place the system in almost any area, so it is now possible to monitor
places where a continuous human presence is not attainable.

It is still expensive to build this type of system, and to implement such a system
with full automation. This project aims to design and construct a standalone video
surveillance system which is capable of detecting the movement of an object,
recording video footage of that object and transmitting the footage to a server over a
wireless Ad-Hoc network.




1.2 Aims and Objectives


The aim of this project is to design and construct a video surveillance system which
can capture video footage of a home, office or any other premises. The system is able
to detect any object in motion within a particular area and to capture video footage
triggered by motion as it happens. The system is also able to transmit the recorded
video footage via a wireless Ad-Hoc network to a server. The system will be
standalone and will be able to function for one week unattended. It must therefore
be extremely robust and able to recover from any errors without intervention.

1.2.1 Objectives


The objectives of the project are:

                    Interfacing sensor with microcontroller for motion detection.
                    Position and speed control of a motor to achieve precise angle.
                    Interfacing microcontroller with a PC.
                    Developing software for capturing video footage.
                    Develop a client-server video streaming system that will transmit
                     the recently stored video and broadcast it through the network.
                     The client system will be able to transmit stored video and the
                     server system will be able to receive it.
                    A complete battery system and power supply unit to power up the
                     video surveillance system.
                    To construct a robust shell for this system.

1.2.2 Deliverables



To achieve the aim of the project, the deliverables are as follows:

              A program that tests the microcontroller I/O ports by turning on an LED
               connected to an output port when an input is given to the input port.




              A motion detection circuit with three motion detector sensors
               interfaced with the microcontroller to detect motion. Three LEDs
               will be connected to the output ports of the microcontroller and each
               LED will turn on when motion is detected by its associated sensor.
              A program that will send a few bytes of information from the
               microcontroller to the PC, where the information will be displayed.
              A program to scan ports for data and display the data.
              Software with a Graphical User Interface (GUI) which will consist of a
               start and a stop button for capturing video footage. It will save the
               video footage, with the date and time, in a specific format into a
               specific folder.

              A circuit with a motor and the microcontroller. The microcontroller
               will be able to control the motor to a precise angle.
              A program that will turn the motor shaft to a specific position and
               control the rotation angle precisely.
              A frame that will hold the sensors, motor and camera.
              Client-side software that will transmit recorded video footage.
              Server-side software that will receive the transmitted video and
               stream it to the local network.

1.3 Distribution of Work


This is a final year group project. The group consists of four members. The names
and IDs of the group members are given below:

   1. Mohammed Mohiuddin (MM)                    -     09009216
   2. Sheikh Faiyaz Moorsalin(SFM)               -     09009214
   3. Md Nadeem Chowdhury(MNC)                   -     09009217
   4. Sohana Mahmud(SM)                          -     09009215

Each member of the group is assigned individual objectives and deliverables in
order to fulfil the overall objectives and deliverables of this project. At the end of the
project, each member submitted a report on their work, and this group report has
been compiled from the individual reports of the members. The individual objectives
and deliverables within the project, and the contribution of each member to this
report, are given below, identified by their initials.

Mohammed Mohiuddin (MM):

Objectives
      Interfacing PIR (Passive Infrared) sensor with microcontroller for motion
       detection.
      Interfacing microcontroller with an embedded PC.




Deliverables:

             1. A program that tests the microcontroller I/O ports by turning on an LED
                connected to an output port when an input is given to the input port.
             2. A motion detection circuit with three motion detector sensors
                interfaced with the microcontroller to detect motion. Three LEDs
                will be connected to the output ports of the microcontroller and each
                LED will turn on when motion is detected by its associated sensor.
             3. A program that will send a few bytes of information from the
                microcontroller to the PC, where the information will be displayed.

Contribution in the Report:

In this report, the contributions of Mohammed Mohiuddin are the abstract,
introduction, literature review (video surveillance systems), system concept and
design, hardware requirement analysis, hardware selection criteria (sensor,
microcontroller, PC, webcam), selection of hardware components (PIR sensor, mbed
microcontroller, fitPC2, Logitech webcam C120), cost analysis, hardware
implementation (PIR sensor, fitPC2, mbed microcontroller, power supply unit),
hardware and software integration, pilot work, critical analysis (hardware part),
future work and conclusion.




Sheikh Faiyaz Moorsalin (SFM)
Objectives
      Developing software for capturing video footage.

Deliverables:

   1. A program to scan ports for data and display the data.
   2. Software with a Graphical User Interface (GUI) which will consist of a start
       and a stop button for capturing video. After the stop button is pressed, it will
       save the video footage, with the date and time, in a specific video
       format into a specific folder.

Contribution in Report:
In this report, the contributions of Sheikh Faiyaz Moorsalin are the software
requirement analysis, the software design for video capturing, and the future work on
software improvement.




Md Nadeem Chowdhury (MNC)
Objectives
      Interfacing between motor, motor controller and microcontroller.
      Position and speed control of a motor.
      Detecting Motion using Hokuyo URG Laser Range Finder.

Deliverables:

   1. A circuit with a motor, motor controller and microcontroller. The
       microcontroller will be able to control the motor via the motor controller.
   2. A program that will turn the motor shaft to a specific position and control the
       rotation angle precisely.
   3. A frame that will hold the sensors, motor and camera.
   4. A program that will scan for and detect movement using the Hokuyo URG
       Laser Range Finder.




Contribution in Report:


In this report, the contributions of Md Nadeem Chowdhury are the motor selection
(servo motor), the motor control method, the system implementation (servo motor),
the robust shell design, the cover design to reduce the PIR angle, the axis
equalization of the Logitech webcam and PIR sensor, and the future work on
improving the movement detection performance of the system using the Hokuyo
laser sensor.


Sohana Mahmud (SM)
Objective
      Develop a client-server video streaming system that will transmit the recently
       stored video and broadcast it through the network. The client system will be
       able to transmit stored video and the server system will be able to receive it.

Deliverables:

   1. Client-side software that will transmit recorded video using the Ad-Hoc
       network.
   2. Server-side software that will receive the video and stream it to the local
       network.

Contribution in Report:
In this report, the contributions of Sohana Mahmud are the literature review
(transmission of video footage), the software design for transmission, the critical
analysis (transmission software), and the future work on software improvement.



1.4 Project Planning


The Gantt chart of the project is given in Figures 1.1 and 1.2.




Figure 1.1: Project Gantt chart



Figure 1.2: Project Gantt chart



Chapter 2


2 Literature Review
2.1 Video Surveillance System
2.1.1 Introduction


Video surveillance was first used in 1960 to monitor crowds attracted to
the arrival of the Thai royal family in Trafalgar Square, London [1]. Early video
surveillance systems were entirely analog and known as CCTV (closed-circuit
television). A basic CCTV video surveillance system consists of a collection of
video cameras, usually mounted in fixed positions. The surveillance area of a CCTV
system depends on the field-of-view angle of the cameras. The captured video
footage is transmitted to a central location, where it is displayed on one or several
monitors or recorded to a storage device. In a CCTV video surveillance system, the
person in charge at the central location monitors the activities in the surveillance
area and decides whether there is an intruder or any ongoing activity that warrants a
response. With the invention of digital technology, these systems began to change in
the latter part of the 20th century. Present video surveillance systems are fully digital
and automated. Automated video surveillance introduces automatic object detection
into the surveillance. Such a system can generate an alarm or a message to let the
user know if there is an intruder in the surveillance area, thus reducing the burden on
the user. The system can be based on a PC or on embedded devices, and constitutes
a monitoring and multimedia management system.


With developments in the communications sector, such as wireless communication
and broadband, the way video footage is transmitted from a surveillance area has
changed. Many systems now use wireless communication technology for
transmission instead of cables.




2.1.2 Classification of video surveillance system


Video surveillance systems can be classified based on their movement detection
technique. Two types of technique are widely used for object detection in video
surveillance systems; these are:
   1. Video surveillance system using motion detection by sensors.
   2. Video surveillance system using motion detection by image processing.


1. Video surveillance system using motion detection by sensors:


This type of video surveillance system uses different types of sensor for object
detection. Some sensors that are widely used for detecting objects are passive
infrared sensors, ultrasonic sensors and microwave sensors. Among these, PIR
sensors are widely used in surveillance systems and automatic light switching
systems. In 2004, a PIR-based intruder detection system was designed at the
University of Malaya. The designed system can track occupants in a designated
area, switch on an alarm when an intrusion occurs, notify the client of the intrusion
and provide a real-time monitoring function using a personal computer over the
Internet [2]. The system used three different PIR sensors to detect movement. The
outputs from the PIR sensor modules are wired to a microcontroller, which acts as
the heart of the system and processes the sensor signals. The signal is then sent to a
PC using an FM transmitter and further analyzed by software, which sends an alarm
or message signal to the user. The implemented system is able to detect the presence
of a human in a protected area at a maximum distance of around 7 meters. In April
2006, Mitsubishi Electric Corporation proposed a video surveillance system design
with an Object-Aware Video Transcoder [3]. The proposed system not only stores
high-quality video data but also transmits the data over a limited-bandwidth
network. In 2002, a security system using motion detection was developed at the
College of Sciences, Massey University. The system used PIR sensors interfaced to
a microcontroller for motion detection [4]. Software was also developed to add
appropriate behaviour to the security platform. In 2008, Ying-Wen Bai and Hen
Teng of the Department of Electronic Engineering, Fu Jen Catholic University,
designed a home surveillance system. The system includes an ARM processor
together with a web camera and a PIR sensor. The system triggers the web camera in
the presence of an intruder in order to capture a snapshot and send it to a remote
server [5].


2. Video surveillance system using motion detection by image processing:


In this type of video surveillance system, movement is detected by comparing
successive images of the surveillance area. If two images are the same, no
movement is detected. When an intruder enters the surveillance area, the result of
the comparison reveals the intruder, and thus movement is detected. There are many
different algorithms available for detecting movement using image processing; a
simple frame-differencing example is sketched below.
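As an illustration of this comparison step, the short C# sketch below (an assumed example for illustration only, not code taken from this project) computes the per-pixel difference between two consecutive greyscale frames and reports motion when the fraction of changed pixels exceeds a threshold; the frame buffers, threshold values and class name are all hypothetical.

using System;

public static class FrameDifference
{
    // Returns true if the fraction of pixels whose grey-level difference
    // exceeds pixelThreshold is larger than motionFraction.
    // Both frames are assumed to be 8-bit greyscale images of equal size.
    public static bool DetectMotion(byte[] previousFrame, byte[] currentFrame,
                                    int pixelThreshold = 25, double motionFraction = 0.02)
    {
        if (previousFrame.Length != currentFrame.Length)
            throw new ArgumentException("Frames must have the same size.");

        int changedPixels = 0;
        for (int i = 0; i < currentFrame.Length; i++)
        {
            // Absolute grey-level change of this pixel between the two frames.
            if (Math.Abs(currentFrame[i] - previousFrame[i]) > pixelThreshold)
                changedPixels++;
        }

        double changedFraction = (double)changedPixels / currentFrame.Length;
        return changedFraction > motionFraction;
    }
}

In practice the thresholds would be tuned to the camera noise level and lighting conditions; more robust algorithms maintain a background model rather than comparing against a single previous frame.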


This project is related the work with the video surveillance system by movement
detection using senor. Thus this literature review helped to get an idea about the
existing video surveillance system.


2.2 Transmission of Video Footage


To build robust and appropriate software for the transmission of video footage via a
wireless network, a review of some existing protocols and architectures was carried
out. The reviews are as follows.

2.2.1 Wireless Networking


A wireless network is a type of computer network in which the interconnections
between nodes are implemented without the use of wires. The network is set up
using radio frequency signals to communicate between computers and other network
devices. It is sometimes also referred to as a WiFi network or WLAN. This type of
network is popular nowadays because it is easy to set up and no cabling is involved [6].

A simple explanation of how it works: two computers, each equipped with a
wireless adapter, communicate through a wireless router. When a computer sends
data, the binary data is encoded onto a radio frequency and transmitted via the
wireless router; the receiving computer then decodes the signal back into binary
data. The two main components are the wireless router or access point and the
wireless clients, which connect the wireless devices to other networks or the Internet.

2.2.2 Ad-hoc Network




                         Figure 2.1: Ad-hoc Network [7]



An ad hoc network, or MANET (Mobile Ad hoc NETwork), is a network composed
only of nodes, with no Access Point. Messages are exchanged and relayed between
nodes. In fact, an ad hoc network has the capability of making communications
possible even between two nodes that are not in direct range with each other: packets
to be exchanged between these two nodes are forwarded by intermediate nodes,
using a routing algorithm. Hence, a MANET may spread over a larger distance,
provided that its ends are interconnected by a chain of links between nodes (also
called routers in this architecture). In the ad hoc network shown in the following
figure, node A can communicate with node D via nodes B and C, and vice versa[8].




Figure 2.2: A multi-node ad-hoc network [8]

A sensor network is a special class of ad hoc network, composed of devices
equipped with sensors to monitor temperature, sound, or any other environmental
condition. These devices are usually deployed in large number and have limited
resources in terms of battery energy, bandwidth, memory, and computational power
[8].

Mode:

No fixed infrastructure or base station is needed. Entities communicate with each
other through multiple wireless links. Each node serves as a router to forward
packets for others.

Power issue:

Nodes are usually powered by batteries. Power-aware and energy-efficient
algorithms can significantly improve the performance of such systems. Ad-hoc
networks consist of large numbers of unattended devices, where battery replacement
is much more difficult [9].

The optimization goal of routing is to lower the power usage so as to enhance the
network availability.

Long term Connectivity:

One of the important problems is the long-term connectivity maintenance of
networks. Nodes are often homogeneous in terms of initial energy, while their
workloads are always unevenly distributed, causing some nodes to deplete their
energy faster than others. More seriously, nodes closer to the sink always carry more
workload than more distant nodes: they not only transmit their own data but also
help to forward others' data. Consequently, they are prone to failure because of the
depletion of their energy. This is the so-called "hot spot" problem [10][11], which
has not yet been fully investigated.

Advantages and disadvantages

A wireless network offers important advantages with respect to its wired counterpart
[8]:

      The main advantage is that a wireless network allows the machines to be fully
       mobile, as long as they remain in radio range.
      Even when the machines do not necessarily need to be mobile, a wireless
       network avoids the burden of having cables between the machines. From this
     point of view, setting up a wireless network is simpler and faster. In several cases,
       because of the nature and topology of the landscape, it is not possible or
       desirable to deploy cables: battlefields, search-and-rescue operations, or standard
       communication needs in ancient buildings, museums, public exhibitions, train
       stations, or inter-building areas.
      While the immediate cost of a small wireless network (the cost of the network
       cards) may be higher than the cost of a wired one, extending the network is
       cheaper. As there are no wires, there is no cost for material, installation and
       maintenance. Moreover, mutating the topology of a wireless network – to add,
       remove or displace a machine – is easy.

On the other hand, there are some drawbacks that need to be pondered [8]:

    The strength of the radio signal weakens (with the square of the distance), hence
     the machines have a limited radio range and a restricted network scope. This
     causes the well-known hidden station problem: consider three machines A, B
     and C, where both A and C are in radio range of B but they are not in radio
     range of each other. This may happen because the A-C distance is greater than
     the A-B and B-C distances, as in the figure, or because of an obstacle between A
     and C. The hidden station problem occurs whenever C is transmitting: when A
     wants to send to B, A cannot hear that B is busy and that a message collision
     would occur, hence A transmits when it should not; and when B wants to send
     to A, it mistakenly thinks that the transmission will fail, hence B abstains from
     transmitting when it would not need to.




                    Figure 2.3: The hidden station problem [8]

   The site variably influences the functioning of the network: radio waves are
    absorbed by some objects (brick walls, trees, earth, human bodies) and reflected
    by others (fences, pipes, other metallic objects, and water). Wireless networks
    are also subject to interferences by other equipment that shares the same band,
    such as microwave ovens and other wireless networks.
   Considering the limited range and possible interferences, the data rate is often
    lower than that of a wired network. However, nowadays some standards offer
    data rates comparable to those of Ethernet.
   Due to limitations of the medium, it is not possible to transmit and to listen at the
    same time, therefore there are higher chances of message collisions. Collisions
    and interferences make message losses more likely.
    Being mobile computers, the machines have limited battery and computation
     power. This may entail high communication latency: machines may be off most
     of the time (in a doze state, i.e. power-saving mode), turning on their receivers
     only periodically, therefore it is necessary to wait until they wake up and are
     ready to communicate.



    As data is transmitted over Hertzian waves, wireless networks are inherently less
     secure. In fact, transmissions between two computers can be eavesdropped by
     any similar equipment that happens to be in radio range.

Routing protocols for ad hoc networks [8]

In ad hoc networks, to ensure the delivery of a packet from sender to destination,
each node must run a routing protocol and maintain its routing tables in memory.

Reactive protocols

Under a reactive (also called on-demand) protocol, topology data is exchanged only
when needed. Whenever a node wants to know the route to a destination node, it
floods the network with a route request message. This gives reduced average control
traffic, with bursts of messages when packets need to be routed, and an additional
delay due to the fact that the route is not immediately available.
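To make the reactive idea concrete, the following simplified C# sketch (an illustration only, not an implementation of any particular MANET protocol) floods a route request hop by hop over a table of radio neighbours and returns the shortest route found, much as node A reaches node D via B and C in Figure 2.2; the node names and the neighbour table are made up for the example.

using System;
using System.Collections.Generic;
using System.Linq;

public static class RouteDiscovery
{
    // neighbours[n] lists the nodes currently within radio range of n.
    // A breadth-first flood of the route request returns the route with the
    // fewest hops from source to destination, or null if none exists.
    public static List<string> FindRoute(Dictionary<string, List<string>> neighbours,
                                         string source, string destination)
    {
        var visited = new HashSet<string> { source };
        var queue = new Queue<List<string>>();
        queue.Enqueue(new List<string> { source });

        while (queue.Count > 0)
        {
            List<string> route = queue.Dequeue();
            string lastHop = route.Last();
            if (lastHop == destination)
                return route;                      // destination reached

            List<string> nextHops;
            if (!neighbours.TryGetValue(lastHop, out nextHops))
                continue;                          // node with no known neighbours

            foreach (string next in nextHops)
            {
                if (visited.Add(next))             // relay the request only once per node
                    queue.Enqueue(new List<string>(route) { next });
            }
        }
        return null;                               // destination not reachable
    }
}

For the chain topology of Figure 2.2 (A-B, B-C, C-D), FindRoute would return the route A, B, C, D. A real reactive protocol would also cache discovered routes and handle link breakage, which this sketch omits.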

Proactive protocols

In contrast, proactive (also called periodic or table-driven) protocols are
characterized by periodic exchange of topology control messages. Nodes
periodically update their routing tables. Therefore, control traffic is denser but
constant, and routes are instantly available.

Hybrid protocols

Hybrid protocols combine both reactive and proactive behaviour. Usually, the network
is divided into regions, and a node employs a proactive protocol for routing inside its
near neighborhood's region and a reactive protocol for routing outside this region.

The Optimized Link State Routing protocol

The Optimized Link State Routing (OLSR) protocol is a proactive link state routing
protocol for ad hoc networks.

The core optimization of OLSR is the flooding mechanism for distributing link state
information, which is broadcast in the network by selected nodes called Multipoint
Relays (MPR). As a further optimization, only partial link state is diffused in the
network. OLSR provides optimal routes (in terms of number of hops) and is
particularly suitable for large and dense networks.


2.2.3 Bluetooth Transmission Technology


The dream for true, seamless, mobile data and voice communications that enables
constant connectivity anywhere is quickly becoming a reality. Wireless and
computer industries are clearly leading the way with revolutionary components that
will shape our lives in the next century. In 1994, Ericsson Mobile Communications
initiated a study to investigate the feasibility of a low power, low cost radio interface
between mobile phones and their accessories. The aim of this study was to eliminate
cables between mobile phones and PC Cards used to connect the phones to a
computer for dial-up networking (DUN). In 1998, Intel, IBM, Toshiba, Ericsson and
Nokia began developing a technology that would allow users to easily connect to
mobile devices without cables [12].

This technological vision became a reality through the synergy of market leaders in
laptop computing, telecommunications, and core digital signal processing. May
20th, 1998 marked the formation of the Bluetooth Special Interest Group (SIG) with
the goal to design a royalty free, open specification, de facto, short range, low power
wireless communication standard, as well as a specification for small-form factor,
low-cost, short range radio links between mobile PCs, mobile phones and other
portable devices codenamed Bluetooth. The result was an open specification for a
technology to enable short-range wireless voice and data communications anywhere
in the world. It offers a simple way to connect and communicate without wires or
cables between electronic devices including computers, PDAs, cell phones, network
access and peripherals.

The technology of Bluetooth operates in a globally available frequency band
ensuring communication compatibility worldwide. One of the primary advantages of
the Bluetooth system is ease of computer vendor product integration. Other key
benefits of this technology are low power, long battery life, low cost, low
complexity, and wireless connectivity for personal space, peer-peer, cable
replacement, and seamless and ubiquitous connectivity. To achieve the Bluetooth
goal, tiny, inexpensive, short-range transceivers are integrated into devices either
directly or through an adapter device such as a PC Card. Add-on devices, such as
USB or parallel port connections, are also available for legacy systems. By
establishing links in a more convenient manner, this technology will add tremendous
benefits to the ease of sharing data between devices.

One universal short-range radio link can replace many proprietary cables that
connect one device to another. Laptop and cellular users will no longer require
cumbersome cables to connect the two devices to send and receive email. Possible
health risks from radiated RF energy of cellular handsets are mitigated with lower
transmission power of the Bluetooth enabled ear set. (The ear set solution does not
require the handset to be close to the head.) Moreover, unlike the traditional headset,
the wireless ear set frees the user from any unnecessary wiring.

As Bluetooth offers the ability to provide seamless voice and data connections to
virtually all sorts of personal devices, the human imagination is the only limit to
application options. Beyond un-tethering devices by replacing cables, this
technology provides a universal bridge to existing data networks, allows users to
form a small private ad hoc wireless network outside of fixed network
infrastructures, and enables users to connect to a wide range of computing and
telecommunications devices easily and simply, without the need to buy, carry, or
connect cables. The Bluetooth technology allows users to think about what they are
working on, rather than how to make their technology work. The International Data
Corporation (IDC) forecast that in 2004, 103.1 million devices in the United States
and 451.9 million devices worldwide would become Bluetooth enabled [12].

2.2.4 Protocol issues


Designing a network protocol to support streaming media raises many issues, such
as:

   Datagram protocols, such as the User Datagram Protocol (UDP), send the media
    stream as a series of small packets. This is simple and efficient; however, there is
    no mechanism within the protocol to guarantee delivery. It is up to the receiving
    application to detect loss or corruption and recover data using error correction
    techniques. If data is lost, the stream may suffer a dropout (a minimal datagram
    sketch is given after this list).
   The Real-time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP)
    and the Real-time Transport Control Protocol (RTCP) were specifically designed
    to stream media over networks. RTSP runs over a variety of transport protocols,
    while the latter two are built on top of UDP [13].
   Another approach that seems to incorporate both the advantages of using a
    standard web protocol and the ability to be used for streaming even live content
     is HTTP adaptive bitrate streaming. HTTP adaptive bitrate streaming is
    based on HTTP progressive download, but contrary to the previous approach,
    here the files are very small, so that they can be compared to the streaming of
    packets, much like the case of using RTSP and RTP [14].
   Reliable protocols, such as the Transmission Control Protocol (TCP), guarantee
    correct delivery of each bit in the media stream. However, they accomplish this
    with a system of timeouts and retries, which makes them more complex to
    implement. It also means that when there is data loss on the network, the media
    stream stalls while the protocol handlers detect the loss and retransmit the
    missing data. Clients can minimize this effect by buffering data for display [15].
   Unicast protocols send a separate copy of the media stream from the server to
    each recipient. Unicast is the norm for most Internet connections, but does not
    scale well when many users want to view the same program concurrently.
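To illustrate the datagram approach described in the first point above, the minimal C# sketch below sends and receives a single UDP packet; the host name and port are placeholders, and there is deliberately no acknowledgement or retransmission, so a lost packet simply never arrives (a dropout).

using System.Net;
using System.Net.Sockets;

public static class UdpExample
{
    // Sender: fire-and-forget transmission of one datagram.
    public static void SendPacket(string host, int port, byte[] payload)
    {
        using (var client = new UdpClient())
        {
            client.Send(payload, payload.Length, host, port);
        }
    }

    // Receiver: blocks until one datagram arrives on the given port.
    public static byte[] ReceivePacket(int port)
    {
        using (var listener = new UdpClient(port))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            return listener.Receive(ref remote);
        }
    }
}

A streaming application built on UDP would add its own sequence numbers and error concealment, which is exactly the burden that RTP and the reliable protocols discussed above take on.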

2.2.5 TCP/IP model


The TCP/IP model is a descriptive framework for computer network protocols
created in the 1970s by DARPA, an agency of the United States Department of
Defense. It evolved from ARPANET, which was the world's first wide-area
network and a predecessor of the Internet. The TCP/IP model is sometimes called
the Internet Model or the DoD Model [16].

As with all other communications protocols, TCP/IP is composed of layers:

   IP is responsible for moving packets of data from node to node. IP forwards
    each packet based on a four-byte destination address (the IP number). The
    Internet authorities assign ranges of numbers to different organizations. The
    organizations assign groups of their numbers to departments. IP operates on
    gateway machines that move data from department to organization to region and
    then around the world.



   TCP is responsible for verifying the correct delivery of data from client to server.
    Data can be lost in the intermediate network. TCP adds support to detect errors
    or lost data and to trigger retransmission until the data is correctly and
    completely received.
   A socket is a name given to the package of subroutines that provide access to
    TCP/IP on most systems.
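As a concrete illustration of the socket interface mentioned in the last point, the C# sketch below (a minimal, assumed example; the host, port and calling code are not taken from this project) opens a TCP connection, writes a buffer, and on the other side reads everything until the sender closes the connection, leaving sequencing and retransmission to the operating system's TCP implementation.

using System.IO;
using System.Net;
using System.Net.Sockets;

public static class TcpSocketExample
{
    // Client side: connect to a server and send a buffer over TCP.
    public static void SendBytes(string host, int port, byte[] data)
    {
        using (var client = new TcpClient(host, port))
        using (NetworkStream stream = client.GetStream())
        {
            stream.Write(data, 0, data.Length);   // TCP delivers these bytes in order
        }
    }

    // Server side: accept one connection and read all bytes the client sends.
    public static byte[] ReceiveBytes(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        try
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (NetworkStream stream = client.GetStream())
            using (var buffer = new MemoryStream())
            {
                stream.CopyTo(buffer);            // read until the client closes
                return buffer.ToArray();
            }
        }
        finally
        {
            listener.Stop();
        }
    }
}

This division into a connecting client and a listening server mirrors the client-server transmission software described later in this report, although the actual implementation there differs in detail.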

The TCP/IP model, or Internet Protocol Suite, describes a set of general design
guidelines and implementations of specific networking protocols to enable
computers to communicate over a network. TCP/IP provides end-to-end connectivity
specifying how data should be formatted, addressed, transmitted, routed and
received at the destination. Protocols exist for a variety of different types of
communication services between computers [16].

It is defined as a four-layer model, with the layers having names, not numbers, as
follows:

       Application Layer (process-to-process): This is the scope within which
        applications create user data and communicate this data to other processes or
        applications on another or the same host. The communications partners are
        often called peers. This is where the "higher level" protocols such as SMTP,
        FTP, SSH, HTTP, etc. operate.
       Transport Layer (host-to-host): The Transport Layer constitutes the
        networking regime between two network hosts, either on the local network or
        on remote networks separated by routers. The Transport Layer provides a
        uniform networking interface that hides the actual topology (layout) of the
        underlying network connections. This is where flow-control, error-
        correction, and connection protocols exist, such as TCP. This layer deals
        with opening and maintaining connections between Internet hosts.
       Internet Layer (internetworking): The Internet Layer has the task of
        exchanging datagrams across network boundaries. It is therefore also referred
        to as the layer that establishes internetworking; indeed, it defines and
         establishes the Internet. This layer defines the addressing and routing
         structures used for the TCP/IP protocol suite. The primary protocol in this
         scope is the Internet Protocol, which defines IP addresses. Its function in
         routing is to transport datagrams to the next IP router that has connectivity
         to a network closer to the final data destination.
        Link Layer: This layer defines the networking methods within the scope of the
         local network link on which hosts communicate without intervening routers.
         This layer describes the protocols used to describe the local network
         topology and the interfaces needed to effect transmission of Internet Layer
         datagrams to next-neighbor hosts (cf. the OSI Data Link Layer).

Addresses

Each technology has its own convention for transmitting messages between two
machines within the same network. On a LAN, messages are sent between machines
by supplying the six byte unique identifier (the "MAC" address). In an SNA
network, every machine has Logical Units with their own network address.
DECNET, Appletalk, and Novell IPX all have a scheme for assigning numbers to
each local network and to each workstation attached to the network [16].

On top of these local or vendor specific network addresses, TCP/IP assigns a unique
number to every workstation in the world. This "IP number" is a four byte value that,
by convention, is expressed by converting each byte into a decimal number (0 to
255) and separating the bytes with a period.

An Uncertain Path [16]

Every time a message arrives at an IP router, it makes an individual decision about
where to send it next. There is no concept of a session with a preselected path for all
traffic. There is no single correct answer to how a router decides between routes.
Traffic could be routed by a "clockwise" algorithm. The routers could alternate,
sending one message to one place and the next to another. More sophisticated
routing measures traffic patterns and sends data through the least busy link.

If one phone line in this network breaks down, traffic can still reach its destination
through a roundabout path. This provides continued service though with degraded
performance. This kind of recovery is the primary design feature of IP. The loss of
the line is immediately detected by the routers, but somehow this information must
be sent to the other nodes. Each network adopts some Router Protocol which

periodically updates the routing tables throughout the network with information
about changes in route status.

If the size of the network grows, then the complexity of the routing updates will
increase as will the cost of transmitting them. Building a single network that covers
the entire US would be unreasonably complicated. Fortunately, the Internet is
designed as a Network of Networks. This means that loops and redundancy are built
into each regional carrier. The regional network handles its own problems and
reroutes messages internally. Its Router Protocol updates the tables in its own
routers, but no routing updates need to propagate beyond the regional carrier.

TCP treats the data as a stream of bytes. It logically assigns a sequence number to
each byte. The TCP packet has a header that says, in effect, "This packet starts with
byte 379642 and contains 200 bytes of data." The receiver can detect missing or
incorrectly sequenced packets. TCP acknowledges data that has been received and
retransmits data that has been lost. The TCP design means that error recovery is
done end-to-end between the Client and Server machine. There is no formal standard
for tracking problems in the middle of the network, though each network has
adopted some ad hoc tools.

Each large company or university that subscribes to the Internet must have an
intermediate level of network organization and expertise. A half dozen routers might
be configured to connect several dozen departmental LANs in several buildings. All
traffic outside the organization would typically be routed to a single connection to a
regional network provider.

However, the end user can install TCP/IP on a personal computer without any
knowledge of either the corporate or regional network. Three pieces of information
are required:

1. The IP address assigned to this personal computer
2. The part of the IP address (the subnet mask) that distinguishes other machines on
   the same LAN (messages can be sent to them directly) from machines in other
   departments or elsewhere in the world (which are sent to a router machine)
3. The IP address of the router machine that connects this LAN to the rest of the
   world.
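
As an illustration of how an application reaches TCP/IP through the socket interface
mentioned earlier, the following minimal C++ Winsock client connects to a server and
sends a short message. The server address 192.168.0.2 and port 5000 are placeholders
rather than values used by the project, and the program must be linked against
ws2_32.lib.

    #include <winsock2.h>
    #include <cstdio>

    int main()
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;   // initialise Winsock

        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);  // TCP stream socket
        if (s == INVALID_SOCKET) { WSACleanup(); return 1; }

        sockaddr_in server = {};
        server.sin_family      = AF_INET;
        server.sin_port        = htons(5000);                  // placeholder port
        server.sin_addr.s_addr = inet_addr("192.168.0.2");     // placeholder server IP

        if (connect(s, (sockaddr*)&server, sizeof(server)) == 0) {
            const char msg[] = "hello from the surveillance client";
            send(s, msg, sizeof(msg) - 1, 0);   // TCP delivers the bytes in order
        }

        closesocket(s);
        WSACleanup();
        return 0;
    }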


2.2.6 Live streaming


Live streaming means taking the video and broadcasting it live over the
internet/network. The process involves a camera for the video, an encoder to digitize
the content, a video publisher where the streams are made available to potential end-
users and a content delivery network to distribute and deliver the content. The media
can then be viewed by end-users live [17].

There are some primary technical issues related to streaming. They are:

   The system must have enough CPU power and bus bandwidth to support the
    required data rates
   The software should create low-latency interrupt paths in the operating system
    (OS) to prevent buffer underrun.

In the early years of the technology, computer networks were still limited in
bandwidth, and media was usually delivered over non-streaming channels, such as by
downloading a digital file from a remote server and then saving it to a local drive on
the end user's computer, or by storing it as a digital file and playing it back from
CD-ROMs.

During the late 1990s and early 2000s, Internet users saw greater network
bandwidth, increased access to networks, use of standard protocols and formats,
such as TCP/IP, HTTP. These advances in computer networking combined with
powerful home computers and modern operating systems made streaming media
practical and affordable for ordinary consumers. But multimedia content has a large
volume, so media storage and transmission costs are still significant [17].



2.2.7 Professional Video Surveillance Software


Eyeline video surveillance software

Designed specifically for business, Eyeline is perfect for video monitoring of offices
and buildings, or to log in-store cameras. Used in conjunction with a security alarm,
Eyeline can capture, analyse and play back security footage to determine if a




security call out is warranted. Eyeline is also a simple and effective video security
system for your home [18].




                     Figure 2.4: Eye line video software [18]

Camera Properties:




               Figure 2.5: Find and Play Recordings Window [18]


Figure 2.6: Video Recordings [18]

Features of this software [18]

   Records up to 100+ camera sources simultaneously
   Motion detection recording saves space by only recording when something is
    happening
   Email or SMS alerts available for motion detection
   Automatic time stamping of frames lets you use footage as evidence if required
   Web control panel lets you access and view recordings remotely
   'Save to' feature lets you save footage to a network folder
   Back up recordings via FTP
   Video can be monitored live on the screen as it records
   Cameras can be setup in a flash by just a click of a button
   Find and play recordings ordered by camera, date, duration and motion detected
   Integrated with Express burn to record video files to DVD
   Intelligent, easy to use and extremely reliable for day-to-day operation




Streaming video Using VLC

The VLC media player is an amazing piece of software. In its most basic form it is a
lightweight media player that can play almost any audio or video format you throw
at it. VLC is also multiplatform in the most extreme sense of the word; it can run on
Windows, OSX, Linux and PocketPC / WinCE handhelds along with other systems.
VLC works great as a streaming server and video transcoder too [19] [20].




                  Figure 2.7: VideoLan Streaming Solution [21]

The network needed to setup the VideoLAN solution can be as small as one ethernet
10/100Mb switch or hub, and as big as the whole Internet. Moreover, the VideoLAN
streaming solution has full IPv6 support [21].

Examples of needed bandwidth are:

   0.5 to 4 Mbit/s for a MPEG-4 stream,
   3 to 4 Mbit/s for an MPEG-2 stream read from a satellite card, a digital
    television card or a MPEG-2 encoding card,
   6 to 9 Mbit/s for a DVD.

VLC is able to announce its streams using the SAP/SDP standard, or using Zeroconf
(also known as Bonjour) [21].

2.2.8 Encryption


Encryption is a process that takes information and transcribes it into a different form
that cannot be read by anyone who does not have the encryption code [22].

Manual Encryption

Manual encryption is a type that involves the use of encryption software. These are
computer programs that encrypt various bits of information digitally. Manual
encryption relies entirely on the user's participation: the user chooses the files to
encrypt and then selects an encryption type from a list that the security software
provides.

Transparent Encryption

Transparent encryption is another type of computer software encryption. It can be
downloaded onto a computer to encrypt everything automatically. This is one of the
most secure types of encryption available because it doesn't leave out anything that
might be forgotten when using manual encryption.

Symmetric Encryption

Not all encryption is done via a computer software program. Anyone can easily
encrypt information. One of the simplest ways to do this is through symmetric
encryption. Here, a letter or number coincides with another letter or number in the
encryption code.
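
The symmetric idea, where the same key is used both to encrypt and to decrypt, can be
illustrated with a toy program. The XOR cipher below only demonstrates this property
and is not a secure cipher; the key character 'K' and the message are arbitrary.

    #include <iostream>
    #include <string>

    // XOR-ing each character twice with the same key byte restores the original text,
    // so one function serves as both the encryptor and the decryptor.
    std::string xor_cipher(const std::string& text, char key)
    {
        std::string out = text;
        for (std::string::size_type i = 0; i < out.size(); ++i) out[i] ^= key;
        return out;
    }

    int main()
    {
        std::string secret    = xor_cipher("meet at noon", 'K');   // encrypt
        std::string recovered = xor_cipher(secret, 'K');           // decrypt, same key
        std::cout << recovered << std::endl;                       // prints "meet at noon"
        return 0;
    }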

Asymmetric Encryption

Asymmetric encryption is a secure and easy way that can be used to encrypt data
that you will be receiving. It is generally done electronically. A public key is given
out to the public to see. They can then encrypt information using the key and send it
to you. This is often done when writing emails. However, to decipher the encrypted
code, there is another key, a private one, that only one person has. This means that
while anyone can encrypt the data with the public key, it can only be read again by
whoever has the private key.
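
The public/private key idea can be illustrated with a textbook-sized RSA example. The
numbers below (n = 3233, e = 17, d = 2753) are standard toy values and far too small to
be secure; they only demonstrate that encryption uses the public pair (e, n) while
decryption needs the private exponent d.

    #include <iostream>

    // Modular exponentiation (square-and-multiply): computes (base^exp) mod m.
    unsigned long long modpow(unsigned long long base, unsigned long long exp,
                              unsigned long long m)
    {
        unsigned long long result = 1;
        base %= m;
        while (exp > 0) {
            if (exp & 1) result = (result * base) % m;
            base = (base * base) % m;
            exp >>= 1;
        }
        return result;
    }

    int main()
    {
        const unsigned long long n = 3233, e = 17, d = 2753;   // toy key pair (61 * 53)

        unsigned long long message = 65;                    // anyone encrypts with (e, n)...
        unsigned long long cipher  = modpow(message, e, n); // 65^17 mod 3233 = 2790
        unsigned long long plain   = modpow(cipher, d, n);  // ...only d recovers the message

        std::cout << cipher << " -> " << plain << std::endl;   // prints "2790 -> 65"
        return 0;
    }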




2.2.9 Video Encryption


Video encryption or video scrambling is a powerful technique for preventing
unwanted interception and viewing of transmitted video, for example from a law
enforcement video surveillance camera being relayed back to a central viewing centre [23].

Video encryption is the easy part; it is the unscrambling that is hard. There are
several techniques of video encryption. However, the human eye is very good at
spotting distortions in pictures due to poor video decoding or a poor choice of video
encryption hardware. So it is important to choose the right video encryption
hardware, otherwise your video transmissions may be insecure or your decoded video
un-viewable.

Some popular techniques for Video Encryption are outlined below [22] [23]:

   Line Inversion Video Encryption:
    Encryption Method: Whole or part video scan lines are inverted.
    Advantages: Simple, cheap video encryption.
    Disadvantages: Poor video decrypting quality, low obscurity, low security.
   Sync Suppression Video Encryption:
    Encryption Method: Hide/remove the horizontal/vertical line syncs.
    Advantages: Provides a low cost solution to scrambling and provides good
    quality video decoding.
    Disadvantages: This method is incompatible with some distribution equipment.
    Obscurity (i.e. how easy it is to visually decipher the image) is dependent on
    video content.
   Line Shuffle Video Encryption:
    Encryption Method: Each video line is re-ordered on the screen.
    Advantages: Provides a compatible video signal, a reasonable amount of
    obscurity, good decode quality.
    Disadvantages: Requires a lot of digital storage space. There are potential issues
    with video stability. Less secure than the cut and rotate encryption method.
   Cut & Rotate Video Encryption:
    Encryption Method: Each scan line is cut into pieces and re-assembled in a
    different order.

Advantages: Provides a compatible video signal, gives an excellent amount of
    obscurity, as well as good decode quality and stability.
    Disadvantages: Can have complex timing control and requires specialized
    scrambling equipment

The cut and rotate video encryption method is probably the best way of achieving
reliable and good quality video encryption.
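
To make the cut-and-rotate idea concrete, the sketch below scrambles one scan line of
pixel values by cutting it at a given position and swapping the two pieces, and shows
that rotating by the remaining length restores the line. It is only an illustration of the
principle; real equipment derives a different cut point for every line from the
encryption key and also handles synchronisation.

    #include <iostream>
    #include <vector>

    // Cut the line at position 'cut' and re-assemble the two pieces in swapped order.
    std::vector<unsigned char> cut_and_rotate(const std::vector<unsigned char>& line,
                                              std::size_t cut)
    {
        std::vector<unsigned char> out(line.begin() + cut, line.end());
        out.insert(out.end(), line.begin(), line.begin() + cut);
        return out;
    }

    // Rotating by the remaining length undoes the scramble.
    std::vector<unsigned char> undo_cut_and_rotate(const std::vector<unsigned char>& line,
                                                   std::size_t cut)
    {
        return cut_and_rotate(line, line.size() - cut);
    }

    int main()
    {
        unsigned char pixels[] = { 10, 20, 30, 40, 50, 60 };
        std::vector<unsigned char> line(pixels, pixels + 6);

        std::vector<unsigned char> scrambled = cut_and_rotate(line, 2);   // 30 40 50 60 10 20
        std::vector<unsigned char> restored  = undo_cut_and_rotate(scrambled, 2);

        std::cout << (restored == line ? "restored" : "mismatch") << std::endl;
        return 0;
    }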

Factors in video encryption implementation

Finally, and most obviously, each user must have a unique encryption key so that
other users of the system cannot view the transmitted video, by accident or on purpose,
without the key owner's knowledge. The total number of possible user keys must be
such that it is highly unlikely for someone to guess the correct key [24]. The key
points for a good video encryption method are:

   Everyone has a unique encryption key or code.
   The video encryption system should not try and decode not encrypted video
    transmissions.
   The encrypted signal should be positively identified by the decoder. The decoder
    should recognize the encrypted signal and only attempt to decode when fully
    validated.
   On screen status display and identification.
   Automatic configuration to any video standard.




Chapter 3


3 System Concept, Design and Implementation


In this section the requirement analysis for building the video surveillance system is
described, the conceptual system block diagram is included, and the detailed selection
criteria for each component are discussed.

3.1 Requirement analysis



The video surveillance system will detect movement of any moving object. After
detecting movement it will turn the camera towards the moving object and start to
record video footage of that object. When the recording is finished it will transmit
the video footage through wireless Ad-Hoc network from the client (surveillance
system) to a server. The key requirements of the system are:

   1. The system must consist of a sensor which will detect movement of any
        object.
   2.   The system must have a video camera to record footage. The camera should
        be capable of recording high quality video footage in order to ensure that
        the recorded object can be identified from the recorded video footage.
   3. The system must consist of a device which can turn the video camera in any
        direction within 180 degrees.
   4.   The system must consist of a storage device that will store video footage.
   5. The system must consist of a device which has a wireless transmission
        capability to transmit the recorded video footage.
   6. There should be a device which will control the whole system.
   7. There must be software which will record video footage and store it to a
        storage device.
   8.   There must be software which will transmit recorded video footage via
        wireless Ad-Hoc network.
   9. The system must consist of a power supply unit that will power up the whole
        system.


After finishing the requirement analysis, a block diagram of the system is
   designed. Figure 3.1 shows the block diagram of the system.




Figure 3.1: Block diagram of video surveillance system




3.2 Selection of Hardware


According to the requirement analysis, each hardware component is selected so that it
meets the system requirements. In the following sections the selection criteria for each
component are described.

3.2.1 Selection of sensor



The sensor should be able to detect movement during day or night. The range of
the sensor should be as large as possible to ensure a good coverage area for video
surveillance. The sensor should consume little power, which allows the standalone
video surveillance system to run for longer on a single power source.

3.2.2 Selection of Video Camera



The video camera of the system must have the following features

   1. The resolution of the video footage of the camera should be high. It will
         ensure easy recognition of the object from the recorded video footage.
   2. The weight of the camera should be low so that it can easily be coupled with a motor.
   3.    The video camera must have easy connectivity features so that it could be
         easily connected with the relevant device for capturing video footage.
   4. It should consume a small amount of power.

3.2.3 Selection of Motor


According to the requirement analysis the system should be able to record video
footage from any direction within 180 degrees. Thus the system requires a motor
whose shaft can be positioned in any desired direction between 0 and 180 degrees. The
camera will be mounted on the motor shaft and the motor will turn the video camera
towards the moving object precisely. The motor must have enough torque to turn the
camera towards the moving object, and it should consume a small amount of power.

3.2.4 Selection of PC



For capturing and transmitting video footage a computer will be used. The
configuration of the PC should be high enough to run the software that will be used
to record and transmit video footage. The PC must have a connectivity port to
connect the video camera. The PC should be small enough to use in the standalone
system and should consume a small amount of power. It must have a wireless
network card to establish wireless communication with other devices. The PC should
boot quickly, be robust to any kind of power failure, and must be able to start
programs without user intervention.




3.2.5 Selection of Microcontroller



A microcontroller will control the whole system. The microcontroller will take the
signal from the sensors and turn the motor so that the camera points towards the
moving object, and it will send a signal to the PC to record video footage when a
movement is detected. Thus the microcontroller should have the following features:

   1. It should provide serial or USB communication so that it can communicate
       with the PC for sending the trigger signal.
   2. It must have input ports so that it can take input signals from the sensors when
       a sensor is triggered by any movement.
   3. It should have output ports so that it can control the motor to turn in any
       direction.
   4. The clock frequency should be high for faster operation. A higher processing
       speed will ensure faster execution of the program, so the controller can control
       the whole system more quickly.
   5. It should consume a small amount of power.

3.2.6 Selection of Power Source



The power source of the system should be able to run the system for a long time and
should be capable of delivering the power required by the system. Thus a power
source with a high ampere-hour rating is required.

3.3 Software design


As a general design approach, commercially available tools and products are used
wherever possible, and this approach is of vital importance when designing the
software sections. The language selected must be suitable for managing all types of
standard input/output and should be flexible enough for the required working
platform. A lower-level programming language such as C or Pascal should be used for
programming the microcontroller and its interfaces, while a high-level language such
as Microsoft C++, Java or Microsoft C# can be used for programming the different
algorithms and the control strategy. In the following sections some common
programming languages and their available libraries are discussed.

3.3.1 Video Capturing Software Requirement



The video capturing software requirements are as follows

      The software must be able to read trigger signal from microcontroller
       through serial port to start video capturing.
      The software must capture and save video footage from webcam after getting
       a trigger signal.
      The software must store the video footage with the specific capturing date
       and time.
      The software must allow user to select various capture duration.
      The software must have an interactive and user friendly Graphical User
       Interface (GUI) to interact with non-technical users.

The first requirement deals with serial port communication between the
microcontroller and the PC. This communication is based on transmitting and
receiving strings from the microcontroller to start capturing right away.

The second requirement deals with capturing video from the connected external webcam.

The third requirement deals with storing video footage with the capturing date and
time. Here the software will read the time and date from the operating system and use
it as the file name of the captured footage.
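
The date-and-time file-naming step can be sketched as follows. The project's capture
software itself is written in C# (see section 3.4.2.2), so this C++ fragment only
illustrates the idea; the "capture_" prefix and ".avi" extension are placeholders.

    #include <cstdio>
    #include <ctime>
    #include <string>

    // Build a file name from the current system date and time,
    // e.g. "capture_2010-10-21_14-30-05.avi".
    std::string timestamped_filename()
    {
        std::time_t now = std::time(0);
        char buf[64];
        std::strftime(buf, sizeof(buf), "capture_%Y-%m-%d_%H-%M-%S.avi",
                      std::localtime(&now));
        return std::string(buf);
    }

    int main()
    {
        std::printf("%s\n", timestamped_filename().c_str());
        return 0;
    }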

The fourth requirement deals with building a simple and user friendly GUI which
allows the user to control the surveillance system and select the capture duration.




3.4 Component selection and Features


Based on the criteria previously described, the selected components are described in
this section. Component selection is not based on finding the best components; rather,
it is driven by cost, availability and meeting the system requirements.

3.4.1 Hardware Component Selection

3.4.1.1 PIR Sensor


For movement detection a passive infrared (PIR) sensor is selected. Any object that
generates heat also generates infrared radiation. Objects, including human bodies and
animals, have their strongest infrared radiation at a wavelength of about 9.4 μm [25].
Infrared radiation cannot be seen since its wavelength is longer than that of visible
light, but it can be detected by a passive infrared sensor. The PIR sensor converts
incident IR flux into an electrical output. This is done in two steps: an absorbing layer
inside the PIR sensor transforms the change in radiation flux into a change in
temperature, and the pyroelectric element inside the PIR sensor performs a thermal to
electrical conversion [26]. Thus when an object moves in the Field of View (FoV) of
the sensor, the sensor generates an output electrical signal in response to that
movement. This output electrical signal is then processed by an amplifier and
comparator and used in different circuits for movement detection. In figure 3.2 the
general system architecture of a PIR sensor is shown.




Figure 3.2: General system architecture of PIR sensor [27]


The PIR sensor that is selected for this system is the Parallax PIR sensor. This sensor
has the following features:
       Detection range up to 20 ft

   Single bit output
       Small size makes it easy to conceal
       Compatible with all Parallax microcontrollers
       Low current draw, less than 100 uA
       Power requirements: 3.3 to 5 VDC
       Operating temp range: +32 to +158 °F (0 to +70 °C)


In figure 3.3 the selected Parallax-PIR sensor is shown




                        Figure 3.3: Parallax-PIR sensor [28]


There are two modes of operation of this sensor: retrigger and normal. These two
modes of operation are selected by jumper pins H and L. In figure 3.4 the jumper pin
positions are shown.




                   Figure 3.4: Jumper Pin (H and L) Position [29]


The mode of operation for the two settings is given in table 3.1.




Table 3.1: Mode of operation of PIR sensor [29]


   Position   Mode        Description
   H          Retrigger   Output remains HIGH when the sensor is retriggered
                          repeatedly. Output is LOW when idle (not triggered).
   L          Normal      Output goes HIGH then LOW when triggered. Continuous
                          motion results in repeated HIGH/LOW pulses. Output is
                          LOW when idle.



The output waveforms for the two modes of operation are shown in figures 3.5 and 3.6.




     Figure 3.5: Waveform of PIR sensor output for Retrigger mode of operation




Figure 3.6: Waveform of PIR sensor output for normal mode of operation



The PIR sensor output remains high for a minimum of two seconds for a single
movement detection. "The PIR Sensor requires a 'warm-up' time in order to function
properly. This is due to the settling time involved in 'learning' its environment. This
could be anywhere from 10-60 seconds. During this time there should be as little
motion as possible in the sensors field of view [29]". Since the PIR sensor draws less
than 100 uA of current, a 24 Ah battery could in principle operate it for 240,000 hours.

3.4.1.2 Mbed Rapid Prototyping Board



For controlling the whole system the mbed NXP LPC1768 prototyping board is used.
This board is based on the LPC1768 ARM Cortex-M3 microcontroller. This board has
the following features:

      Convenient form-factor: 40-pin DIP, 0.1-inch pitch.
      Drag-and-drop programming, with the board represented as a USB drive.
      ARM Cortex-M3 hardware
       100 MHz with 64 KB of SRAM, 512 KB of Flash
      Ethernet, USB OTG (USB On-The-GO).
      SPI (Serial Peripheral Interface Bus), I2C (Inter-Integrated Circuit), UART
       (Universal Asynchronous Receiver/Transmitter).
      PWM (Pulse Width Modulation), ADC (Analog-to-Digital Converter), DAC
       (Digital-to-Analog Converter.)
      Web-based C/C++ programming environment



There are 26 digital input/output pins (pin 5 to pin 30) for taking digital inputs or
sending digital outputs. When these pins are set as digital inputs, any voltage above
2.0 V is read as logic 1 and any voltage below 0.8 V as logic 0. When they are set as
digital outputs, the pin is at 0 V for logic 0 and at 3.3 V for logic 1. These pins can
source or sink a maximum current of 40 mA. Thus in this system they can be used as
input pins for the PIR sensors [30].

There are six PWM output pins (pin 21 to pin 26) on this prototyping board. All
PWM outputs share the same period but can be set to give different pulse widths;
changing the period of one output therefore changes the period of the others [31].
There are built-in functions for PWM signal generation by which the period and
pulse width can be set precisely in seconds, milliseconds or microseconds. Thus the
servomotor used in this video surveillance system can be controlled precisely by this
microcontroller prototyping board.
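
As an illustration, the built-in PwmOut class can be used as in the sketch below. The
pin choices and pulse widths are examples only, not the project's final values; note
that setting the period on one channel affects all of them, as described above.

    #include "mbed.h"

    PwmOut servo_pwm(p21);    // two of the six PWM-capable pins
    PwmOut second_pwm(p22);

    int main()
    {
        servo_pwm.period_ms(20);          // 20 ms frame, shared by every PWM output
        servo_pwm.pulsewidth_us(1500);    // 1.5 ms pulse on p21
        second_pwm.pulsewidth_us(1000);   // 1.0 ms pulse on p22, same 20 ms period

        while (1) {
            wait(1.0);                    // keep the outputs running
        }
    }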

This microcontroller can establish serial RS-232 communication with a PC. Pins
p9/p10, p13/p14, p28/p27 and USBTX/USBRX can be used for serial RS-232
communication. One of the serial connections goes through the USB port, which
allows easy communication with a host PC; this USB port is used as a virtual RS-232
serial port. The baud rate for serial communication ranges from a few hundred bits
per second to megabits per second, which allows high-speed data transfer between
the microcontroller and the host PC. The data length is 7 or 8 bits. The virtual RS-232
serial port feature enables the microcontroller to be used with any computer,
especially recent PCs which do not have a physical RS-232 serial port. Thus using
this microcontroller makes the video surveillance system connectable to both older
and newer PCs.
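
A minimal sketch of this virtual serial link on the mbed side is shown below. The baud
rate, the pin used for the PIR input and the "TRIGGER" string are illustrative
assumptions rather than the project's final protocol.

    #include "mbed.h"

    Serial pc(USBTX, USBRX);   // virtual RS-232 serial port over USB
    DigitalIn pir(p16);        // PIR sensor output (pin 16 is used in section 3.6.1)

    int main()
    {
        pc.baud(9600);         // anything from a few hundred bit/s up to Mbit/s is possible
        while (1) {
            if (pir == 1) {
                pc.printf("TRIGGER\r\n");   // tell the host PC to start recording
                wait(2.0);                  // the PIR output stays high about 2 s per detection
            }
        }
    }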

The microcontroller can be powered over USB or by 4.5 V - 9.0 V applied to the VIN
pin, and it draws less than 200 mA (100 mA with Ethernet disabled). Thus it is
possible to operate it for about 120 hours from a 24 Ah battery.

There are many built-in functions that enable to use the microcontroller easily in any
application.

In figure 3.7 mbed NXP LPC1768 prototyping board is shown




                       Figure 3.7: mbed Microcontroller [32]


3.4.1.3 Servo Motor


A Servo is a small device that has an output shaft. This shaft can be positioned to
specific angular positions by sending the servo a coded signal. As long as the coded
signal exists on the input line, the servo will maintain the angular position of the
shaft. As the coded signal changes, the angular position of the shaft changes. Servos
are extremely useful in robotics. The motor is small, has built-in control circuitry, and
is extremely powerful for its size. A standard servo such as the HITEC-HS-475HB
has torque 0.54 N.m @ 6.0V, which is pretty strong for its size. There are 3 wires
that connect to the outside world. The red wire is for power (+5volts), the black wire
is ground, and the yellow wire is the control wire. In figure 3.8 a servo motor is
shown




Figure 3.8: Servo motor [33]


The servo motor has some control circuits and a potentiometer that is connected to
the output shaft. This potentiometer allows the control circuitry to monitor the
current angle of the servo motor. If the shaft is at the correct angle, then the motor
shuts off. If the circuit finds that the angle is not correct, it will turn the motor
towards the correct direction until the angle is correct. The output shaft of the servo
is capable of travelling 0 to 180 degrees; also there are servos that are capable of
travelling 0 to 210 degrees. A normal servo is used to control an angular motion of
between 0 and 180 degrees. A normal servo is mechanically not capable of turning
any farther due to a mechanical stop built on to the main output gear.




Figure 3.9: Internal circuit of a servo motor [34]


The control wire is used to communicate the angle. The angle is determined by the
duration of a pulse that is applied to the control wire; this is a form of pulse width
modulation. The servo expects to see a pulse every 20 milliseconds. The length of
the pulse determines how far the motor turns. A 1.5 millisecond pulse, for
example, will make the motor turn to the 90 degree position (often called the neutral
position). If the pulse is shorter than 1.5 ms, the motor will turn the shaft closer to
0 degrees. If the pulse is longer than 1.5 ms, the shaft turns closer to 180 degrees.




Figure 3.10: Example pulses and the corresponding rotation of a servo shaft
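
This timing can be written as a small helper that maps a requested angle to a pulse
width. The 1.0 ms and 2.0 ms end points below are a common assumption; the exact
values vary between servo models, so they would need to be calibrated for the servo
actually used.

    #include <cstdio>

    // Clamp the angle to 0-180 degrees and map it linearly onto a 1000-2000 us pulse,
    // with 1500 us (1.5 ms) corresponding to the 90 degree neutral position.
    int angle_to_pulse_us(float angle_deg)
    {
        if (angle_deg < 0.0f)   angle_deg = 0.0f;
        if (angle_deg > 180.0f) angle_deg = 180.0f;
        return 1000 + (int)(angle_deg / 180.0f * 1000.0f);
    }

    int main()
    {
        std::printf("0 deg -> %d us, 90 deg -> %d us, 180 deg -> %d us\n",
                    angle_to_pulse_us(0.0f), angle_to_pulse_us(90.0f),
                    angle_to_pulse_us(180.0f));   // prints 1000, 1500 and 2000
        return 0;
    }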


According to the requirement analysis HITEC-HS-475HB servo motor is used in the
project. The technical specification of the servo is given below.




Technical specification:

      Operating Voltage: 4.8V/6.0V
      Speed @ 4.8V: 0.23 sec/60°
      Speed @ 6.0V: 0.18 sec/60°
      Torque @ 4.8V: 0.43 N.m
      Torque @ 6.0V: 0.54 N.m
      Motor Type: 3 Pole
      Bearing Type: Top Ball Bearing
      Weight: 1.41oz (40.0g)
      Dimensions: 38.8 x 19.8 x 36mm

According to the requirement analysis the system should be able to record video
footage from any direction. The HITEC-HS-475HB servo motor shaft can cover any
angle from 0 to 180 degrees. This is primarily enough for the project, which includes
three PIR sensors; to cover 0 to 360 degrees, two similar servo motors can be used.


To position the servo shaft at a specific angle, it is enough to apply the corresponding
pulse width modulated signal through the control wire. For mounting the camera the
HITEC-HS-475HB servo motor is well suited: the motor shaft is well made and the
camera is easy to mount on it. The torque of the HITEC-HS-475HB servo motor is
76.37 oz-in (5.5 kg/cm) @ 6.0 V, which is more than enough to rotate the camera.
The HITEC-HS-475HB servo motor typically consumes 200 mA during rotation with
an ideal load, which is low and cost effective; current consumption may increase
with load.

3.4.1.4 FitPC2


For video capturing and transmission a fitPC2 running Windows 7 is selected. The
specification of the fitPC2 is as follows:

      Intel Atom Z530 1.6GHz
      Memory 1GB DDR2-533MHz
      Internal bay for 2.5″ SATA HDD 160 GB


   Intel GMA500 graphics acceleration
      Wireless LAN - 802.11g WLAN
      6 USB 2.0 High Speed ports
      Windows 7 Professional
      Case - 100% aluminium Die cast body
      Dimensions-101 x 115 x 27 mm
                      4″ x 4.5″ x 1.05″
      Weight - 370 grams / 13 ounces – including hard disk
      Operating Temperature - 0 – 45 deg C with hard disk
                                     0 – 70 deg C with SSD
      Power- 12V single supply
                  8-15V tolerant
      Power Consumption- 6W at low CPU load
                                <7W at 1080p H.264 playback
                                8W at full CPU load
                                0.5W at standby

The small size, low weight and low power consumption of the fitPC2 will make the
video surveillance system smaller, lighter and able to operate for long hours.
Moreover, a USB 2.0 connection can be used to communicate with the
microcontroller via the virtual serial port to receive the trigger signal. The built-in
WLAN 802.11g card will allow the video surveillance system to transmit recorded
video footage over a wireless network, and the wide temperature range will allow the
PC to be used in different environmental conditions. There is no cooling fan inside it,
so it works almost silently in any place. In figure 3.11 the fitPC2 is shown.




                               Figure 3.11: fitPC2 [35]

3.4.1.5 Logitech Webcam C120



The Logitech webcam C120 is selected for video recording. This webcam has the
following specifications:

      Video capture: up to 800 x 600 pixels
      Up to 30 frames per second video
      Hi-Speed USB 2.0 communication
      Resolution up to 1.3 megapixel
      Weight 100 gm

800 x 600 pixels at 30 frames per second will give high quality video footage for
the video surveillance system, and the webcam can transfer data quickly using the
USB 2.0 port. The light weight of the camera makes it easy to mount on the servo
motor. The minimum system requirement of a PC to connect this webcam is a 1 GHz
CPU and 256 MB of RAM with the Windows XP operating system, so the webcam
will perform well with the fitPC2. In figure 3.12 the selected Logitech C120 webcam
is shown.




Figure 3.12: Logitech C120 webcam [36]

3.4.1.6 Power Supply



For the power supply a 12 V, 24 Ah battery is chosen. The fitPC2 will be powered
directly from the 12 V supply. A power supply circuit will be used to step the voltage
down to power the microcontroller, PIR sensors and servomotor.




3.4.2 Software Selection


The Microsoft .NET Framework provides a huge collection of libraries for secure
communication and solutions to common programming problems. Its features include
multi-language interoperability, a virtual machine and the common language runtime
(CLR).

The common language runtime (CLR) is a major component of the .NET Framework.
The user does not need to worry about how a program executes on a specific system:
the CLR deals with all CPU-dependent operations while executing a program. A
program written in any supported language is compiled into bytecode called the
Common Intermediate Language (CIL), and at runtime this is translated again for the
specific system platform. Figure 3.13 shows the operation of the CLR. The CLR
therefore helps programmers write programs with less effort, without having to deal
with memory management, security, garbage collection, exception handling and
thread management themselves.




                 Figure 3.13: Operation of CLR in .Net Framework

It also allows the developers to apply common skills across a variety of devices,
application types, and programming tasks. It can integrate with other tools and
technologies to build the right solution with less work.

3.4.2.1 .Net Framework 4:


The .NET Framework is Microsoft's comprehensive and consistent programming
model for building applications that have visually stunning user experiences,
seamless and secure communication, and the ability to model a range of business
processes.
The .NET Framework 4 works side by side with older Framework versions.
Applications that are based on earlier versions of the Framework will continue to run
on the version targeted by default.


System Hardware Requirements:

Recommended Minimum: Pentium 1 GHz or higher with 512 MB RAM or more
Minimum disk space:
x86 – 850 MB
x64 – 2 GB
3.4.2.2 Microsoft C#


Microsoft C# is an object-oriented programming language designed for Windows
graphical programming. Object orientation is a structured method for solving
problems, and mental models can easily be transferred into programs using object-
oriented programming. Another attractive benefit of an object-oriented language is
the ease of code reuse and maintenance. Against these benefits, object-oriented
programs tend to be larger and more time consuming to develop. An object-oriented
language defines objects with their own properties, and classes as sets of objects with
common behaviour.

3.4.2.3 Direct Show API


Microsoft DirectShow is an architecture for streaming media on the Microsoft
Windows platform. DirectShow provides for high-quality capture and playback of
multimedia streams. It supports a wide variety of formats, including Advanced
Systems Format (ASF), Motion Picture Experts Group (MPEG), Audio-Video
Interleaved (AVI), MPEG Audio Layer-3 (MP3), and WAV sound files. It supports
capture from digital and analogue devices based on the Windows Driver Model
(WDM) or Video for Windows. It automatically detects and uses video and audio
acceleration hardware when available, but also supports systems without
acceleration hardware.

DirectShow is based on the Component Object Model (COM). DirectShow is
designed for C++. Microsoft does not provide a managed API for DirectShow.

DirectShow simplifies media playback, format conversion, and capture tasks. At the
same time, it provides access to the underlying stream control architecture for



applications that require custom solutions. You can also create your own
DirectShow components to support new formats or custom effects.
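
For illustration, the following minimal C++ sketch builds the kind of capture graph
described above: it enumerates the first video capture device, adds it to a filter graph
and renders a preview stream. It is a simplified example with minimal error handling
(link against strmiids.lib and ole32.lib), not the project's actual capture software,
which is written in C# (section 3.4.2.2).

    #include <windows.h>
    #include <dshow.h>

    int main()
    {
        if (FAILED(CoInitialize(NULL))) return 1;

        IGraphBuilder* graph = NULL;
        ICaptureGraphBuilder2* builder = NULL;
        CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void**)&graph);
        CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICaptureGraphBuilder2, (void**)&builder);
        if (!graph || !builder) return 1;
        builder->SetFiltergraph(graph);

        // Find the first WDM video capture device; it appears under the driver's name.
        ICreateDevEnum* devEnum = NULL;
        IEnumMoniker* enumMoniker = NULL;
        IMoniker* moniker = NULL;
        IBaseFilter* capture = NULL;
        CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICreateDevEnum, (void**)&devEnum);
        devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &enumMoniker, 0);
        if (!enumMoniker || enumMoniker->Next(1, &moniker, NULL) != S_OK) return 1;
        moniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&capture);
        graph->AddFilter(capture, L"Video Capture");

        // Connect the capture device to a default video renderer and run the graph.
        builder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                              capture, NULL, NULL);
        IMediaControl* control = NULL;
        graph->QueryInterface(IID_IMediaControl, (void**)&control);
        control->Run();

        Sleep(10000);                    // preview for ten seconds
        control->Stop();

        control->Release();  capture->Release();  moniker->Release();
        enumMoniker->Release();  devEnum->Release();
        builder->Release();  graph->Release();
        CoUninitialize();
        return 0;
    }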

3.4.2.4 Video Capture Devices


Most new video capture devices use Windows Driver Model (WDM) drivers. In the
WDM architecture, Microsoft supplies a set of hardware-independent drivers, called
class drivers, and the hardware vendor provides hardware-specific minidrivers. A
minidriver implements any functions that are specific to the device; for most
functions, the minidriver calls the Microsoft class driver.

In a DirectShow filter graph, any WDM capture device appears as the WDM Video
Capture filter. The WDM Video Capture filter configures itself based on the
characteristics of the driver. It appears under a name provided by the driver; you will
not see a filter called "WDM Video Capture Filter" anywhere in the graph.

Some older capture devices still use Video for Windows (VFW) drivers. Although
these drivers are now obsolete, they are supported in DirectShow through the VFW
Capture filter.

3.5 Cost Analysis

After selecting the hardware and software, the system budget is estimated in table
3.2. The list includes only the selected components that will be needed to complete
the project. Other components like resistors, capacitors and op-amps should not
drastically raise the budget, and these components are available in the laboratory.
                              Table 3.2: Price list of components.

                Component List                                   Price (£)
              mbed microcontroller                                   45
                    Camera                                           8
                  PIR Sensor                                         21
                  Servo Motor                                        7
                     fitPC2                                          270



The total budget of the project will not exceed £500.

3.6 System Implementation


Before constructing the whole system, each subsystem is first constructed and tested.
After constructing and testing all subsystems, the whole system is built by integrating
them. The following sections describe how the prototype of each subsystem is
designed and tested.

3.6.1 Hardware Implementation

In this video surveillance system the microcontroller will perform three tasks:

   1. It will take input from the PIR sensor
   2. It will control the servomotor
   3. It will send trigger signal to the PC via virtual serial port.

Thus for each purpose a prototype subsystem is designed and tested. The first
subsystem is for testing the microcontroller input/output ports, the second subsystem
is for controlling the servo motor, and the third subsystem is for establishing
communication with the PC. How each subsystem is constructed and tested is
described below.

The microcontroller will take input from the PIR sensors, so its input ports are tested
using three LEDs, three switches and a power source. From the microcontroller's
digital input/output pins, three input pins are selected, and for each selected input pin
a corresponding output pin is selected. These input pins are pin 16, pin 17 and pin 18,
and their corresponding output pins are pin 26, pin 25 and pin 24 respectively. Three
switches are connected to the three input pins, and a 5 V source is connected to the
switches. Thus when a switch is on, the corresponding input pin receives a 5 V signal,
which is a logic 1 digital input. Three LEDs are connected through 220 ohm resistors
to the three selected output pins. The LEDs that are used require 5 V and 20 mA to
turn on; a current of more than 20 mA would burn out an LED, so a 220 ohm resistor
is connected in series with each LED to limit the current. Now, when one switch is
turned on, the corresponding LED also turns on, which confirms that the
microcontroller is functioning properly for digital input. The schematic diagram of
the circuit is given in figure 3.14.



Figure 3.14: Schematic diagram of input/output test circuit.


After constructing the circuit, the program in appendix 10.1 is written in the mbed
online compiler, compiled and downloaded to the mbed microcontroller. The
explanation of this program is given in the appendix. After downloading, the program
is executed and runs successfully on the microcontroller: each LED lights up when its
corresponding switch is turned on.
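
A minimal sketch of the kind of input/output test program described here is shown
below; the project's actual program is listed in appendix 10.1, and the pin assignments
follow figure 3.14.

    #include "mbed.h"

    DigitalIn  in1(p16), in2(p17), in3(p18);     // switches (later replaced by PIR sensors)
    DigitalOut out1(p26), out2(p25), out3(p24);  // LEDs through 220 ohm series resistors

    int main()
    {
        while (1) {
            out1 = in1;    // each LED simply follows its switch / sensor input
            out2 = in2;
            out3 = in3;
        }
    }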

After successful execution of the input/output test program, three PIR sensors are
connected in place of the three switches. All PIR sensors are set to the retrigger mode
of operation (see table 3.1), so for continuous movement the output of a PIR sensor
remains at logic high, and when there is no movement the output is at logic low; it
therefore behaves like a switch being turned on or off. After completing the circuit,
the input/output program is run again on the microcontroller and the test is completed
successfully: when any movement is detected by the PIR sensors, their corresponding
LEDs turn on. The schematic diagram of the subsystem with the PIR sensors
connected to the input ports is given in figure 3.15.




                                        49
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement
Video surveillance system detects movement

More Related Content

What's hot

CCTV Security Cameras - Basics
CCTV Security Cameras - Basics CCTV Security Cameras - Basics
CCTV Security Cameras - Basics Emmanuel Kirui
 
Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)
Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)
Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)Corporate Services
 
Video Surveillance
Video SurveillanceVideo Surveillance
Video SurveillanceMihika Shah
 
Night Vision Technology
Night Vision TechnologyNight Vision Technology
Night Vision TechnologyManoj Kumar
 
Shri pps
Shri ppsShri pps
Shri ppslshri
 
Real time image processing ppt
Real time image processing pptReal time image processing ppt
Real time image processing pptashwini.jagdhane
 
Detailed Study of CCTV Cameras
Detailed Study of CCTV CamerasDetailed Study of CCTV Cameras
Detailed Study of CCTV CamerasHans Khanna
 
iMouse
iMouseiMouse
iMouseeeshak
 
Night vision technology
Night vision technologyNight vision technology
Night vision technologyshreedevi456
 
Video Surveillance Report
Video Surveillance ReportVideo Surveillance Report
Video Surveillance ReportMihika Shah
 
CCTV System - Close circuit television System - UCJ
CCTV System - Close circuit television System - UCJCCTV System - Close circuit television System - UCJ
CCTV System - Close circuit television System - UCJPaheerathan Sabaratnam
 
Ip cameras vs analog cameras
Ip cameras vs analog camerasIp cameras vs analog cameras
Ip cameras vs analog camerassiscomtech
 
Sniffer for Detecting Lost Mobile
Sniffer for Detecting Lost MobileSniffer for Detecting Lost Mobile
Sniffer for Detecting Lost MobileSeminar Links
 
Introduction to cctv_installation_course
Introduction to cctv_installation_courseIntroduction to cctv_installation_course
Introduction to cctv_installation_courseFire alarm engineer job
 
Ip Cameras
Ip CamerasIp Cameras
Ip CamerasirisZH21
 
CCTV Camera Presentation
CCTV Camera PresentationCCTV Camera Presentation
CCTV Camera PresentationBasith JM
 
"Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec...
"Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec..."Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec...
"Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec...Edge AI and Vision Alliance
 

What's hot (20)

CCTV Security Cameras - Basics
CCTV Security Cameras - Basics CCTV Security Cameras - Basics
CCTV Security Cameras - Basics
 
Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)
Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)
Close Circuit Television (CCTV SURVEILLANCE SYSTEMS)
 
Video Surveillance
Video SurveillanceVideo Surveillance
Video Surveillance
 
Cctv presentation
Cctv presentationCctv presentation
Cctv presentation
 
Night Vision Technology
Night Vision TechnologyNight Vision Technology
Night Vision Technology
 
Shri pps
Shri ppsShri pps
Shri pps
 
Real time image processing ppt
Real time image processing pptReal time image processing ppt
Real time image processing ppt
 
Detailed Study of CCTV Cameras
Detailed Study of CCTV CamerasDetailed Study of CCTV Cameras
Detailed Study of CCTV Cameras
 
iMouse
iMouseiMouse
iMouse
 
Night vision technology
Night vision technologyNight vision technology
Night vision technology
 
Video Surveillance Report
Video Surveillance ReportVideo Surveillance Report
Video Surveillance Report
 
CCTV System - Close circuit television System - UCJ
CCTV System - Close circuit television System - UCJCCTV System - Close circuit television System - UCJ
CCTV System - Close circuit television System - UCJ
 
Night vision Technology
Night vision TechnologyNight vision Technology
Night vision Technology
 
Ip cameras vs analog cameras
Ip cameras vs analog camerasIp cameras vs analog cameras
Ip cameras vs analog cameras
 
Sniffer for Detecting Lost Mobile
Sniffer for Detecting Lost MobileSniffer for Detecting Lost Mobile
Sniffer for Detecting Lost Mobile
 
Introduction to cctv_installation_course
Introduction to cctv_installation_courseIntroduction to cctv_installation_course
Introduction to cctv_installation_course
 
Mini Project- Face Recognition
Mini Project- Face RecognitionMini Project- Face Recognition
Mini Project- Face Recognition
 
Ip Cameras
Ip CamerasIp Cameras
Ip Cameras
 
CCTV Camera Presentation
CCTV Camera PresentationCCTV Camera Presentation
CCTV Camera Presentation
 
"Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec...
"Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec..."Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec...
"Intelligent Video Surveillance: Are We There Yet?," a Presentation from Chec...
 

Similar to Video surveillance system detects movement

Dual-Band Mobile Phone Jammer
Dual-Band Mobile Phone JammerDual-Band Mobile Phone Jammer
Dual-Band Mobile Phone JammerMohamed Atef
 
Realtimesamplingofutilization
RealtimesamplingofutilizationRealtimesamplingofutilization
RealtimesamplingofutilizationVicente Nava
 
ImplementationOFDMFPGA
ImplementationOFDMFPGAImplementationOFDMFPGA
ImplementationOFDMFPGANikita Pinto
 
Evaluation for the DfES video conferencing in the classroom ...
Evaluation for the DfES video conferencing in the classroom ...Evaluation for the DfES video conferencing in the classroom ...
Evaluation for the DfES video conferencing in the classroom ...Videoguy
 
Master Arbeit_Chand _Piyush
Master Arbeit_Chand _PiyushMaster Arbeit_Chand _Piyush
Master Arbeit_Chand _PiyushPiyush Chand
 
Mixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksMixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksVideoguy
 
online examination management system
online examination management systemonline examination management system
online examination management systemPraveen Patel
 
Students in the director's seat: Teaching and learning across the school curr...
Students in the director's seat: Teaching and learning across the school curr...Students in the director's seat: Teaching and learning across the school curr...
Students in the director's seat: Teaching and learning across the school curr...Matthew Kearney
 
digiinfo website project report
digiinfo website project reportdigiinfo website project report
digiinfo website project reportABHIJEET KHIRE
 
Work Measurement Application - Ghent Internship Report - Adel Belasker
Work Measurement Application - Ghent Internship Report - Adel BelaskerWork Measurement Application - Ghent Internship Report - Adel Belasker
Work Measurement Application - Ghent Internship Report - Adel BelaskerAdel Belasker
 
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Table of Contents

Abstract .......... i
Acknowledgement .......... ii
1 Introduction .......... 1
1.1 Background .......... 1
1.2 Aims and Objectives .......... 2
1.2.1 Objectives .......... 2
1.2.2 Deliverables .......... 2
1.3 Distribution of Work .......... 3
1.4 Project Planning .......... 6
2 Literature Review .......... 9
2.1 Video Surveillance System .......... 9
2.1.1 Introduction .......... 9
2.1.2 Classification of video surveillance system .......... 10
2.2 Transmission of Video Footage .......... 11
2.2.1 Wireless Networking .......... 11
2.2.2 Ad-hoc Network .......... 12
2.2.3 Bluetooth Transmission Technology .......... 17
2.2.4 Protocol issues .......... 18
2.2.5 TCP/IP model .......... 19
2.2.6 Live streaming .......... 23
2.2.7 Professional Video Surveillance Software .......... 23
2.2.8 Encryption .......... 27
2.2.9 Video Encryption .......... 28
3 System Concept, Design and Implementation .......... 30
3.1 Requirement analysis .......... 30
3.2 Selection of Hardware .......... 31
3.2.1 Selection of Sensor .......... 31
3.2.2 Selection of Video Camera .......... 32
3.2.3 Selection of Motor .......... 32
3.2.4 Selection of PC .......... 32
3.2.5 Selection of Microcontroller .......... 33
3.2.6 Selection of Power Source .......... 33
3.3 Software Design .......... 33
3.3.1 Video Capturing Software Requirement .......... 34
3.4 Component Selection and Features .......... 35
3.4.1 Hardware Component Selection .......... 35
3.4.1.1 PIR Sensor .......... 35
3.4.1.2 Mbed Rapid Prototyping Board .......... 38
3.4.1.3 Servo Motor .......... 40
3.4.1.4 FitPC2 .......... 42
3.4.1.5 Logitech Webcam C120 .......... 44
3.4.1.6 Power Supply .......... 44
3.4.2 Software Selection .......... 45
3.4.2.1 .Net Framework 4 .......... 45
3.4.2.2 Microsoft C# .......... 46
3.4.2.3 DirectShow API .......... 46
3.4.2.4 Video Capture Devices .......... 47
3.5 Cost Analysis .......... 47
3.6 System Implementation .......... 48
3.6.1 Hardware Implementation .......... 48
3.6.2 Software Development .......... 56
3.6.2.1 Video Capturing .......... 56
3.6.2.2 Client Server Transmission Software .......... 63
3.6.3 System Integration .......... 83
4 Pilot Work .......... 93
4.1 The Nature of the Work .......... 93
4.2 The Objectives of the Pilot Work .......... 93
4.3 Selection of Places .......... 93
4.3.1 Pilot Study 1 .......... 94
4.3.2 Pilot Study 2 .......... 95
4.3.3 Pilot Study 3 .......... 95
4.4 Results and Discussion .......... 96
5 Critical Analysis .......... 102
6 Parallel Development .......... 104
6.1 Laser Range Finder URG-04LX-UG01 .......... 104
6.2 Hokuyo URG-04LX LRF Analysis .......... 106
6.3 Developing Program for Hokuyo .......... 107
7 Future Work .......... 118
8 Conclusion .......... 120
9 References .......... 122
10 Appendices .......... 125
List of Figures

Figure 1.1: Project Gantt chart part 1 .......... 7
Figure 1.2: Project Gantt chart part 2 .......... 8
Figure 2.1: Ad-hoc Network .......... 12
Figure 2.2: A multi-node ad-hoc network .......... 13
Figure 2.3: The hidden station problem .......... 15
Figure 2.4: Eyeline video software .......... 24
Figure 2.5: Find and Play Recordings Window .......... 24
Figure 2.6: Video Recordings .......... 25
Figure 2.7: VideoLAN Streaming Solution .......... 26
Figure 3.1: Block diagram of video surveillance system .......... 31
Figure 3.2: General system architecture of PIR sensor .......... 35
Figure 3.3: Parallax PIR sensor .......... 36
Figure 3.4: Jumper Pin (H and L) Position .......... 36
Figure 3.5: Waveform of PIR sensor output for retrigger mode of operation .......... 37
Figure 3.6: Waveform of PIR sensor output for normal mode of operation .......... 37
Figure 3.7: mbed Microcontroller .......... 39
Figure 3.8: Servo motor .......... 40
Figure 3.9: Internal circuit of a servo motor .......... 41
Figure 3.10: Some random pulses and the corresponding rotation of a servo shaft .......... 41
Figure 3.11: fitPC2 .......... 43
Figure 3.12: Logitech C120 webcam .......... 44
Figure 3.13: Operation of CLR in .Net Framework .......... 45
Figure 3.14: Schematic diagram of input output test circuit .......... 49
Figure 3.15: Microcontroller and PIR sensor test circuit schematic diagram .......... 50
Figure 3.16: Experimental Figure .......... 50
Figure 3.17: Three main pulses and their corresponding rotation .......... 52
Figure 3.18: Connection diagram for serial communication .......... 53
Figure 3.19: TeraTerm terminal window .......... 54
Figure 3.20: Schematic Diagram of power supply unit .......... 55
Figure 3.21: Power supply unit .......... 55
Figure 3.22: Software Diagram .......... 56
Figure 3.23: Video Surveillance System GUI .......... 57
Figure 3.24: Communication in Local Network .......... 64
Figure 3.25: Communication through Internet .......... 65
Figure 3.26: Ad-hoc Communication between Two Systems .......... 65
Figure 3.27: Ad-hoc Communication using more nodes .......... 66
Figure 3.28: Sending Video from Client to Server using Multi-node Ad-hoc Network .......... 66
Figure 3.29: Log file's data .......... 69
Figure 3.30: Placing two flags on the frame edge .......... 83
Figure 3.31: Diagram to calculate camera coverage angle .......... 84
Figure 3.32: Camera mounted on the servo shaft .......... 85
Figure 3.33: 1st step of fixing rotation angle .......... 85
Figure 3.34: 2nd step of fixing rotation angle .......... 86
Figure 3.35: 1st attempt at reducing PIR coverage angle .......... 87
Figure 3.36: Fresnel lens .......... 87
Figure 3.37: Solution for reducing PIR coverage angle .......... 88
Figure 3.38: Calculating PIR cover length for any angle Ө .......... 89
Figure 3.39: Identifying covered PIR angle .......... 90
Figure 3.40: Final connection diagram of the video surveillance system .......... 91
Figure 3.41: Camera and sensors mounted on the system .......... 92
Figure 3.42: Standalone system on a tripod .......... 92
Figure 4.1: Voltage and current of the system for continuous operation .......... 97
Figure 4.2: Voltage and current of the system for discontinuous operation .......... 97
Figure 4.3: Voltage and current of the system for idle condition .......... 97
Figure 4.4: Screen Shot of Recorded Video Footage in Over-Crowded Place .......... 98
Figure 4.5: Screen Shot of Recorded Video Footage in Less-Crowded Place .......... 98
Figure 6.1: Hokuyo URG-04LX Laser Rangefinder .......... 104
Figure 6.2: Internal mechanism of Hokuyo LRF .......... 105
Figure 6.3: Range detection area of URG-04LX LRF .......... 106
Figure 6.4: Range diagram of an empty room .......... 107
Figure 6.5: Range diagram of a room with 1 person .......... 107
Figure 6.6: Detection of movement in an empty room when a person passed .......... 116
Figure 6.7: Detection of movement in a busy room when people are frequently passing .......... 116
List of Tables

Table 3.1: Mode of operation of PIR sensor .......... 37
Table 3.2: Price list of components .......... 47
Table 4.1: Data for Different Speed .......... 99
Table 4.2: Data for Detection Range and Area of Each Sensor .......... 100
Table 4.3: Experimental Data of Transmission Time .......... 100
Table 4.3: Experimental Data of Transmission Time .......... 101
Table 4.4: Experimental Data of Transmission Range .......... 101
Table 4.5: Experimental Data of Robustness Test .......... 101
Chapter 1

1 Introduction

1.1 Background

Over the last few years, globalization has brought major changes to many sectors worldwide, such as business, security and health. Security is now a worldwide concern. With the growing need to protect premises, providing security has become one of the most important tasks, and video surveillance systems were introduced for this purpose. A video surveillance system is used to monitor the behaviour, activity or other information, generally of people, in a specific area. The application of video surveillance is no longer limited to securing an area: such systems are now used in other sectors as well, for example in hospitals to monitor patients and in industrial and process plants to monitor the activity of a production line.

Generally, a video surveillance system consists of a video camera that captures footage and a monitor on which the footage is viewed. Early models of such systems had limitations, so research has been conducted to improve them and more developed systems have reached the market. Current systems offer features such as video capturing and recording. A feature introduced over the last few years is the transmission of video footage over wireless communication, which makes it possible to place the system in almost any area and therefore to monitor places where a human presence cannot be maintained at all times. Building such a system, and implementing it with full automation, is still expensive. This project aims to design and construct a standalone video surveillance system that can detect the movement of any object, record video footage of that object and transmit that footage over an Ad-Hoc network to a server.
1.2 Aims and Objectives

The aim of this project is to design and construct a video surveillance system that can capture video footage of a home, office or any other premises. The system is able to detect any object in motion within a particular area and to capture video footage triggered by that motion as it happens. The system is also able to transmit the recorded video footage via a wireless Ad-Hoc network to a server. The system will be standalone and will be able to function for one week unattended; it must therefore be extremely robust and able to recover from any errors without intervention.

1.2.1 Objectives

The objectives of the project are:
• Interfacing sensors with a microcontroller for motion detection.
• Position and speed control of a motor to achieve a precise angle.
• Interfacing the microcontroller with a PC.
• Developing software for capturing video footage.
• Developing a client-server video streaming system that transmits the recently stored video and broadcasts it through the network. The client system will be able to transmit stored video and the server system will be able to receive it.
• A complete battery system and power supply unit to power the video surveillance system.
• Constructing a robust shell for the system.

1.2.2 Deliverables

To achieve the aim of the project, the deliverables are:
• A program that tests the microcontroller I/O ports by turning on an LED connected to an output port when an input is given to an input port.
• A motion detection circuit with three motion detector sensors interfaced with the microcontroller. Three LEDs will be connected to the output ports of the microcontroller, and each LED will turn on when motion is detected by the associated sensor.
• A program that sends a few bytes of information from the microcontroller to the PC, where the information is displayed.
• A program that scans ports for data and shows the data.
• Software with a Graphical User Interface (GUI) consisting of start and stop buttons for capturing video footage. It will save the footage, stamped with the date and time, in a specific format into a specific folder.
• A circuit with a motor and the microcontroller, in which the microcontroller controls the motor to a precise angle.
• A program that turns the motor shaft to a specific position and controls the rotation angle precisely.
• A frame that holds the sensors, motor and camera.
• Client-side software that transmits the recorded video footage.
• Server-side software that receives the transmitted video and streams it to the local network.

1.3 Distribution of Work

This is a final year group project carried out by four members. The names and IDs of the group members are:
1. Mohammed Mohiuddin (MM) - 09009216
2. Sheikh Faiyaz Moorsalin (SFM) - 09009214
3. Md Nadeem Chowdhury (MNC) - 09009217
4. Sohana Mahmud (SM) - 09009215

Each member of the group was assigned individual objectives and deliverables in order to fulfil the overall objectives and deliverables of the project. At the end of the project each member submitted a report on their own work.
This report has been compiled from the individual reports of each member, but in a group-wise manner. The individual objectives and deliverables within the project, and each member's contribution to this report, are given below against their initials.

Mohammed Mohiuddin (MM)

Objectives:
• Interfacing a PIR (Passive Infrared) sensor with the microcontroller for motion detection.
• Interfacing the microcontroller with an embedded PC.

Deliverables:
1. A program that tests the microcontroller I/O ports by turning on an LED connected to an output port when an input is given to an input port.
2. A motion detection circuit with three motion detector sensors interfaced with the microcontroller. Three LEDs will be connected to the output ports of the microcontroller, and each LED will turn on when motion is detected by the associated sensor.
3. A program that sends a few bytes of information from the microcontroller to the PC, where the information is displayed.

Contribution to the report: abstract, introduction, literature review (video surveillance system), system concept and design, hardware requirement analysis, hardware selection criteria (sensor, microcontroller, PC, webcam), selection of hardware components (PIR sensor, mbed microcontroller, fitPC2, Logitech webcam C120), cost analysis, hardware implementation (PIR sensor, fitPC2, mbed microcontroller, power supply unit), hardware and software integration, pilot work, critical analysis (hardware part), future work and conclusion.
Sheikh Faiyaz Moorsalin (SFM)

Objectives:
• Developing software for capturing video footage.

Deliverables:
1. A program that scans ports for data and shows the data.
2. Software with a Graphical User Interface (GUI) consisting of start and stop buttons for capturing video. After the stop button is pressed, it saves the video footage, stamped with the date and time, in a specific video format into a specific folder.

Contribution to the report: software requirement analysis, software design for video capturing, and future work for software improvement.

Md Nadeem Chowdhury (MNC)

Objectives:
• Interfacing the motor, motor controller and microcontroller.
• Position and speed control of a motor.
• Detecting motion using the Hokuyo URG laser range finder.

Deliverables:
1. A circuit with the motor, motor controller and microcontroller, in which the microcontroller controls the motor via the motor controller.
2. A program that turns the motor shaft to a specific position and controls the rotation angle precisely.
3. A frame that holds the sensors, motor and camera.
4. A program that scans for and detects movement using the Hokuyo URG laser range finder.
Contribution to the report: motor selection (servo motor), motor control method, system implementation (servo motor), robust shell design, cover design to reduce the PIR coverage angle, and axis alignment of the Logitech webcam and PIR sensor; also future work on improving the movement detection performance of the system using the Hokuyo laser sensor.

Sohana Mahmud (SM)

Objective:
• Developing a client-server video streaming system that transmits the recently stored video and broadcasts it through the network. The client system will be able to transmit stored video and the server system will be able to receive it.

Deliverables:
1. Client-side software that transmits recorded video over an Ad-Hoc network.
2. Server-side software that receives the video and streams it to the local network.

Contribution to the report: literature review (transmission of video footage), software design for transmission, critical analysis (transmission software), and future work for software improvement.

1.4 Project Planning

The Gantt chart of the project is given in Figure 1.1 and Figure 1.2.
Figure 1.1: Project Gantt chart (part 1)

Figure 1.2: Project Gantt chart (part 2)
Chapter 2

2 Literature Review

2.1 Video Surveillance System

2.1.1 Introduction

Video surveillance was first used in 1960 to monitor the crowds attracted by the arrival of the Thai royal family in Trafalgar Square, London [1]. Early video surveillance systems were entirely analogue and are known as CCTV (closed circuit television). A basic CCTV system consists of a collection of video cameras, usually mounted in fixed positions, and its surveillance area depends on the field of view of those cameras. The captured footage is transmitted to a central location, where it is displayed on one or several monitors or recorded to a storage device. In a CCTV system, the person in charge at the central location monitors the activities in the surveillance area and decides whether there is an intruder or any ongoing activity that warrants a response.

With the invention of digital technology these systems started to change in the latter half of the 20th century. Present video surveillance systems are fully digital and automated. An automated system introduces automatic object detection and can generate an alarm or a message that lets the user know if there is an intruder in the surveillance area, reducing the burden on the user. Such a system can be based on a PC or on embedded devices and combines monitoring with multimedia management. With developments in the communication sector, such as wireless communication and broadband, the way video footage is transmitted from a surveillance area has also changed: many systems now use wireless communication for transmission instead of cables.
2.1.2 Classification of video surveillance systems

Video surveillance technologies can be classified by their movement detection technique. Two techniques are widely used for object detection:
1. Video surveillance using motion detection by sensors.
2. Video surveillance using motion detection by image processing.

1. Video surveillance using motion detection by sensors: This type of system uses different types of sensor for object detection. Sensors widely used for detecting objects include passive infrared (PIR), ultrasonic and microwave sensors. Among these, PIR sensors are widely used in surveillance systems and automatic light switching systems. In 2004, a PIR-based intruder system was designed at the University of Malaya. The system can track occupants in a designated area, switch on an alarm when an intrusion occurs, notify the client of the intrusion and provide a real-time monitoring function from a personal computer over the internet [2]. The system uses three PIR sensors to detect movement; the outputs of the PIR sensor modules are wired to a microcontroller, which acts as the heart of the system and processes the sensor signals. The signal is then sent to a PC using an FM transmitter and analysed further by software that sends an alarm or message to the user. The implemented system is able to detect the presence of a human in a protected area at a maximum distance of around 7 metres. In April 2006, Mitsubishi Electric Corporation proposed a design for a video surveillance system with an object-aware video transcoder [3]; the proposed system not only stores high-quality video data but also transmits it over a limited-bandwidth network. In 2002, at the College of Sciences, Massey University, a security system was developed using motion detection. The system used PIR sensors interfaced to a microcontroller [4], and software was developed to add appropriate behaviour to the security platform. In 2008, Ying-Wen Bai and Hen Teng of the Department of Electronic Engineering, Fu Jen Catholic University, designed a home surveillance system that includes an ARM processor together with a web camera and a PIR sensor. The system triggers the web camera in the presence of an intruder in order to capture a snapshot and send it to a remote server [5].

2. Video surveillance using motion detection by image processing: In this type of system, movement is detected by comparing successive images of the surveillance area. If two images are the same, no movement is detected; when an intruder enters the surveillance area, the result of the comparison reveals the intruder, and movement is therefore detected. Many different algorithms are available for detecting movement using image processing.
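As a simple illustration of the image-processing approach (this project itself uses sensor-based detection), the C# sketch below compares two grayscale frames pixel by pixel and reports motion when enough pixels change. It is only a minimal sketch of the frame-differencing idea, not an algorithm taken from the report; the frame size, threshold values and names are illustrative assumptions.

using System;

class FrameDifferenceDemo
{
    // Reports motion when enough pixels differ between two grayscale frames
    // of equal size (each frame flattened into a byte array).
    static bool MotionDetected(byte[] previous, byte[] current,
                               int pixelThreshold = 25, double changedFraction = 0.02)
    {
        int changed = 0;
        for (int i = 0; i < current.Length; i++)
            if (Math.Abs(current[i] - previous[i]) > pixelThreshold)
                changed++;
        return changed > current.Length * changedFraction;
    }

    static void Main()
    {
        var emptyScene = new byte[320 * 240];
        var sceneWithObject = new byte[320 * 240];
        for (int i = 0; i < 5000; i++)
            sceneWithObject[i] = 200;                                    // simulate an object entering the frame
        Console.WriteLine(MotionDetected(emptyScene, sceneWithObject));  // prints True
    }
}

In practice such comparisons are made on consecutive camera frames and combined with filtering to ignore noise and lighting changes; the sketch only shows the core comparison step.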
This project is concerned with video surveillance using movement detection by sensors, and this literature review helped to form an idea of the existing video surveillance systems.

2.2 Transmission of Video Footage

To build robust and appropriate software for the transmission of video footage over a wireless network, some existing protocols and architectures were reviewed. The reviews are as follows.

2.2.1 Wireless Networking

A wireless network is a type of computer network in which the interconnections between nodes are implemented without the use of wires. Such a network is set up by using radio-frequency signals to communicate among computers and other network devices. It is sometimes also referred to as a WiFi network or WLAN. This kind of network is popular nowadays because it is easy to set up and involves no cabling [6]. In simple terms it works as follows: two computers, each equipped with a wireless adapter, communicate through a wireless router. When a computer sends data, the binary data is encoded onto a radio frequency and transmitted via the wireless router; the receiving computer then decodes the signal back into binary data.
The two main components are the wireless router or access point and the wireless clients, which connect the wireless devices to other networks or to the internet.

2.2.2 Ad-hoc Network

Figure 2.1: Ad-hoc Network [7]

An ad hoc network, or MANET (Mobile Ad hoc NETwork), is a network composed only of nodes, with no access point. Messages are exchanged and relayed between the nodes. In fact, an ad hoc network can make communication possible even between two nodes that are not in direct range of each other: packets exchanged between these two nodes are forwarded by intermediate nodes using a routing algorithm. Hence, a MANET may spread over a larger distance, provided that its ends are interconnected by a chain of links between nodes (also called routers in this architecture). In the ad hoc network shown in the following figure, node A can communicate with node D via nodes B and C, and vice versa [8].
Figure 2.2: A multi-node ad-hoc network [8]

A sensor network is a special class of ad hoc network, composed of devices equipped with sensors to monitor temperature, sound or any other environmental condition. These devices are usually deployed in large numbers and have limited resources in terms of battery energy, bandwidth, memory and computational power [8].

Mode: No fixed infrastructure or base station is needed. Entities communicate with each other through multiple wireless links, and each node serves as a router to forward packets for others.

Power issues: Nodes are usually powered by batteries, so power-aware and energy-efficient algorithms can significantly improve the performance of such systems. Ad-hoc networks can consist of large numbers of unattended devices for which battery replacement is much more difficult [9]. An optimisation goal of routing is therefore to lower power usage so as to enhance network availability.

Long-term connectivity: One of the important problems is maintaining the long-term connectivity of the network.
Nodes are often homogeneous in terms of initial energy, while their workloads are unevenly distributed, causing some nodes to deplete their energy faster than others. More seriously, nodes closer to the sink always carry more workload than more distant nodes: they not only transmit their own data but also help to forward the data of others. Consequently, they are prone to failure because their energy is depleted first. This is the so-called "hot spot" problem [10][11], which has not yet been fully investigated.

Advantages and disadvantages

A wireless network offers important advantages with respect to its wired counterpart [8]:
• The main advantage is that a wireless network allows the machines to be fully mobile, as long as they remain in radio range.
• Even when the machines do not need to be mobile, a wireless network avoids the burden of having cables between them. From this point of view, setting up a wireless network is simpler and faster. In several cases, because of the nature and topology of the landscape, it is not possible or desirable to deploy cables: battlefields, search-and-rescue operations, or standard communication needs in ancient buildings, museums, public exhibitions, train stations or inter-building areas.
• While the immediate cost of a small wireless network (the cost of the network cards) may be higher than that of a wired one, extending the network is cheaper. As there are no wires, there is no cost for material, installation and maintenance. Moreover, changing the topology of a wireless network (adding, removing or displacing a machine) is easy.

On the other hand, there are some drawbacks that need to be considered [8]:
• The strength of the radio signal weakens with the square of the distance, hence the machines have a limited radio range and the scope of the network is restricted. This causes the well-known hidden station problem: consider three machines A, B and C, where A and C are both in radio range of B but not of each other. This may happen because the A-C distance is greater than the A-B and B-C distances, as in the figure, or because of an obstacle between A and C.
The hidden station problem occurs whenever C is transmitting: when A wants to send to B, A cannot hear that B is busy and that a message collision would occur, hence A transmits when it should not; and when B wants to send to A, it mistakenly thinks that the transmission will fail, hence B abstains from transmitting when it would not need to.

Figure 2.3: The hidden station problem [8]

• The site variably influences the functioning of the network: radio waves are absorbed by some objects (brick walls, trees, earth, human bodies) and reflected by others (fences, pipes, other metallic objects, and water). Wireless networks are also subject to interference from other equipment sharing the same band, such as microwave ovens and other wireless networks.
• Considering the limited range and possible interference, the data rate is often lower than that of a wired network. However, some standards nowadays offer data rates comparable to those of Ethernet.
• Due to limitations of the medium, it is not possible to transmit and to listen at the same time, so there are higher chances of message collisions. Collisions and interference make message losses more likely.
• Being mobile computers, the machines have limited battery and computation power. This may entail high communication latency: machines may be off most of the time (doze state, i.e. power-saving mode), turning on their receivers only periodically, so it is necessary to wait until they wake up and are ready to communicate.
• As data is transmitted over Hertzian waves, wireless networks are inherently less secure. In fact, transmissions between two computers can be eavesdropped on by any similar equipment that happens to be in radio range.

Routing protocols for ad hoc networks [8]

In ad hoc networks, to ensure the delivery of a packet from sender to destination, each node must run a routing protocol and maintain its routing tables in memory.

Reactive protocols: Under a reactive (also called on-demand) protocol, topology data is gathered only when needed. Whenever a node wants to know the route to a destination node, it floods the network with a route request message. This gives reduced average control traffic, with bursts of messages when packets need to be routed, and an additional delay due to the fact that the route is not immediately available.

Proactive protocols: In opposition, proactive (also called periodic or table-driven) protocols are characterized by the periodic exchange of topology control messages. Nodes periodically update their routing tables. Control traffic is therefore denser but constant, and routes are instantly available.

Hybrid protocols: Hybrid protocols have both a reactive and a proactive nature. Usually the network is divided into regions, and a node employs a proactive protocol for routing inside its near neighbourhood's region and a reactive protocol for routing outside this region.

The Optimized Link State Routing protocol: The Optimized Link State Routing (OLSR) protocol is a proactive link state routing protocol for ad hoc networks. The core optimization of OLSR is the flooding mechanism for distributing link state information, which is broadcast in the network by selected nodes called Multipoint Relays (MPR). As a further optimization, only partial link state is diffused in the network. OLSR provides optimal routes (in terms of number of hops) and is particularly suitable for large and dense networks.
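To make the idea of multi-hop, minimum-hop routes concrete, the C# sketch below runs a breadth-first search over a small link table that mirrors the A-B-C-D chain of Figure 2.2 and returns the route with the fewest hops, the metric OLSR optimises. It is a teaching illustration only, not an implementation of OLSR or of any protocol used in this project; the node names and link table are assumptions.

using System;
using System.Collections.Generic;

class MinHopRouteDemo
{
    // Breadth-first search over the link table gives a route with the fewest hops.
    static List<string> Route(Dictionary<string, string[]> links, string source, string destination)
    {
        var previousHop = new Dictionary<string, string> { [source] = null };
        var queue = new Queue<string>();
        queue.Enqueue(source);
        while (queue.Count > 0)
        {
            string node = queue.Dequeue();
            if (node == destination) break;
            foreach (string neighbour in links[node])
                if (!previousHop.ContainsKey(neighbour))
                {
                    previousHop[neighbour] = node;
                    queue.Enqueue(neighbour);
                }
        }
        if (!previousHop.ContainsKey(destination)) return null;      // nodes are not interconnected
        var path = new List<string>();
        for (string n = destination; n != null; n = previousHop[n])
            path.Insert(0, n);
        return path;
    }

    static void Main()
    {
        // Topology of Figure 2.2: A and D are out of range of each other
        // and can only communicate through the chain A-B-C-D.
        var links = new Dictionary<string, string[]>
        {
            ["A"] = new[] { "B" },
            ["B"] = new[] { "A", "C" },
            ["C"] = new[] { "B", "D" },
            ["D"] = new[] { "C" }
        };
        Console.WriteLine(string.Join(" -> ", Route(links, "A", "D")));  // A -> B -> C -> D
    }
}

A real ad hoc routing protocol additionally has to discover the link table itself (by flooding route requests or exchanging periodic control messages, as described above) and keep it up to date as nodes move.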
2.2.3 Bluetooth Transmission Technology

The dream of true, seamless, mobile data and voice communication with constant connectivity anywhere is quickly becoming a reality, and the wireless and computer industries are leading the way with components that will shape everyday life. In 1994, Ericsson Mobile Communications initiated a study to investigate the feasibility of a low-power, low-cost radio interface between mobile phones and their accessories. The aim of this study was to eliminate the cables between mobile phones and the PC Cards used to connect the phones to a computer for dial-up networking (DUN). In 1998, Intel, IBM, Toshiba, Ericsson and Nokia began developing a technology that would allow users to connect easily to mobile devices without cables [12]. This vision became a reality through the synergy of market leaders in laptop computing, telecommunications and core digital signal processing. 20 May 1998 marked the formation of the Bluetooth Special Interest Group (SIG), with the goal of designing a royalty-free, open-specification, de facto, short-range, low-power wireless communication standard, as well as a specification for small form factor, low-cost, short-range radio links between mobile PCs, mobile phones and other portable devices, codenamed Bluetooth.

The result was an open specification for a technology enabling short-range wireless voice and data communication anywhere in the world: a simple way to connect and communicate, without wires or cables, between electronic devices including computers, PDAs, cell phones, network access points and peripherals. Bluetooth operates in a globally available frequency band, ensuring communication compatibility worldwide. One of the primary advantages of the Bluetooth system is the ease with which computer vendors can integrate it into their products; other key benefits are low power, long battery life, low cost, low complexity, and wireless connectivity for personal space, peer-to-peer use, cable replacement, and seamless, ubiquitous connectivity. To achieve this goal, tiny, inexpensive, short-range transceivers are integrated into devices either directly or through an adapter such as a PC Card; add-on devices such as USB or parallel-port adapters are also available for legacy systems.
By establishing links in a more convenient manner, this technology adds tremendous benefits to the ease of sharing data between devices. One universal short-range radio link can replace the many proprietary cables that connect one device to another. Laptop and cellular users no longer require cumbersome cables to connect the two devices to send and receive email. Possible health risks from the radiated RF energy of cellular handsets are mitigated by the lower transmission power of a Bluetooth-enabled earpiece (the earpiece solution does not require the handset to be close to the head), and, unlike a traditional headset, the wireless earpiece frees the user from unnecessary wiring. As Bluetooth can provide seamless voice and data connections to virtually all sorts of personal devices, the human imagination is the only limit to the application options. Beyond un-tethering devices by replacing cables, this technology provides a universal bridge to existing data networks, allows users to form small private ad hoc wireless networks outside fixed network infrastructures, and enables users to connect to a wide range of computing and telecommunications devices easily and simply, without the need to buy, carry or connect cables. Bluetooth allows users to think about what they are working on, rather than how to make their technology work. International Data Corporation (IDC) forecast that in 2004, 103.1 million devices in the United States and 451.9 million devices worldwide would become Bluetooth enabled [12].

2.2.4 Protocol issues

Designing a network protocol to support streaming media raises many issues, such as:
• Datagram protocols, such as the User Datagram Protocol (UDP), send the media stream as a series of small packets. This is simple and efficient; however, there is no mechanism within the protocol to guarantee delivery. It is up to the receiving application to detect loss or corruption and recover data using error correction techniques. If data is lost, the stream may suffer a dropout.
• The Real-time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP) were specifically designed to stream media over networks. RTSP runs over a variety of transport protocols, while the latter two are built on top of UDP [13].
• Another approach that seems to combine the advantages of using a standard web protocol with the ability to stream even live content is HTTP adaptive bitrate streaming. HTTP adaptive bitrate streaming is based on HTTP progressive download but, contrary to that approach, the files are very small, so that they can be compared to the streaming of packets, much as when RTSP and RTP are used [14].
• Reliable protocols, such as the Transmission Control Protocol (TCP), guarantee correct delivery of each bit in the media stream. However, they accomplish this with a system of timeouts and retries, which makes them more complex to implement. It also means that when there is data loss on the network, the media stream stalls while the protocol handlers detect the loss and retransmit the missing data. Clients can minimize this effect by buffering data for display [15].
• Unicast protocols send a separate copy of the media stream from the server to each recipient. Unicast is the norm for most Internet connections, but it does not scale well when many users want to view the same program concurrently.
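To make the datagram trade-off above concrete, the short C# sketch below sends one video chunk as a UDP datagram. It is only an illustration of the protocol behaviour just described, not part of the transmission software developed in this project; the loopback address, port number and payload are invented for the example.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class UdpChunkDemo
{
    static void Main()
    {
        // Loopback address, port and payload are invented for this example.
        var receiver = new IPEndPoint(IPAddress.Loopback, 9000);
        using (var udp = new UdpClient())
        {
            byte[] chunk = Encoding.ASCII.GetBytes("video-chunk-0001");
            // Send() returns as soon as the datagram is handed to the network stack.
            // UDP gives no acknowledgement, so a lost chunk must be detected by the
            // receiving application (for example from sequence numbers) and either
            // skipped, causing a dropout, or recovered with error correction.
            udp.Send(chunk, chunk.Length, receiver);
        }
        Console.WriteLine("Datagram sent; delivery is not guaranteed.");
    }
}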
2.2.5 TCP/IP model

The TCP/IP model is a descriptive framework for computer network protocols created in the 1970s by DARPA, an agency of the United States Department of Defense. It evolved from ARPANET, which was the world's first wide area network and a predecessor of the Internet. The TCP/IP model is sometimes called the Internet Model or the DoD Model [16]. As with other communications protocols, TCP/IP is composed of layers:
• IP is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations, and the organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.
• TCP is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network, so TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.
• A socket is the name given to the package of subroutines that provides access to TCP/IP on most systems.

The TCP/IP model, or Internet Protocol Suite, describes a set of general design guidelines and implementations of specific networking protocols that enable computers to communicate over a network. TCP/IP provides end-to-end connectivity, specifying how data should be formatted, addressed, transmitted, routed and received at the destination. Protocols exist for a variety of different types of communication services between computers [16]. It is defined as a four-layer model, with the layers having names, not numbers, as follows:
• Application Layer (process-to-process): This is the scope within which applications create user data and communicate this data to other processes or applications on another host or the same host. The communication partners are often called peers. This is where the "higher level" protocols such as SMTP, FTP, SSH and HTTP operate.
• Transport Layer (host-to-host): The Transport Layer constitutes the networking regime between two network hosts, either on the local network or on remote networks separated by routers. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. This is where flow control, error correction and connection protocols such as TCP exist. This layer deals with opening and maintaining connections between Internet hosts.
• Internet Layer (internetworking): The Internet Layer has the task of exchanging datagrams across network boundaries. It is therefore also referred to as the layer that establishes internetworking; indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next IP router that has connectivity to a network closer to the final destination.
• Link Layer: This layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. It describes the protocols used to describe the local network topology and the interfaces needed to effect transmission of Internet Layer datagrams to next-neighbour hosts (cf. the OSI Data Link Layer).
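As an illustration of the socket interface mentioned above, the sketch below opens a TCP connection on the loopback interface and sends a few bytes, which TCP delivers reliably to the listening side. It is written in C# purely as a hedged example; the port number and message are arbitrary, and this is not the client-server transmission software developed in this project.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class TcpSocketDemo
{
    static void Main()
    {
        // Listener and client both run on the loopback interface; the port is arbitrary.
        var listener = new TcpListener(IPAddress.Loopback, 9100);
        listener.Start();

        Task serverTask = Task.Run(() =>
        {
            using (TcpClient accepted = listener.AcceptTcpClient())
            using (NetworkStream stream = accepted.GetStream())
            {
                var buffer = new byte[64];
                int read = stream.Read(buffer, 0, buffer.Length);
                Console.WriteLine("Server received: " + Encoding.ASCII.GetString(buffer, 0, read));
            }
        });

        using (var client = new TcpClient("127.0.0.1", 9100))
        using (NetworkStream stream = client.GetStream())
        {
            byte[] data = Encoding.ASCII.GetBytes("hello over TCP");
            stream.Write(data, 0, data.Length);   // TCP acknowledges and retransmits as needed
        }

        serverTask.Wait();
        listener.Stop();
    }
}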
Addresses

Each technology has its own convention for transmitting messages between two machines within the same network. On a LAN, messages are sent between machines by supplying the six-byte unique identifier (the "MAC" address). In an SNA network, every machine has Logical Units with their own network addresses. DECNET, AppleTalk and Novell IPX all have schemes for assigning numbers to each local network and to each workstation attached to the network [16]. On top of these local or vendor-specific network addresses, TCP/IP assigns a unique number to every workstation in the world. This "IP number" is a four-byte value that, by convention, is expressed by converting each byte into a decimal number (0 to 255) and separating the bytes with a period.

An Uncertain Path [16]

Every time a message arrives at an IP router, the router makes an individual decision about where to send it next; there is no concept of a session with a preselected path for all traffic. There is no single correct answer to how a router decides between routes. Traffic could be routed by a "clockwise" algorithm, or the routers could alternate, sending one message to one place and the next to another. More sophisticated routing measures traffic patterns and sends data through the least busy link. If one phone line in such a network breaks down, traffic can still reach its destination through a roundabout path. This provides continued service, though with degraded performance. This kind of recovery is the primary design feature of IP. The loss of the line is immediately detected by the routers, but somehow this information must be sent to the other nodes.
Each network adopts some router protocol which periodically updates the routing tables throughout the network with information about changes in route status. If the size of the network grows, the complexity of the routing updates increases, as does the cost of transmitting them. Building a single network that covers the entire US would be unreasonably complicated. Fortunately, the Internet is designed as a network of networks: loops and redundancy are built into each regional carrier, and the regional network handles its own problems and reroutes messages internally. Its router protocol updates the tables in its own routers, but no routing updates need to propagate beyond the regional carrier.

TCP treats the data as a stream of bytes and logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, "This packet starts with byte 379642 and contains 200 bytes of data." The receiver can detect missing or incorrectly sequenced packets. TCP acknowledges data that has been received and retransmits data that has been lost. The TCP design means that error recovery is done end-to-end between the client and server machines. There is no formal standard for tracking problems in the middle of the network, though each network has adopted some ad hoc tools.

Each large company or university that subscribes to the Internet must have an intermediate level of network organization and expertise. Half a dozen routers might be configured to connect several dozen departmental LANs in several buildings, and all traffic outside the organization would typically be routed through a single connection to a regional network provider. However, the end user can install TCP/IP on a personal computer without any knowledge of either the corporate or the regional network. Three pieces of information are required:
1. The IP address assigned to the personal computer.
2. The part of the IP address (the subnet mask) that distinguishes other machines on the same LAN (messages can be sent to them directly) from machines in other departments or elsewhere in the world (which are sent to a router machine).
3. The IP address of the router machine that connects the LAN to the rest of the world.
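The role of the subnet mask in item 2 can be shown with a small C# sketch: a destination whose masked address matches the masked local address is on the same LAN and is delivered directly, while anything else is handed to the router from item 3. The addresses and mask below are examples only, not the configuration used in this project.

using System;
using System.Net;

class SubnetCheckDemo
{
    // A destination whose masked address equals the masked local address is on the
    // same LAN and can be reached directly; anything else goes to the default router.
    static bool SameSubnet(IPAddress local, IPAddress remote, IPAddress mask)
    {
        byte[] a = local.GetAddressBytes();
        byte[] b = remote.GetAddressBytes();
        byte[] m = mask.GetAddressBytes();
        for (int i = 0; i < m.Length; i++)
            if ((a[i] & m[i]) != (b[i] & m[i]))
                return false;
        return true;
    }

    static void Main()
    {
        var local = IPAddress.Parse("192.168.1.20");
        var mask  = IPAddress.Parse("255.255.255.0");
        Console.WriteLine(SameSubnet(local, IPAddress.Parse("192.168.1.77"), mask));  // True: deliver directly
        Console.WriteLine(SameSubnet(local, IPAddress.Parse("172.16.0.5"), mask));    // False: hand to the router
    }
}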
  • 32. 2.2.6 Live streaming Live streaming means taking the video and broadcasting it live over the internet/network. The process involves a camera for the video, an encoder to digitize the content, a video publisher where the streams are made available to potential end- users and a content delivery network to distribute and deliver the content. The media can then be viewed by end-users live [17]. There are some primary technical issues related to streaming. They are:  The system must have enough CPU power and bus bandwidth to support the required data rates  The software should create low-latency interrupt paths in the operating system (OS) to prevent buffer under run. However, computer networks were still limited, and media was usually delivered over non-streaming channels, such as by downloading a digital file from a remote server and then saving it to a local drive on the end user's computer or storing it as a digital file and playing it back from CD-ROMs. During the late 1990s and early 2000s, Internet users saw greater network bandwidth, increased access to networks, use of standard protocols and formats, such as TCP/IP, HTTP. These advances in computer networking combined with powerful home computers and modern operating systems made streaming media practical and affordable for ordinary consumers. But multimedia content has a large volume, so media storage and transmission costs are still significant [17]. 2.2.7 Professional Video Surveillance Software Eyeline video surveillance software Designed specifically for business, Eyeline is perfect for video monitoring of offices and buildings, or to log in-store cameras. Used in conjunction with a security alarm, Eyeline can capture, analyse and play back security footage to determine if a 23
  • 33. security call out is warranted. Eyeline is also a simple and effective video security system for your home [18]. Figure 2.4: Eye line video software [18] Camera Properties: Figure 2.5: Find and Play Recordings Window [18] 24
  • 34. Figure 2.6: Video Recordings [18] Features of this software [18]  Records up to 100+ camera sources simultaneously  Motion detection recording saves space by only recording when something is happening  Email or SMS alerts available for motion detection  Automatic time stamping of frames lets you use footage as evidence if required  Web control panel lets you access and view recordings remotely  'Save to' feature lets you save footage to a network folder  Back up recordings via FTP  Video can be monitored live on the screen as it records  Cameras can be setup in a flash by just a click of a button  Find and play recordings ordered by camera, date, duration and motion detected  Integrated with Express burn to record video files to DVD  Intelligent, easy to use and extremely reliable for day-to-day operation 25
  • 35. Streaming video Using VLC The VLC media player is an amazing piece of software. In its most basic form it is a lightweight media player that can play almost any audio or video format you throw at it. VLC is also multiplatform in the most extreme sense of the word; it can run on Windows, OSX, Linux and PocketPC / WinCE handhelds along with other systems. VLC works great as a streaming server and video transcoder too [19] [20]. Figure 2.7: VideoLan Streaming Solution [21] The network needed to setup the VideoLAN solution can be as small as one ethernet 10/100Mb switch or hub, and as big as the whole Internet. Moreover, the VideoLAN streaming solution has full IPv6 support [21]. Examples of needed bandwidth are:  0.5 to 4 Mbit/s for a MPEG-4 stream,  3 to 4 Mbit/s for an MPEG-2 stream read from a satellite card, a digital television card or a MPEG-2 encoding card,  6 to 9 Mbit/s for a DVD. VLC is able to announce its streams using the SAP/SDP standard, or using Zeroconf (also known as Bonjour) [21]. 26
• 36. 2.2.8 Encryption
Encryption is a process that takes information and transcribes it into a different form that cannot be read by anyone who does not have the encryption code [22].
Manual Encryption
Manual encryption is a type that involves the use of encryption software. These are computer programs that encrypt various bits of information digitally. Manual encryption relies completely on the user's participation: the user chooses the files to be encrypted, and then chooses an encryption type from a list that the security system provides.
Transparent Encryption
Transparent encryption is another type of computer software encryption. It can be downloaded onto a computer to encrypt everything automatically. This is one of the most secure types of encryption available because it does not leave out anything that might be forgotten when using manual encryption.
Symmetric Encryption
Not all encryption is done via a computer software program; anyone can easily encrypt information by hand. One of the simplest ways to do this is through symmetric encryption, where the same key is used to encrypt and to decrypt: each letter or number corresponds to another letter or number in the encryption code (a toy example is sketched at the end of this section).
Asymmetric Encryption
Asymmetric encryption is a secure and easy way to encrypt data that you will be receiving, and it is generally done electronically. A public key is given out for anyone to see; others can then encrypt information using that key and send it to you. This is often done when writing emails. However, to decipher the encrypted message there is another key, a private one, that only one person holds. This means that while anyone can encrypt the data with the public key, it can only be read again by whoever has the private key. 27
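To make the symmetric case concrete: the sender and the receiver share a single key, and the same operation both encrypts and decrypts. The fragment below is a deliberately simple illustration (an XOR with a repeating shared key); it is far too weak for protecting real surveillance traffic and is shown only to demonstrate the principle.

    #include <iostream>
    #include <string>

    // Toy symmetric cipher: XOR every byte with a repeating shared key.
    // Applying the same function twice with the same key restores the original data.
    std::string xor_cipher(const std::string& data, const std::string& key) {
        std::string out = data;
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] ^= key[i % key.size()];
        return out;
    }

    int main() {
        std::string key    = "shared-secret";
        std::string cipher = xor_cipher("camera 2: motion detected", key);
        std::string plain  = xor_cipher(cipher, key);   // the same key decrypts
        std::cout << plain << std::endl;
        return 0;
    }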
• 37. 2.2.9 Video Encryption
Video encryption, or video scrambling, is a powerful technique for preventing unwanted interception and viewing of transmitted video, for example from a law-enforcement video surveillance system being relayed back to a central viewing centre [23]. Video encryption is the easy part; it is the unscrambling that is hard. There are several techniques of video encryption. However, the human eye is very good at spotting distortions in pictures caused by poor video decoding or a poor choice of video encryption hardware. It is therefore important to choose the right video encryption hardware, otherwise your video transmissions may be insecure or your decoded video unviewable. Some popular techniques for video encryption are outlined below [22] [23]:
• Line Inversion Video Encryption:
Encryption Method: Whole or part video scan lines are inverted.
Advantages: Simple, cheap video encryption.
Disadvantages: Poor video decrypting quality, low obscurity, low security.
• Sync Suppression Video Encryption:
Encryption Method: Hide/remove the horizontal/vertical line syncs.
Advantages: Provides a low-cost solution to scrambling and good quality video decoding.
Disadvantages: This method is incompatible with some distribution equipment. Obscurity (i.e. how easy it is to visually decipher the image) is dependent on video content.
• Line Shuffle Video Encryption:
Encryption Method: Each video line is re-ordered on the screen.
Advantages: Provides a compatible video signal, a reasonable amount of obscurity and good decode quality.
Disadvantages: Requires a lot of digital storage space. There are potential issues with video stability. Less secure than the cut and rotate encryption method.
• Cut & Rotate Video Encryption:
Encryption Method: Each scan line is cut into pieces and re-assembled in a different order. 28
• 38. Advantages: Provides a compatible video signal, gives an excellent amount of obscurity, as well as good decode quality and stability.
Disadvantages: Can have complex timing control and requires specialized scrambling equipment.
The cut and rotate video encryption method is probably the best way of achieving reliable, good quality video encryption (an illustrative sketch of the idea is given at the end of this section).
Factors in video encryption implementation
Finally, and most obviously, each user must have a unique encryption key so that other users of the system cannot view the transmitted video, either accidentally or deliberately, without the key owner's knowledge. The total number of possible user keys must be such that it is highly unlikely for someone to guess the correct key [24]. The key points for a good video encryption method are:
• Everyone has a unique encryption key or code.
• The video encryption system should not attempt to decode unencrypted video transmissions.
• The encrypted signal should be positively identified by the decoder. The decoder should recognize the encrypted signal and only attempt to decode when it is fully validated.
• On-screen status display and identification.
• Automatic configuration to any video standard. 29
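To give a rough idea of how the cut and rotate method works, the sketch below scrambles one scan line, held here as an array of pixel bytes, by cutting it at a key-dependent position and rotating the two pieces; the receiver undoes the rotation with the same key. Real scramblers operate on the analogue line timing and use far stronger key schedules, so this is only an illustration of the principle.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Key-dependent cut position for a given scan line.
    static std::size_t cut_point(std::size_t len, unsigned key, unsigned line_no) {
        return (key + 37u * line_no) % len;   // simple per-line cut position
    }

    // Cut one scan line at the key-dependent position and rotate the two pieces.
    void scramble_line(std::vector<uint8_t>& line, unsigned key, unsigned line_no) {
        if (line.empty()) return;
        std::rotate(line.begin(), line.begin() + cut_point(line.size(), key, line_no), line.end());
    }

    // The receiver rotates back by the same amount to restore the original line.
    void descramble_line(std::vector<uint8_t>& line, unsigned key, unsigned line_no) {
        if (line.empty()) return;
        std::rotate(line.begin(), line.end() - cut_point(line.size(), key, line_no), line.end());
    }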
• 39. Chapter 3
3 System Concept, Design and Implementation
In this chapter the requirement analysis for building the video surveillance system is described, the conceptual system block diagram is included, and detailed selection criteria for each component are discussed.
3.1 Requirement analysis
The video surveillance system will detect the movement of any moving object. After detecting movement it will turn the camera towards the moving object and start to record video footage of that object. When the recording is finished it will transmit the video footage through a wireless Ad-Hoc network from the client (the surveillance system) to a server. The key requirements of the system are:
1. The system must consist of a sensor which will detect the movement of any object.
2. The system must have a video camera to record footage. The camera should be capable of recording high quality video footage in order to ensure that the recorded object can be recognised from the recorded video footage.
3. The system must consist of a device which can turn the video camera in any direction within a 180 degree range.
4. The system must consist of a storage device that will store video footage.
5. The system must consist of a device which has wireless transmission capability to transmit the recorded video footage.
6. There should be a device which will control the whole system.
7. There must be software which will record video footage and store it on a storage device.
8. There must be software which will transmit recorded video footage via the wireless Ad-Hoc network.
9. The system must consist of a power supply unit that will power the whole system. 30
• 40. After finishing the requirement analysis a block diagram of the system is designed. Figure 3.1 shows the block diagram of the system.
Figure 3.1: Block diagram of video surveillance system
3.2 Selection of Hardware
According to the requirement analysis, each hardware component will be selected so that it meets the system requirements. In the following sections the selection criteria for each component are described.
3.2.1 Selection of sensor
The sensor should be able to detect movement during day or night. The range of the sensor must be as large as possible, as this ensures a good coverage area for video surveillance. The sensor should consume little power, which will allow the standalone video surveillance system to run for longer from a single power source. 31
• 41. 3.2.2 Selection of Video Camera
The video camera of the system must have the following features:
1. The resolution of the camera's video footage should be high. This will ensure easy recognition of the object from the recorded video footage.
2. The weight of the camera should be low, so that it can be coupled with any motor.
3. The video camera must have easy connectivity features so that it can be easily connected to the relevant device for capturing video footage.
4. It should consume a small amount of power.
3.2.3 Selection of Motor
According to the requirement analysis the system should be able to record video footage from any direction within a 180 degree range. Thus the system requires a motor whose shaft can be positioned in any desired direction between 0 and 180 degrees. The camera will be mounted on the motor shaft and the motor will turn the video camera precisely towards the moving object. The motor must have enough torque to turn the camera, and it should consume a small amount of power.
3.2.4 Selection of PC
For capturing and transmitting video footage a computer will be used. The specification of the PC should be high enough that it can run the software used to record and transmit video footage. The PC must have a connectivity port to connect the video camera. The PC should be small enough to be used in the standalone system and should consume a small amount of power. It must include a wireless network card to establish wireless communication with other devices. The PC should boot up quickly, be robust to any kind of power failure and be able to start programs without user intervention. 32
• 42. 3.2.5 Selection of Microcontroller
A microcontroller will control the whole system. The microcontroller will take the signal from the sensor and turn the motor so that the camera faces the moving object, and it will send a signal to the PC to record video footage when movement is detected. Thus the microcontroller should have the following features:
1. It should provide serial or USB communication so that it can communicate with the PC to send the trigger signal.
2. It must have input ports so that it can take an input signal from the sensor when the sensor is triggered by any movement.
3. It should have output ports so that it can control the motor to turn in any direction.
4. The clock frequency should be high for faster operation. A higher processing speed ensures faster execution of the program, so the controller can control the whole system more quickly.
5. It should consume a small amount of power.
3.2.6 Selection of Power Source
The power source should be able to run the system for a long time and should be capable of delivering the power required by the system. Thus a power source with a high Ampere-Hour rating is required.
3.3 Software design
As a general design approach, commercially available tools and products are used wherever possible. When designing the software sections this approach is of vital importance. The selected language must be suitable for managing all types of standard input/output and should have the flexibility to target the required working platform. Low level programming languages like C or Pascal should be used for 33
• 43. programming microcontrollers and their interfaces. High level languages like Microsoft C++, Java or Microsoft C# can be used for programming the various algorithms and the control strategy. In the following sections there is some discussion of common programming languages and their available libraries.
3.3.1 Video Capturing Software Requirements
The video capturing software requirements are as follows:
• The software must be able to read a trigger signal from the microcontroller through the serial port to start video capturing.
• The software must capture and save video footage from the webcam after getting a trigger signal.
• The software must store the video footage with the specific capture date and time.
• The software must allow the user to select various capture durations.
• The software must have an interactive and user friendly Graphical User Interface (GUI) so that non-technical users can operate it.
The first requirement deals with serial port communication between the microcontroller and the PC. This communication is based on transmitting and receiving a string from the microcontroller so that capturing starts right away. The second requirement deals with capturing from the connected external webcam. The third requirement deals with storing video footage with the capture date and time: the software will read the time and date from the operating system and use them as the file name of the captured footage (a brief sketch of this is given below). The fourth and fifth requirements deal with building a simple and user friendly GUI which allows the user to control the surveillance system and select the capture duration. 34
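The third requirement is straightforward to satisfy by asking the operating system for the current date and time and building the file name from it. The fragment below sketches that idea; the project's capture software is actually written in C# on the .NET framework (section 3.4.2), so the C++ shown here, and the file name pattern, are illustrative only.

    #include <ctime>
    #include <string>

    // Build a capture file name such as "capture_2010-10-21_14-02-33.avi"
    // from the operating system clock, as required by the third requirement.
    std::string timestamped_filename() {
        char buf[64];
        std::time_t now = std::time(nullptr);
        std::strftime(buf, sizeof(buf), "capture_%Y-%m-%d_%H-%M-%S.avi", std::localtime(&now));
        return std::string(buf);
    }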
• 44. 3.4 Component selection and Features
Based on the criteria described previously, the selected components are described in this section. The selection is not necessarily based on the best available components; rather it is driven by cost, availability and meeting the system requirements.
3.4.1 Hardware Component Selection
3.4.1.1 PIR Sensor
For movement detection a passive infrared (PIR) sensor is selected. Any object that generates heat also emits infrared radiation. Objects such as human bodies and animals emit their strongest infrared radiation at a wavelength of about 9.4 μm [25]. Infrared radiation cannot be seen, since its wavelength is longer than that of visible light, but it can be detected by a passive infrared sensor. The PIR sensor converts incident IR flux into an electrical output in two steps: an absorbing layer inside the sensor transforms the change in radiation flux into a change in temperature, and the pyroelectric element inside the sensor performs the thermal to electrical conversion [26]. Thus when an object moves in the Field of View (FoV) of the sensor, the sensor generates an output electrical signal in response to that movement. This output signal is then processed by an amplifier and a comparator and used in different circuits for movement detection. In figure 3.2 the general system architecture of a PIR sensor is shown.
Figure 3.2: General system architecture of PIR sensor [27]
The PIR sensor selected for this system is the Parallax PIR sensor. This sensor has the following features:
• Detection range up to 20 ft 35
• 45.
• Single bit output
• Small size makes it easy to conceal
• Compatible with all Parallax microcontrollers
• Low current draw, less than 100 uA
• Power requirements: 3.3 to 5 VDC
• Operating temp range: +32 to +158 °F (0 to +70 °C)
In figure 3.3 the selected Parallax PIR sensor is shown.
Figure 3.3: Parallax-PIR sensor [28]
There are two modes of operation of this sensor: retrigger and normal. These two modes are selected by the jumper pin positions H and L. In figure 3.4 the jumper pin positions are shown.
Figure 3.4: Jumper Pin (H and L) Position [29]
The mode of operation for each of the two settings is given in table 3.1. 36
• 46. Table 3.1: Mode of operation of PIR sensor [29]
Position | Mode      | Description
H        | Retrigger | Output remains HIGH when the sensor is retriggered repeatedly. Output is LOW when idle (not triggered).
L        | Normal    | Output goes HIGH then LOW when triggered. Continuous motion results in repeated HIGH/LOW pulses. Output is LOW when idle.
The output waveforms for the two modes of operation are shown in figures 3.5 and 3.6.
Figure 3.5: Waveform of PIR sensor output for Retrigger mode of operation
Figure 3.6: Waveform of PIR sensor output for Normal mode of operation 37
• 47. The PIR sensor output remains high for a minimum of two seconds for a single movement detection. "The PIR Sensor requires a 'warm-up' time in order to function properly. This is due to the settling time involved in 'learning' its environment. This could be anywhere from 10-60 seconds. During this time there should be as little motion as possible in the sensors field of view [29]". Since the PIR sensor draws less than 100 uA of current, a 24Ah battery can operate it for 240,000 hours.
3.4.1.2 Mbed Rapid Prototyping Board
For controlling the whole system the mbed NXP LPC1768 prototyping board is used. This board is based on the LPC1768 ARM Cortex-M3 microcontroller and has the following features:
• Convenient form-factor: 40-pin DIP, 0.1-inch pitch.
• Drag-and-drop programming, with the board represented as a USB drive.
• ARM Cortex-M3 hardware at 100 MHz with 64 KB of SRAM, 512 KB of Flash.
• Ethernet, USB OTG (USB On-The-Go).
• SPI (Serial Peripheral Interface Bus), I2C (Inter-Integrated Circuit), UART (Universal Asynchronous Receiver/Transmitter).
• PWM (Pulse Width Modulation), ADC (Analog-to-Digital Converter), DAC (Digital-to-Analog Converter).
• Web-based C/C++ programming environment.
There are 26 digital input/output pins (pin5-pin30) to take digital input or send digital output. When these pins are set as digital inputs, any voltage above 2.0V is logic 1 and any voltage below 0.8V is logic 0. When these pins are set as digital outputs, the pin is at 0V for logic 0 and at 3.3V for logic 1. These pins can source or sink a maximum current of 40mA. Thus in this system they can be used as input pins for the PIR sensors [30]. There are six PWM output pins (pin21-pin26) on this prototyping board. All PWM outputs share the same period but can be set to give different pulsewidths, so changing the period of one output will change the period of the others [31]. There are 38
• 48. built-in functions for PWM signal generation, by which the period and pulsewidth can be set precisely in seconds, milliseconds or microseconds. Thus the servomotor used in this video surveillance system can be controlled precisely by this microcontroller prototyping board. The microcontroller can establish serial RS-232 communication with a PC. Pins p9/p10, p13/p14, p28/p27 and USBTX/USBRX can be used for serial RS-232 communication. One of the serial connections goes through the USB port, which allows easy communication with the host PC; this USB port is used as a virtual RS-232 serial port. The baud rate for serial communication ranges from a few hundred bits per second to megabits per second, which allows high speed data transfer between the microcontroller and the host PC. The data length is 7 or 8 bits. The virtual RS-232 serial port enables the microcontroller to be used with any computer, especially the latest PCs which do not have a physical RS-232 serial port. Thus using this microcontroller in the video surveillance system makes the system connectable to any old or new PC. The microcontroller can be powered over USB or from 4.5V - 9.0V applied to the VIN pin, and it draws less than 200mA (100mA with Ethernet disabled). Thus it is possible to operate it for 120 hours from a 24Ah rated battery. There are many built-in library functions that make the microcontroller easy to use in any application. In figure 3.7 the mbed NXP LPC1768 prototyping board is shown.
Figure 3.7: mbed Microcontroller [32] 39
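A minimal sketch of how these mbed peripherals could be tied together for this project is given below: one DigitalIn per PIR sensor, one PwmOut for the servo control wire, and the USB virtual serial port for the trigger message to the PC. The pin assignments follow the ones used later for the input/output test (pin16-pin18 as inputs, a pin21-pin26 PWM pin for the servo), but the 1.0-2.0 ms pulse-width range, the 9600 baud rate, the "TRIGGER" string and the three fixed camera angles are assumptions made only for illustration; the project's actual program is listed in the appendices.

    #include "mbed.h"

    DigitalIn pir_left(p16), pir_centre(p17), pir_right(p18);  // PIR sensor outputs
    PwmOut    servo(p21);                                      // servo control wire
    Serial    pc(USBTX, USBRX);                                // virtual RS-232 over USB

    // Convert a target angle (0-180 degrees) into a pulse width, assuming the
    // common 1.0 ms - 2.0 ms range with 1.5 ms as the 90 degree neutral point.
    void point_camera(float angle) {
        float pulse_us = 1000.0f + (angle / 180.0f) * 1000.0f;
        servo.pulsewidth_us((int)pulse_us);
    }

    int main() {
        pc.baud(9600);
        servo.period_ms(20);          // the servo expects a pulse every 20 ms
        point_camera(90.0f);          // start at the neutral position

        while (true) {
            if (pir_left)        { point_camera(0.0f);   pc.printf("TRIGGER\r\n"); }
            else if (pir_centre) { point_camera(90.0f);  pc.printf("TRIGGER\r\n"); }
            else if (pir_right)  { point_camera(180.0f); pc.printf("TRIGGER\r\n"); }
            wait_ms(100);             // simple polling loop
        }
    }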
• 49. 3.4.1.3 Servo Motor
A servo is a small device that has an output shaft. The shaft can be positioned to specific angular positions by sending the servo a coded signal. As long as the coded signal is present on the input line, the servo will maintain the angular position of the shaft; as the coded signal changes, the angular position of the shaft changes. Servos are extremely useful in robotics: the motor is small, has built-in control circuitry and is extremely powerful for its size. A standard servo such as the HITEC HS-475HB has a torque of 0.54 N.m @ 6.0V, which is pretty strong for its size. There are 3 wires that connect to the outside world: the red wire is for power (+5 volts), the black wire is ground, and the yellow wire is the control wire. In figure 3.8 a servo motor is shown.
Figure 3.8: Servo motor [33]
The servo motor has control circuits and a potentiometer connected to the output shaft. This potentiometer allows the control circuitry to monitor the current angle of the servo motor. If the shaft is at the correct angle the motor shuts off; if the circuit finds that the angle is not correct, it turns the motor in the correct direction until the angle is correct. The output shaft of a typical servo is capable of travelling 0 to 180 degrees, although there are servos capable of travelling 0 to 210 degrees. A normal servo is used to control angular motion between 0 and 180 degrees and is mechanically not capable of turning any farther, due to a mechanical stop built onto the main output gear. 40
• 50. Figure 3.9: Internal circuit of a servo motor [34]
The control wire is used to communicate the angle. The angle is determined by the duration of a pulse applied to the control wire; this is often called pulse coded modulation, though it is essentially a form of pulse width modulation. The servo expects to see a pulse every 20 milliseconds, and the length of the pulse determines how far the motor turns. A 1.5 millisecond pulse, for example, will make the motor turn to the 90 degree position (often called the neutral position). If the pulse is shorter than 1.5 ms the motor turns the shaft closer to 0 degrees; if the pulse is longer than 1.5 ms the shaft turns closer to 180 degrees.
Figure 3.10: Some example pulses and the corresponding rotation of a servo shaft
According to the requirement analysis the HITEC HS-475HB servo motor is used in the project. The technical specification of the servo is given below. 41
• 51. Technical specification:
• Operating Voltage: 4.8V/6.0V
• Speed @ 4.8V: 0.23 sec/60°
• Speed @ 6.0V: 0.18 sec/60°
• Torque @ 4.8V: 0.43 N.m
• Torque @ 6.0V: 0.54 N.m
• Motor Type: 3 Pole
• Bearing Type: Top Ball Bearing
• Weight: 1.41oz (40.0g)
• Dimensions: 38.8 x 19.8 x 36mm
According to the requirement analysis the system should be able to record video footage from any direction. The HITEC HS-475HB servo motor shaft can cover any angle from 0 to 180 degrees, which is primarily enough for this project with its three PIR sensors; to cover 0 to 360 degrees, two similar servo motors could be used. For positioning the servo shaft at a specific angle, it is enough to apply a particular pulse width modulated signal to the control wire. From the aspect of mounting the camera the HITEC HS-475HB servo motor is ideal: the motor shaft is well made and the camera mounts on it comfortably. The torque of the HITEC HS-475HB servo motor is 76.37 oz-in (5.5 kg-cm) @ 6.0V, which is more than enough to rotate the camera. The HITEC HS-475HB usually consumes about 200mA during rotation with an ideal load, which is comparatively low and cost effective, although current consumption may increase with load.
3.4.1.4 FitPC2
For video capturing and transmission a fitPC2 running Windows 7 is selected. The specification of the fitPC2 is as below:
• Intel Atom Z530 1.6GHz
• Memory 1GB DDR2-533MHz
• Internal bay for 2.5″ SATA HDD 160 GB 42
• 52.
• Intel GMA500 graphics acceleration
• Wireless LAN - 802.11g WLAN
• 6 USB 2.0 High Speed ports
• Windows 7 Professional
• Case - 100% aluminium die cast body
• Dimensions - 101 x 115 x 27 mm (4″ x 4.5″ x 1.05″)
• Weight - 370 grams / 13 ounces, including hard disk
• Operating Temperature - 0 – 45 deg C with hard disk; 0 – 70 deg C with SSD
• Power - 12V single supply, 8-15V tolerant
• Power Consumption - 6W at low CPU load, <7W at 1080p H.264 playback, 8W at full CPU load, 0.5W at standby
The small size, low weight and low power consumption of the fitPC2 make the video surveillance system smaller, lighter and able to operate for long hours. Moreover, a USB 2.0 connection can be used to communicate with the microcontroller via the virtual serial port to receive the trigger signal. The built-in WLAN 802.11g card allows the video surveillance system to transmit recorded video footage over the wireless network, and the wide temperature range allows the PC to be used in different environmental conditions. There is no cooling fan inside it, so it works quite noiselessly in any place. In figure 3.11 the fitPC2 is shown.
Figure 3.11: fitPC2 [35] 43
• 53. 3.4.1.5 Logitech Webcam C120
The Logitech webcam C120 is selected for video recording. This webcam has the following specifications:
• Video capture: up to 800 x 600 pixels
• Up to 30 frames per second video
• Hi-Speed USB 2.0 communication
• Resolution up to 1.3 megapixel
• Weight 100 gm
800 x 600 pixels at 30 frames per second will give high quality video footage for the video surveillance system, and the webcam can transfer data quickly over the USB 2.0 port. The light weight of the camera makes it easy to mount on the servo motor. The minimum system requirement of a PC for this webcam is a 1GHz CPU and 256 MB of RAM with the Windows XP operating system, so the webcam will perform well with the fitPC2. In figure 3.12 the selected Logitech C120 webcam is shown.
Figure 3.12: Logitech C120 webcam [36]
3.4.1.6 Power Supply
For the power supply a 12V, 24Ah battery is chosen. The fitPC2 will be powered directly from the 12V supply. A power supply circuit will be used to reduce the voltage level to power the microcontroller, the PIR sensors and the servomotor. 44
• 54. 3.4.2 Software Selection
The Microsoft .Net framework has a huge collection of libraries for secure communication and solutions for most common programming problems. Its features include multi-language interoperability, a virtual machine and the common language runtime (CLR). The common language runtime (CLR) is a major component of the .Net Framework. The user does not need to care about execution on a specific system: the CLR deals with all CPU dependent operations while executing a program. A program written in any supported language is compiled into byte code called the Common Intermediate Language (CIL), and at runtime this is translated again for the specific system platform. Figure 3.13 shows the operation of the CLR. The CLR therefore helps programmers to write programs with less effort, without having to deal with memory management, security, garbage collection, exception handling and thread management themselves.
Figure 3.13: Operation of CLR in .Net Framework
It also allows developers to apply common skills across a variety of devices, application types and programming tasks, and it can integrate with other tools and technologies to build the right solution with less work.
3.4.2.1 .Net Framework 4:
The .NET Framework is Microsoft's comprehensive and consistent programming model for building applications that have visually stunning user experiences, seamless and secure communication, and the ability to model a range of business processes. The .NET Framework 4 works side by side with older Framework versions, and applications that are based on earlier versions of the Framework will continue to run on the version they target by default. 45
• 55. System Hardware Requirements:
Recommended Minimum: Pentium 1 GHz or higher with 512 MB RAM or more
Minimum disk space: x86 – 850 MB; x64 – 2 GB
3.4.2.2 Microsoft C#
Microsoft C# is an object oriented programming language designed for Windows graphical programming. Object orientation is a structured method for solving problems, and mental models can easily be transferred into programs using object oriented programming. Another attractive benefit of an object oriented language is its ease of code reusability and maintenance. Against all these benefits, object oriented programs tend to be larger and can be more time consuming. An object oriented language defines objects with their own properties, and classes as sets of objects with common behaviour.
3.4.2.3 Direct Show API
Microsoft DirectShow is an architecture for streaming media on the Microsoft Windows platform. DirectShow provides for high-quality capture and playback of multimedia streams. It supports a wide variety of formats, including Advanced Systems Format (ASF), Motion Picture Experts Group (MPEG), Audio-Video Interleaved (AVI), MPEG Audio Layer-3 (MP3), and WAV sound files. It supports capture from digital and analogue devices based on the Windows Driver Model (WDM) or Video for Windows. It automatically detects and uses video and audio acceleration hardware when available, but also supports systems without acceleration hardware. DirectShow is based on the Component Object Model (COM) and is designed for C++; Microsoft does not provide a managed API for DirectShow. DirectShow simplifies media playback, format conversion, and capture tasks. At the same time, it provides access to the underlying stream control architecture for 46
• 56. applications that require custom solutions. You can also create your own DirectShow components to support new formats or custom effects.
3.4.2.4 Video Capture Devices
Most new video capture devices use Windows Driver Model (WDM) drivers. In the WDM architecture, Microsoft supplies a set of hardware-independent drivers, called class drivers, and the hardware vendor provides hardware-specific minidrivers. A minidriver implements any functions that are specific to the device; for most functions, the minidriver calls the Microsoft class driver. In a DirectShow filter graph, any WDM capture device appears as the WDM Video Capture filter. The WDM Video Capture filter configures itself based on the characteristics of the driver and appears under a name provided by the driver; you will not see a filter called "WDM Video Capture Filter" anywhere in the graph. Some older capture devices still use Video for Windows (VFW) drivers. Although these drivers are now obsolete, they are supported in DirectShow through the VFW Capture filter.
3.5 Cost Analysis
After selecting the hardware and software, the system budget is estimated in table 3.2. The list includes only the selected components needed to complete the project; other components like resistors, capacitors and op-amps should not drastically raise the budget, and these components are also available in the laboratory.
Table 3.2: Price list of components
Component            | Price (£)
mbed microcontroller | 45
Camera               | 8
PIR Sensor           | 21
Servo Motor          | 7
fitPC2               | 270
The total budget of the project will not exceed £500. 47
• 57. 3.6 System Implementation
Before constructing the whole system, each subsystem is first constructed and tested. After constructing and testing all the subsystems, the whole system is built by integrating them. The following sections describe how the prototype of each subsystem is designed and tested.
3.6.1 Hardware Implementation
In this video surveillance system the microcontroller performs three tasks:
1. It takes input from the PIR sensors.
2. It controls the servomotor.
3. It sends a trigger signal to the PC via the virtual serial port.
Thus for each purpose a prototype subsystem is designed and tested. The first subsystem is for testing the microcontroller input/output ports, the second is for controlling the servo motor, and the third is for establishing communication with the PC. How each subsystem is constructed and tested is described below.
The microcontroller will take input from the PIR sensors, so its input ports are tested using three LEDs, three switches and a power source. From the microcontroller's digital input/output pins, three input pins are selected, and for each selected input pin a corresponding output pin is selected. The input pins are pin16, pin17 and pin18, and their corresponding output pins are pin26, pin25 and pin24 respectively. Three switches are connected to the three input pins and a 5V source is connected to the switches. Thus when a switch is on, the corresponding microcontroller input pin receives a 5V signal, which is a logic 1 digital input. Three LEDs are connected to the three selected output pins, each via a 220 ohm resistor. The LEDs used require 5 volts and 20 mA of current to turn on; a current of more than 20mA will burn out the LED, so in order to limit the current a 220 ohm resistor is connected in series with each LED. Now when one switch is turned on the corresponding LED will also turn on, which confirms that the microcontroller is handling the digital inputs properly. The schematic diagram of the circuit is given in figure 3.14. 48
• 58. Figure 3.14: Schematic diagram of the input/output test circuit.
After constructing the circuit, the program in appendix 10.1 is written in the mbed online compiler, compiled and downloaded onto the mbed microcontroller. The explanation of this program is given in the appendix (a minimal sketch of the same behaviour is also shown below). After downloading, the program is executed and runs successfully on the microcontroller: each LED lights up when its corresponding switch is turned on. After successful execution of the input/output test program, three PIR sensors are connected in place of the three switches. All PIR sensors are set to operation mode 1 (the retrigger mode), so that for continuous movement the output of the PIR sensor remains at logic high, and when there is no movement the output is at logic low; each sensor therefore behaves like a switch being turned on or off. After completing the circuit, the input/output program is run again on the microcontroller and the test is completed successfully: when any movement is detected by the PIR sensors, their corresponding LEDs turn on. The schematic diagram of the subsystem with the PIR sensors connected to the input ports is given in figure 3.15. 49
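The full listing and its explanation are in appendix 10.1; a minimal sketch of the same test behaviour (each output simply mirrors its input, so every LED follows its switch or PIR sensor) might look like the following, with the pin numbers matching the description above.

    #include "mbed.h"

    // Input/output test: each LED mirrors the state of its switch (or, later, its
    // PIR sensor) so that correct operation of the digital ports can be verified.
    DigitalIn  in1(p16), in2(p17), in3(p18);
    DigitalOut led1(p26), led2(p25), led3(p24);

    int main() {
        while (true) {
            led1 = in1;      // LED on while the corresponding input is high
            led2 = in2;
            led3 = in3;
            wait_ms(10);
        }
    }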