

International Journal of Trend in Scientific 
Research and Development (IJTSRD) 
International Open Access Journal 



ISSN No: 2456-6470 | www.ijtsrd.com | Volume 2 | Issue 1




Cross Pose Facial Recognition Method for
Tracking any Person's Location: An Approach

Sanjay D. Sawaitul 

Department of Computer Technology, K.I.T.S., Ramtek, Nagpur, India 


ABSTRACT 

In today's world there are a number of existing methods
for facial recognition, most of which are based on
frontal-view face data; only a few address non-frontal
views. Most face recognition algorithms use a "feature
space" approach, in which different feature vectors are
extracted from the face and the distances between them
are compared to determine matches. This paper proposes
how any person can be located in a campus or a city
using a cross pose face recognition method. The paper
focuses on three parts: 1) generation of multi-view
images, 2) comparison of images, and 3) showing the
actual location of a person.

Keywords: Feature Space, Face Recognition, Cross 
Pose method. 

I. INTRODUCTION 

Image processing plays an important role in many fields
of science and technology. Face recognition is one of
the biggest challenges for today's scientists in real
applications. A facial recognition system identifies and
verifies a person automatically from a digital image or a
video frame taken from a video source. One way to do
this is by comparing selected facial features from the
image with a facial database. Face recognition is one of
the most pertinent applications of image analysis, and it
is a true challenge for the developer to build an
automated system that can recognize faces the way a
human being does. "Face geometry" was the first way to
recognize people. There are a number of algorithms that
can be used to identify faces. Recognition algorithms
can be divided into two main approaches: geometric and
photometric. The geometric approach keeps track of
distinguishing features, while the photometric approach
arranges the data into statistical form: it converts an
image into values and then compares those values with
predefined templates to eliminate variations [1].

In recent years, biometric-based techniques have been
widely used to identify individuals. Face recognition has
a number of practical applications in the areas of
biometrics, smart cards, law enforcement, information
security, access control and surveillance systems [2].
Face recognition can also be used for security purposes:
instead of remembering a password or PIN, a user can
present an image of his or her face, which is identified
by the system before access is granted. A good face
recognition algorithm, along with proper preprocessing
of the image, can remove noise and compensate for
slight variations in scale and orientation. An automated
human facial expression recognition system can benefit
multiple research fields. A face recognition system
performs three steps:

a) DETECTION (it finds the area of the face, i.e., extracts
a face from an image or a video frame).

b) SEGMENTATION (it analyzes the distances between
different points on the face, i.e., eyes, nose, jaws etc.).

c) RECOGNITION/VERIFICATION (it compares the
statistical data with the database).





fig 1: Face Recognition System 
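The detection step above can be prototyped with an off-the-shelf detector. The following is a minimal sketch using OpenCV's bundled Haar cascade; the detector choice and the input file name are illustrative assumptions, not something the paper prescribes.

```python
# Minimal sketch of the detection step (step a above) using OpenCV's
# pre-trained Haar cascade; detector choice and file name are illustrative.
import cv2

def detect_faces(image_path):
    """Return bounding boxes (x, y, w, h) of faces found in the image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Scan the image at several scales and return one box per detected face.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

boxes = detect_faces("input_person.jpg")   # hypothetical input file
```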


In spite of the number of methods for face recognition,
there are a few challenges in face recognition, as follows:

1) POSE VARIATIONS: The existing systems recognize
faces based on frontal images, which can be considered
the ideal images to detect the person. The performance
of face detection algorithms drops when there are large
pose variations.

2) FEATURE OCCLUSION: Beards, hats, glasses or
moustaches may cause a problem for face identification.
These elements may introduce high variability.

3) FACIAL EXPRESSION: Due to different facial
expressions, the features of the face vary, which also
poses a problem for face identification.

4) IMAGING CONDITIONS: Different weather
conditions and the quality of a camera may also affect
the quality of an image, affecting the appearance of a
face [3].

II. RELATED WORK 

In the beginning of the 1970's, face recognition was
treated as a 2D pattern recognition problem [4]. The
distances between important points were used to
recognize known faces, e.g., by measuring the distance
between the eyes or other important points, or by
measuring different angles of facial components. The
following methods are used in the face recognition
process:

1. Holistic Matching Methods 

2. Feature-Based (structural) Methods 

3. Hybrid Methods 

1. HOLISTIC MATCHING METHODS: In the holistic
approach, the complete face region is taken as input to
the face recognition system. The best-known examples
of holistic methods are Eigenfaces [5], Principal
Component Analysis, Linear Discriminant Analysis [6]
and Independent Component Analysis (a minimal
sketch of this idea is given after this list).

2. FEATURE-BASED (STRUCTURAL) METHODS: In
this method, local features such as the eyes, nose and
mouth are extracted, and their locations and local
statistics are fed into a structural classifier. A big
challenge for feature extraction methods is feature
"restoration" [7].

3. HYBRID METHODS: Hybrid face recognition
systems use a combination of both holistic and feature
extraction methods. Generally, 3D images are used in
hybrid methods. The image of a person's face is caught
in 3D, allowing the system to note the curves of the eye
sockets or the shapes of the chin or forehead. The 3D
system usually proceeds through the following stages:
Detection, Position, Measurement, Representation and
Matching.

> DETECTION: Capturing a face either by scanning 
a photograph or photographing a person's face in 
real time. 

> POSITION: Determining the location, size and 
angle of the head. 

> MEASUREMENT: Assigning measurements to 
each curve of the face to make a template with 
specific focus on the outside of the eye, the inside 
of the eye and the angle of the nose. 

> REPRESENTATION: Converting the template 
into a code - a numerical representation of the face. 

> MATCHING: Comparing the received data with
faces in the existing database. In this case, the 3D
image is compared with an existing 3D image [8].
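As an illustration of the holistic approach described in method 1 above, the following sketch builds an Eigenfaces-style representation with PCA; the training images, their sizes and the number of components are assumptions made for the example, not values from the paper.

```python
# Sketch of a holistic (Eigenfaces-style) representation using PCA.
# Training data shape and component count are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def fit_eigenfaces(face_images, n_components=50):
    """face_images: array of shape (n_faces, height, width), already aligned and cropped."""
    X = face_images.reshape(len(face_images), -1)    # flatten each face into one row vector
    pca = PCA(n_components=n_components, whiten=True)
    pca.fit(X)
    return pca

def project(pca, face_image):
    """Project one face into the eigenface subspace; faces are then compared there."""
    return pca.transform(face_image.reshape(1, -1))[0]
```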

The geometric approach is another way to perform face
recognition. The first historical way to recognize people
was based on face geometry. There are many geometric
features based on facial points; they may be generated
from segments, perimeters and areas of figures formed
by those points. The feature set is described in detail in
"Human and machine recognition of faces: a survey"
[9], which helps in comparing the recognition results. It
includes 15 segments between the points and the mean
values of 15 symmetrical segment pairs [10].
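To make the geometric feature idea concrete, the sketch below builds a simple feature vector from pairwise distances between a handful of landmark points; the specific 15 segments of [9] are not reproduced here, and the point names and coordinates are illustrative.

```python
# Sketch of a geometric feature vector: pairwise distances between facial
# landmark points, normalised by the inter-ocular distance for scale invariance.
# Landmark names and coordinates are illustrative, not the feature set of [9].
import numpy as np
from itertools import combinations

def geometric_features(landmarks):
    """landmarks: dict mapping point name -> (x, y) pixel coordinates."""
    scale = np.linalg.norm(
        np.subtract(landmarks["left_eye"], landmarks["right_eye"]))
    points = list(landmarks.values())
    return np.array([np.linalg.norm(np.subtract(p, q)) / scale
                     for p, q in combinations(points, 2)])

features = geometric_features({
    "left_eye": (120, 140), "right_eye": (180, 140), "nose_tip": (150, 180),
    "mouth_left": (130, 220), "mouth_right": (170, 220),
})
```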

III. PROPOSED METHODOLOGY 

The proposed system can track a person’s location. 
This can be done by using image processing, cross pose 


multi-view image generation and pattern matching concepts.


In this process, a 2D image of a person is given as
input. Using the partial least square method, this 2D
image is converted into multi-view images, i.e., cross
pose images at different angles (0°, 30°, 45°, 60° & 90°),
which give an idea of a 3D image. In the proposed
system, there is a single database which is connected
with several cameras, and this database stores the
images extracted from the videos captured by the
cameras. The database has four attributes (image, date,
time & camera_id). The multi-view images generated
from the input image are compared with the database
using the geometric and photometric approaches, and
wherever the images match, the corresponding row is
fetched.
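A possible shape for this central database is sketched below with SQLite; the table and column names follow the four attributes mentioned in the text (image, date, time, camera_id), while the file name and the match_fn hook are illustrative assumptions.

```python
# Sketch of the central database (image, date, time, camera_id) using SQLite.
# File name, table name and the match_fn hook are illustrative assumptions.
import sqlite3

conn = sqlite3.connect("surveillance.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS captures (
        image     BLOB,     -- face image extracted from a camera's video
        date      TEXT,     -- capture date, e.g. '13/05/2014'
        time      TEXT,     -- capture time, e.g. '02:00'
        camera_id INTEGER   -- unique id of the camera that recorded it
    )""")

def fetch_matches(match_fn, query_views):
    """Return (date, time, camera_id) for every stored image that matches any
    of the generated cross-pose views; match_fn is the comparison step."""
    rows = conn.execute("SELECT image, date, time, camera_id FROM captures")
    return [(d, t, cid) for img, d, t, cid in rows
            if any(match_fn(img, view) for view in query_views)]
```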

The fetched rows are stored in a separate file having
three attributes (date, time & camera_id). The records of
this file are arranged in descending order, giving higher
priority to date and then to time. After sorting, the first
row is fetched and the address corresponding to that
camera_id is displayed on the output screen. The
proposed system is explained with the help of the
flowchart in fig 2.

fig 2: Flowchart


IV. STEP-BY-STEP DESCRIPTION OF OPERATION

a) INPUT: A 2D image of a person is given as input, as
shown in fig 3. This image must be of the person to be
located. The name of the person is also entered as input
so that it can be displayed on the output screen.



fig 3: Input Image [11] 


b) PREPROCESSING: This step will remove all the 
noise present in the input image. 




fig 4: Preprocessed Image [11] 



c) GENERATE CROSS POSE IMAGES: The 2D image
is converted into multi-view images using the partial
least square method. It will generate multi-view images
of facial expressions from the available 3D data. The
data in this experiment includes images at five different
angles (0°, 30°, 45°, 60° & 90°).



fig 5: Multi-View Image [11]
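One way to realise the partial-least-squares cross-pose step is to learn, from paired training images, a separate PLS regressor per target angle that maps a flattened frontal face to its view at that angle. The paired training set and the one-model-per-angle design are assumptions made for this sketch, not the paper's exact procedure.

```python
# Sketch of cross-pose generation with partial least squares: one PLS model
# per target angle, learned from paired (frontal, posed) training images.
# The training data and per-angle design are assumptions of this sketch.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

ANGLES = [0, 30, 45, 60, 90]

def train_pose_models(frontal, posed_by_angle, n_components=20):
    """frontal: (n, h*w) flattened frontal faces;
    posed_by_angle: dict angle -> (n, h*w) images of the same subjects."""
    models = {}
    for angle in ANGLES[1:]:                       # 0 degrees is the input itself
        pls = PLSRegression(n_components=n_components)
        pls.fit(frontal, posed_by_angle[angle])
        models[angle] = pls
    return models

def generate_cross_poses(models, frontal_face):
    """Return the input plus its estimated 30, 45, 60 and 90 degree views."""
    views = {0: frontal_face}
    for angle, pls in models.items():
        views[angle] = pls.predict(frontal_face.reshape(1, -1))[0]
    return views
```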


d) CAMERA INTERFACING: In this system, a database
with high storage capacity is required, which will store
the images extracted from the videos captured by
different cameras. These cameras are placed at different
locations, each having a unique id. All the cameras are
connected to a single database. This database has
separate subparts, which store the images extracted
from the videos of each camera.


e) FACE RECOGNITION: The output of the partial
least square method (the cross pose images of the input) is
compared with the database. Wherever the images
match, the corresponding date, time and camera_id are
stored in different variables of the same file. There are
a number of algorithms for identifying faces. This may
be done by analyzing the relative position, size and
shape of the eyes, nose, cheekbones and jaws. This can
be achieved either by using Eigenfaces or by the Line
Edge Map technique.
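For the comparison itself, one simple option consistent with the Eigenfaces route mentioned above is to project both the stored face and each generated cross-pose view into the eigenface subspace and accept a match when their Euclidean distance falls under a threshold. The threshold value in this sketch is illustrative and would have to be tuned.

```python
# Sketch of the matching test: Euclidean distance in the eigenface subspace
# (see the PCA sketch in the related-work section). Threshold is illustrative.
import numpy as np

MATCH_THRESHOLD = 0.6   # must be tuned on real data

def is_match(pca, stored_face, query_view):
    a = pca.transform(stored_face.reshape(1, -1))[0]
    b = pca.transform(query_view.reshape(1, -1))[0]
    return np.linalg.norm(a - b) < MATCH_THRESHOLD
```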

f) GENERATION OF FILE: In this step, the system 
will generate a file, which includes information in a 
tabular form as below. 


Sr.  Date        Time   Camera_id
1.   12/04/2013  13:00  3
2.   13/05/2014  02:00  4
3.   14/05/2011  17:00  1
4.   16/07/2012  18:00  2

table 1: File showing details about an input image
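The file of table 1 could be produced as a small CSV, for instance as below; the file name is an illustrative assumption.

```python
# Sketch of step f: write the matched (date, time, camera_id) rows to a CSV file.
import csv

def write_match_file(matches, path="matches.csv"):   # file name is illustrative
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "time", "camera_id"])
        writer.writerows(matches)
```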

g) SORTING OF RECORDS: The records stored in the
file in the previous step are sorted in descending order,
giving first priority to "date" and then to "time", as
shown below.


Sr.  Date        Time   Camera_id
1.   13/05/2014  02:00  4
2.   12/04/2013  13:00  3
3.   16/07/2012  18:00  2
4.   14/05/2011  17:00  1

table 2: Records in sorted order
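The descending sort of step g, with date taking priority over time, can be expressed by parsing the two fields into a single timestamp and sorting on it, as in the sketch below.

```python
# Sketch of step g: sort records by date first and time second, most recent first.
from datetime import datetime

def sort_matches(matches):
    """matches: list of (date 'dd/mm/yyyy', time 'HH:MM', camera_id) tuples."""
    return sorted(matches,
                  key=lambda r: datetime.strptime(f"{r[0]} {r[1]}", "%d/%m/%Y %H:%M"),
                  reverse=True)

print(sort_matches([("12/04/2013", "13:00", 3), ("13/05/2014", "02:00", 4)]))
```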


h) FINAL OUTPUT (displaying the location): After
generating the sorted list, the first row is fetched from
the file and the address corresponding to the camera_id
present in that row is displayed on the screen as shown
below.


MESSAGE

Name: Ms. Subject
Last seen
Date: 13/05/2014
Time: 02:00
Camera_id: 4
Location: 3rd floor, Computer Technology Department, KITS, Ramtek

fig 6: Message Displaying Location
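Finally, the camera_id of the most recent record can be mapped to a human-readable address and formatted as the on-screen message of fig 6. The camera-to-location table below is an illustrative assumption.

```python
# Sketch of step h: look up the camera's address and format the output message.
# The camera-to-location mapping is an illustrative assumption.
CAMERA_LOCATIONS = {
    4: "3rd floor, Computer Technology Department, KITS, Ramtek",
}

def location_message(name, latest_record):
    date, time, camera_id = latest_record
    return ("MESSAGE\n"
            f"Name: {name}\nLast seen\n"
            f"Date: {date}\nTime: {time}\nCamera_id: {camera_id}\n"
            f"Location: {CAMERA_LOCATIONS[camera_id]}")

print(location_message("Ms. Subject", ("13/05/2014", "02:00", 4)))
```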


V. CONCLUSION AND FUTURE SCOPE 

In the field of image analysis and computer vision, face
recognition is a truly challenging problem. Face
recognition has received a great deal of attention
because of its many applications in various domains.
The analysis of faces from non-frontal views is a largely
unexplored research area. This paper includes an
introductory survey of face recognition technology and
the cross pose generation method. The proposed system
can track the location of any individual simply by using
a 2D image. The system can generate multi-view or
cross pose images which give a 3D effect for the input
image.

Recognizing a face accurately in all poses is still a great
challenge that can be addressed in future work. In the
future, the system can be extended across the whole
country, which requires a large database and leads to the
concept of Big Data.


REFERENCES 

1) Rahimeh Rouhi, Mehran Amiri and Behzad Irannejad,
"A Review on Feature Extraction Techniques in Face
Recognition", Signal & Image Processing: An
International Journal (SIPIJ), Vol. 3, No. 6, December
2012.

2) Rabia Jafri and Hamid R. Arabnia, "A Survey of Face
Recognition Techniques", Journal of Information
Processing Systems, Vol. 5, No. 2, June 2009, pp. 41-61.

3) Dr. Pramod Kumar, Mrs. Monika Agarwal, Miss Stuti
Nagar, "A Survey on Face Recognition System - A
Challenge", International Journal of Advanced Research
in Computer and Communication Engineering, Vol. 2,
Issue 5, May 2013.

4) C. A. Hansen, "Face Recognition", Institute for
Computer Science, University of Tromso, Norway.

5) M. A. Turk and A. P. Pentland, "Face Recognition 
Using Eigenfaces", IEEE, 1991. 

6) S. Suhas, A. Kurhe, Dr. P. Khanale, “Face 
Recognition Using Principal Component Analysis 
and Linear Discriminant Analysis on Holistic 
Approach in Facial Images Database”, IOSR 
Journal of Engineering e-ISSN: 2250-3021, p-ISSN: 
2278-8719, Vol. 2, Issue 12 (Dec. 2012). 

7) W. Zhao, R. Chellappa, P. J. Phillips & A. Rosenfeld,
"Face Recognition: A Literature Survey",


ACM Computing Surveys, Vol. 35, No. 4, December
2003, pp. 399-458.

8) Divyarajsinh N. Parmar et al., "Face Recognition
Methods & Applications", International Journal of
Computer Technology & Applications, Vol. 4 (1), pp.
84-86.

9) R. Chellappa, C. L. Wilson, S. Sirohey and C. S.
Barnes, "Human and machine recognition of faces: a
survey", Proc. of the IEEE, 1995, Vol. 83, pp. 705-739.

10) Manish Choudhary et al., "An Application of Face
Recognition System using Image Processing and Neural
Networks", International Journal of Computer
Technology & Applications, Vol. 3 (1), pp. 45-49.

11) Mahdi Jampour et al., "Multi-view Facial
Expressions Recognition using Local Linear Regression
of Sparse Codes", 20th Computer Vision Winter
Workshop, Paul Wohlhart, Vincent Lepetit (eds.),
Seggau, Austria, February 9-11, 2015.

