Human Identification Based on Geometric Feature Extraction Using a Number of Biometric Systems Available: Review



Introduction
The term biometric is derived from the Greek words bio, which means "life," and metric, which means "the measure of." Biometrics is the automated use of unique and measurable characteristics to establish or verify an identity based on special biometric features derived from physiological and behavioral characteristics (Tiwari, Chourasia, & Chourasia, 2015; Jayaram & Fleyeh, 2013; Mäkinen & Raisamo, 2008; Vaidya, 2015).
However, verification and identification are used interchangeably in the literature on biometric recognition (Mir, Rubab, & Jhat, 2011), although the two terms have distinct meanings. Verification refers to 1:1 matching, in which a person claims an identity and the system confirms or rejects that claim. Identification refers to 1:m matching, in which the user does not claim an identity; instead, the presented biometric is matched against the entire database (K. P., 2011; Himanshu, S., 2013). A biometric system includes all the hardware, associated software, firmware, and network components required for the biometric matching process. (Erno et al., 2008) found that biometrics can provide high assurance to a system, particularly when tied with one or two other forms of verification. These "multimodal" forms of authentication (e.g., passwords together with speech, speech together with gesture, or passwords together with speech and eye features) offer the potential for significant enhancement of security and privacy. A biometric authentication system is a satisfactory solution to authentication problems. However, biometric authentication has disadvantages (K. P., 2011; Mali & Bhattacharya, 2013). The process requires user skills: users must know how to position their fingers, faces, and eyes to be read. Biometrics cannot reduce the cost of current, established authentication methods, and businesses may not implement such security systems because of their technical complexity and high cost. Biometrics requires a large amount of data about an individual to be stored. These systems are not always reliable because a person changes over time, such as when the eyes turn puffy or when the voice changes during illness; the voice of a person changes with flu or throat infection or in the presence of too much environmental noise. The method may also fail to authenticate a person whose fingers have become rough because of work, so a machine may have difficulty identifying such a person precisely. Several methods for retinal scanning are invasive, i.e., a laser light (or other coherent light source) is directed through the cornea of the eye, and an infrared light source is used to highlight the biometric pattern; this process can damage an individual's eye (K. P., 2011; Mali & Bhattacharya, 2013).
A biometric recognition system operates as follows:
1. Acquisition. Acquisition devices capture a sample; for example, a digital device is used to collect data: a finger is placed on a plate for a fingerprint reader, the face, gait, ear, or hand may be recorded by a video or camera device, and an audio device records speech.
2. Comparison. The acquired sample data are then compared with the template.
3. Feature Extraction. Features are extracted from the sample using different techniques.
4. Matching. The acquired feature set is compared against the template set in the database. The results show whether the identity of a user is authenticated.
The requirements for certified features are described as follows (Vaidya, 2015; K. P., 2011; Wu, Zhuang, Student, Long, Lin & Xu, 2015; Tiwalade, Francis & Idachab, 2011; Duquenoy, Carlisle & Kimppa, 2008):
1) Universality. The feature should be present in every individual so that it can be used to identify a person. A feature is not appropriate for classification if it cannot be extracted from each individual.
2) Individuality. The feature should effectively discriminate among individuals.
3) Stability. The feature should be stable and remain unchanged over a long period, i.e., regardless of age and environment.
4) Collectability. The feature should be measurable quantitatively, which has a remarkable effect on applications. An approach based on a feature with high collectability, such as a vision-based application, is suitable for real-time or online applications, whereas an approach based on a feature with lower collectability can only be used in complex or offline applications.
This paper presents a review of human identification based on geometric feature extraction using several available biometric systems, drawing on published papers, articles, conference papers, and our experiments in previous works in this field. The rest of this review is organized as follows. Biometric technology is discussed in section 2. Tables comparing various aspects, such as geometric features, are presented, and the results are concluded in section 3.

Physiological Biometrics
Physiological biometrics are relatively stable human physical characteristics based on direct measurements of a part of the human body. Fingerprint, face, iris, ear, and hand or palm recognition belong to this group. These measurements are unchanging and unalterable without significant duress. Different techniques are used to extract these features, such as appearance- or geometry-based techniques. Our current investigation focuses on geometric feature extraction using several available biometric systems.

Fingerprint Geometry
The fingerprint is the pattern of ridges and valleys at the tip of a finger and is used to verify a person's identity (Mir et al., 2011). Local and global geometric features are extracted. Fingerprint global features are identified by the local orientation of fingerprint ridges, i.e., the orientation field curves. These features occur in the form of a core and/or a delta and are normally at the central region of the fingerprint; they are referred to as singularities. A core is the area around the center of the fingerprint loop, and a delta is the area where the fingerprint ridges triangulate. The fingerprint local features are attributes that provide the minute details of the fingerprint pattern. These features are known as the minutiae, which include ridge endings, ridge bifurcations, and, although uncommon, islands. The minutiae constitute the uniqueness of every human fingerprint pattern. Local features play an important role in fingerprint matching because of their detailed nature, given that such detail is the key to distinguishing each fingerprint (Ishmael et al., 2009; S. Msiza, Leke-Betechuoh, V. Nelwamondo & Msimang, 2009).
The enhanced data from the proposed methods (Khazaei & Mohades, 2007; Poulos, Magkos, Chrissikopoulos & Alexandris, 2003) are submitted to specific segmentation (datasets) using computational geometric algorithms implemented in MATLAB. Onion layers (convex polygons) are then created from these datasets. The smallest layer of the constructed onion layers is isolated from the fingerprint in vector form and is referred to as the reference polygon. This characteristic is stored in a reference database for subsequent verification. The proposed feature extraction can be used for accurate and secure fingerprint verification because it is based on a specific area in which the fingerprint exhibits a range of dominant brightness values. The proposed method promises very small false acceptance and false rejection rates because it is based on specific segmentation. Biometric applications will gain universal acceptance in digital technologies only when the number of false rejections/acceptances approaches zero.
The crop region is defined as the region of interest (w × w) around a base point of an input fingerprint image. The width of the region of interest depends on the resolution of the input image. In this system, the region of interest is determined using the number of similar pixels around the base point. The point of maximum curvature of the concave ridges defines the base point, whose coordinates are denoted (x0, y0), and the base line is the horizontal line across the base point. The base point is also used to align the input and enrolment fingerprints. A rotation matrix is used to align the input and template images; fingerprint alignment makes both images have the same orientation, where α is the difference in orientation between the template and input images. The crossing number (CN) is used to extract minutiae points. Rutovitz's definition of CN for a pixel p is as follows (Valarmathy & Kumar, 2012): CN(p) = 0.5 × Σ_{i=1}^{8} |P(i) − P(i+1)|, with P(9) = P(1), where P(i) is the i-th pixel in the 8-neighborhood of p, with a binary value of 0 or 1. A pixel with CN = 1 corresponds to a ridge ending, and a pixel with CN = 3 to a ridge bifurcation.
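The crossing-number computation above can be sketched as follows. This is a minimal illustration, not code from the cited paper; the function name and the 3×3 window layout are our own.

```python
import numpy as np

def crossing_number(window):
    """Rutovitz crossing number for the 3x3 binary neighborhood of a ridge pixel.

    `window` is a 3x3 array of 0/1 values with the candidate pixel at the center.
    CN = 0.5 * sum(|P(i) - P(i+1)|) over the 8 neighbors taken circularly.
    CN == 1 marks a ridge ending, CN == 3 a bifurcation.
    """
    # 8-neighborhood in circular order, closed by repeating the first pixel
    p = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
         window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    p.append(p[0])
    return 0.5 * sum(abs(p[i] - p[i + 1]) for i in range(8))

# A ridge ending: only one of the eight neighbors belongs to the ridge
ending = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [0, 0, 0]])
print(crossing_number(ending))  # 1.0
```

In a full minutiae extractor, this window would be slid over every ridge pixel of the thinned binary fingerprint, keeping the pixels where CN equals 1 or 3.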

Face Geometry
The face is the most commonly used biometric characteristic for recognizing a person (Bakshe & Patil, 2014). The geometry-feature-based approach analyzes local features, such as the nose and eyes, and their geometric relationships (Bakshi & Singhal, 2014). Image processing techniques extract feature points, such as the eyes, nose, and mouth, and these points are then used as input data for the application. In geometry-based techniques, features are extracted using the sizes and relative positions of important components of images: the edges and directions of the important components, or of the regional images containing them, are first detected, and feature vectors are then built from these edges and directions (Bakshi et al., 2014).
Filters, such as the Canny filter, have been used to detect the eye and/or mouth image regions, whereas transform methods, such as the Hough transform, have been used to detect the eyes and to perform gradient analysis. (Kim, Lim & Mi-Hye Kim, 2015) presented a method to detect the image regions that contain the eyes and mouth in color face images. To detect the regions containing the left eye, the right eye, and the mouth, they first used a filter technique and thresholding to convert the images to binary images, followed by a connected-region algorithm to detect the regions containing the eyes and the mouth. They then calculated the feature vector, called a "component feature vector". The bounding-box locations of the feature segments obtained in the previous step are used to calculate the heights and widths of the left eyebrow, left eye, right eyebrow, right eye, nose, and mouth. The distances between the centers of the left eye and the left eyebrow, the right eye and the right eyebrow, and the mouth and the nose are also calculated. Thus, the following 15 parameters are obtained and used as the feature vector: height of the left eyebrow; width of the left eyebrow; height of the left eye; width of the left eye; height of the right eyebrow; width of the right eyebrow; height of the right eye; width of the right eye; height of the nose; width of the nose; height of the mouth; width of the mouth; distance between the centers of the left eyebrow and the left eye; distance between the centers of the right eyebrow and the right eye; and distance between the centers of the nose and mouth (Kim et al., 2015).
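Assembling the 15-parameter component feature vector can be sketched as follows, assuming the segmentation step has already produced a bounding box for each of the six components. The component names and the (x, y, w, h) box format are illustrative, not from the cited paper.

```python
import math

def center(box):
    # box = (x, y, w, h); return the center point of the bounding box
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def component_feature_vector(boxes):
    """Build the 15-element geometric feature vector described above.

    `boxes` maps hypothetical component names to bounding boxes (x, y, w, h)
    produced by the connected-region step.
    """
    order = ["left_eyebrow", "left_eye", "right_eyebrow", "right_eye", "nose", "mouth"]
    v = []
    for name in order:
        _, _, w, h = boxes[name]
        v.extend([h, w])  # height and width of each component (12 values)
    # three center-to-center distances complete the 15 parameters
    v.append(dist(center(boxes["left_eyebrow"]), center(boxes["left_eye"])))
    v.append(dist(center(boxes["right_eyebrow"]), center(boxes["right_eye"])))
    v.append(dist(center(boxes["nose"]), center(boxes["mouth"])))
    return v
```

The resulting list can be fed directly to whatever classifier the application uses.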
In previous research (Mashagba, E. F., 2016), geometry-based features were extracted by using the mathematical definition and properties of an ellipse. Global features were calculated for each video sequence frame from the ellipse fitted to the face region: the horizontal center (X0) and vertical center (Y0) of the detected face, where a and b are the semi-major and semi-minor axes (half of the major and minor axes of the ellipse), respectively, and the area enclosed by the ellipse. The two foci (the term "focal points" is also used) of an ellipse are two special points, F1 and F2, on the ellipse's major axis that are equidistant from the center point. The sum of the distances from any point P on the ellipse to the two foci is constant and equal to the major axis (PF1 + PF2 = 2a). The eccentricity of an ellipse, denoted by ε or e, is the ratio of the distance between the two foci to the length of the major axis, that is, e = 2f/2a = f/a. The ratio of the dimensions and the four-quadrant inverse tangent (arctangent) of the real parts of the semi-major axis (a) and semi-minor axis (b) are calculated. The distance from the center C to either focus is f = ae, which can also be expressed in terms of the major and minor radii (Mashagba, E. F., 2016).
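The ellipse-derived global features follow directly from the definitions above. The sketch below (function name ours) computes them from the semi-major axis a and semi-minor axis b.

```python
import math

def ellipse_features(a, b):
    """Geometric features of the fitted face ellipse.

    a = semi-major axis, b = semi-minor axis (a >= b > 0).
    """
    area = math.pi * a * b        # area enclosed by the ellipse
    f = math.sqrt(a * a - b * b)  # distance from the center to either focus
    e = f / a                     # eccentricity, e = 2f / 2a = f / a
    ratio = b / a                 # ratio of the minor to the major dimension
    return area, f, e, ratio

print(ellipse_features(5.0, 3.0))  # f = 4.0, e = 0.8
```

Note that f = a·e follows immediately: substituting e = f/a recovers the focal distance, consistent with the text above.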

Hand or Palm Geometry
Hand geometry refers to the geometric structure of the hand, which includes the lengths and widths of the fingers and the width of the palm, among others. The palm print is the region between the wrist and fingers. Palm print features, such as ridges, singular points, minutia points, principal lines, wrinkles, and texture, can be used for personal verification (Mir et al., 2011). (Mathivanan, Palanisamy & Selvarajan, 2012) located the local minima and local maxima points, which correspond to fingertips and finger valleys. In addition, they identified the orientation of each finger, as well as four points on the finger contour (two points on each side of the fingertip) at fixed distances from the fingertip. Two middle points are computed from corresponding points on either side and are joined to obtain the finger orientation. Points at the center and bottom parts of the finger are not considered in estimating the orientation, as certain fingers are non-symmetric in these parts. Once the finger orientation and the fingertip and valley points are determined, extracting a rectangular ROI from the fingers is a straightforward task. Similarly, based on the two finger valley points (between the little-ring and middle-index fingers), a fixed ROI can be extracted. Hand-geometry features are extracted from the acquired intensity images of the hand. Features considered in their work include finger lengths, finger widths at equally spaced distances along the finger area, and finger perimeters.
The feature extraction module extracts the hand-geometry features; the handprint features are calculated by using a reference point as follows (Poonam & Sipi, 2013). The hand contour is obtained from the black-and-white image. The Cartesian coordinates of the hand contour are converted to polar coordinates (radius and angle), with the center of the hand base taken as the coordinate origin. The peaks of the radius are located at the finger ends, and the minima of the radius mark the valleys between fingers. To obtain the exterior base of the index and little fingers, the slope of the line going from the index-middle finger valley to the ring-little finger valley is determined. The exterior of the thumb is identified as the intersection of the contour and the line going from the middle-ring finger valley to the index-thumb valley. Once the most important points of the hand are located, the geometric features are determined by means of measurements. The geometric features of the index, middle, and ring fingers were used; the little finger and thumb were not used because of the non-uniform illumination of that region in their system. The geometric feature vector is obtained immediately once the finger ends and valleys have been detected. Each finger is characterized as a triangle whose three vertices are the fingertip and the two side valleys of the finger. Approximately 40 width measurements of the index, middle, and ring fingers were obtained. The first 20% of each finger was discarded to avoid the problem of rings. Three different measurement vectors were obtained independently for each finger; the three vectors characterize each user and were combined at the score level (Aythami, Miguel, Francisco, Jesús & Carlos, 2008).
In this module, various hand geometric features are extracted: finger length, finger width, palm length, palm width, and hand length. To find these features, the fingertips, valley points, and landmark points were identified, and the features were then extracted (Mandeep & Amardeep, 2015).
An algorithm for feature extraction was created in the MATLAB programming environment and is based on counting pixel distances in specific areas of the hand. Given that the system uses a special surface with pegs to fix the hand in the appropriate position, the pixel distance for a given measurement can be obtained. The algorithm looks for white pixels between two given points and computes a distance by using geometric principles. The result is a vector of 21 elements (Peter & Dušan, 2007):
Widths: each finger is measured at three different heights; the thumb is measured at two heights.
Heights: the height of each finger, including the thumb, is obtained.
Palm: two measurements of palm size.
By scanning the pixels at the bottom of the image from left to right, the left-most pixel of the hand image, S1, and the right-most pixel, E1, are located. The reference point is the middle point between S1 and E1. The next step involves finding all fingertips and valley points of the hand. The distances between the reference point and each contour point of the hand, from S1 to E1, are measured by the Euclidean distance (Nidhi, Vipul, Neelesh & Pragya, 2013), where (x, y) is a point on the contour and (xr, yr) is the reference point. Comparing each distance with those of neighboring points on the hand contour, the fingertips are the points with the largest distances and the valley points those with the smallest. The resulting positions of fingertips and valley points are marked as circles. The features extracted in this research are the length of each finger, the width of each finger at three locations, and the width of the palm, for a total of 21 features. These features are found as follows. Finger baselines: the finger baselines of the middle and ring fingers are obtained by connecting the valley points on both sides of each finger; a baseline is the line linking two valley points. Finger lengths: finger lengths are obtained by measuring the distances from the fingertips to the middle points of the finger baselines. Finger widths: finger widths are measured at three locations: the first at the middle of the finger length, the second at one-third, and the last at two-thirds of the finger length. Palm width: the palm width is the distance from b1 to b2 in the database (Nidhi et al., 2013).
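The fingertip and valley detection described above can be sketched as follows. The helper names and the neighbor-window size are assumptions; a real system would run this over the traced hand contour from S1 to E1.

```python
import math

def radial_profile(contour, ref):
    """Euclidean distance from the reference point to each hand-contour point."""
    xr, yr = ref
    return [math.hypot(x - xr, y - yr) for x, y in contour]

def local_peaks(d, win=2):
    """Indices whose distance is the maximum within +/- win neighbors:
    candidate fingertips. Local minima, found symmetrically, give the
    valleys between fingers."""
    peaks = []
    for i in range(win, len(d) - win):
        window = d[i - win:i + win + 1]
        if d[i] == max(window) and d[i] > d[i - win] and d[i] > d[i + win]:
            peaks.append(i)
    return peaks

# Synthetic radial profile with two "fingers"
print(local_peaks([1, 2, 5, 2, 1, 2, 6, 2, 1]))  # [2, 6]
```

On a real contour the window would span more points, and smoothing the profile first helps suppress spurious peaks caused by contour noise.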

Iris and Retina Geometry
The iris image I(x, y) is remapped from raw Cartesian coordinates to polar coordinates I(r, θ), where the radius r lies in the unit interval [0, 1]. The normalization step reduces the distortion of the iris caused by pupil movement and simplifies subsequent processing (Bhawna & Shailja, 2010; Roselin, Chirch & Waghmare, 2013).
Pupil-based features: The pupil is defined to be circular or round but is not an actual circle. Therefore, the geometric properties of a circle can identify the pupil, and these features can be used for recognition. The following are pupil-based features (Poonguzhali & Ezhilarasan, 2015):
• pupil roundness (PR)
• pupil largeness (PL)
• pupil smoothness (PS)
PR: Roundness measurement requires tracing the iris through 360°, so that the consistency of the diameter can be measured at a number of orientation points on the iris. In general, roundness is defined as a measure of sharpness at the corners and edges of a circle; this measure is related to the sphericity and compactness of the circle. In simple terms, the radii around the iris are measured at regular intervals. The roundness feature is calculated from the diameter at various orientations, where the ratio between the maximum value and the minimum value is calculated at the horizontal (h), vertical (v), left (l), and right (r) orientations at 0°, 45°, 90°, and 135°, respectively (Poonguzhali & Ezhilarasan, 2015).
PL: The largeness of the pupil is described by the radius of the pupil. As the pupil is not an actual circle, the general diameter calculation of the distance from the center to any point on the pupil is insufficient. The diameter is therefore calculated in eight directions on the pupil, at 0° (horizontal), 90° (vertical), 30°, 45°, 75°, 120°, 135°, and 165°. PL is then defined accordingly (Poonguzhali et al., 2015).
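The multi-direction diameter measurement behind PR and PL can be sketched as follows. The `boundary` callable is a hypothetical stand-in for tracing the segmented pupil edge (angle in, radius out); the function names are ours, not from the cited paper.

```python
import math

def diameters(boundary, angles_deg):
    """Pupil diameter measured along each orientation, taken as the chord
    through the center: radius at the angle plus radius at the opposite angle.

    `boundary` maps an angle in radians to the pupil radius in that direction.
    """
    out = []
    for ang in angles_deg:
        r1 = boundary(math.radians(ang))
        r2 = boundary(math.radians(ang + 180))
        out.append(r1 + r2)
    return out

def pupil_roundness(ds):
    # Ratio between the largest and smallest measured diameters:
    # 1.0 for a perfect circle, larger as the pupil deviates from roundness.
    return max(ds) / min(ds)

# A perfectly circular pupil of radius 10 gives roundness 1.0
ds = diameters(lambda t: 10.0, [0, 45, 90, 135])
print(pupil_roundness(ds))  # 1.0
```

PL follows the same pattern with the eight angles listed above, keeping the diameters themselves rather than their ratio.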

PS: The smoothness of the iris can be represented by the curvature of the circle. The curvature (K) of a circle is defined as the reciprocal of its radius (Poonguzhali & Ezhilarasan, 2015).
Collarette-based features: The collarette area is insensitive to pupil dilation and is not affected by the eyelid or the eyelashes. The collarette is concentric with the pupil, and the radius of the area is restricted within a certain range. The collarette can be defined as a snaky, scalloped line that splits the iris into the ancillary zone and the ciliary zone. It is restricted to the inner half of the iris and contains either radial spokes or dots. Given that collarette-based features cannot be detected directly from the iris, the collarette must be detected first, and the geometric measures computed afterwards. The proposed method uses the adjusted-angle zigzag sampler to detect the collarette: in the given image, the collarette is detected along the line, and its height is denoted hs(x), where s is the sampler at location x along the axis (Poonguzhali & Ezhilarasan, 2015).
Collarette roundness (CR): Once the collarette is extracted from the iris, the next step is determining its geometric features. The roundness feature of the collarette is computed as the ratio between the maximum and minimum values of the six diameters of the collarette (Poonguzhali & Ezhilarasan, 2015).
Collarette iris ratio (CIR): The CIR is computed from the diameter of the iris. The diameter of the iris is approximately 10-11 mm. The collarette occupies one-third of the iris and is approximately 1.5 mm away from the pupil. Hence, the distance between the collarette and the iris diameter (ID) is calculated at eight points, where the iris is divided into equally spaced ordinates with respect to the polar coordinates. The distance is computed for each intersection point on the edge of the iris, and the CIR is then computed (Poonguzhali & Ezhilarasan, 2015).
Iris-based features: In the proposed work, the geometric features of the iris, rather than its texture features, are used for recognition (Poonguzhali & Ezhilarasan, 2015). Tracing the boundaries of the eyes is important, as finding the outline of the eyes makes computational localization of the position of the irises easier. Several calculations were performed on the images to detect the actual position of the iris. The fine eye region is detected in the second stage, after the face region is marked in face detection; eye features, namely, the two eye corners and an iris circle, are then extracted from this region in the final stage (Namrata & M. S. Burange, 2016). According to our previous experiments in this field, the eye is extracted as a face part and is mainly used in face recognition; in addition, more detailed iris features are extracted for identification purposes.

Ear Geometry
The shape of the ear and the structure of the cartilaginous tissue of the pinna are distinctive (Asmaa, Kareem & Elmahdyc, 2015). The ear offers an ideal solution for human identification: ears are visible, their images are easy to capture, and the ear structure does not change. The boundary of the binary image is obtained. The ear is next to the face and can provide valuable additional information to supplement facial images (Mudit, Ajinkya & Sagar, 2015). The largest object is detected, and the minimum Euclidean distance between every pixel and all other pixels is obtained. Finally, the matrix is sorted, four distances are obtained, and the centroid of the largest object and the mean of the ear image are used as feature values to ensure uniqueness between ear images. The seven extracted values of the feature vector include the following (Mudit et al., 2015): Distance 1 (D(1)).

The X coordinate of the centroid and the Y coordinate of the centroid.
The mean is calculated, where X̄ is the mean of the image and N is the number of pixels (Asmaa et al., 2015).
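The seven-element ear feature vector above can be sketched as follows. This is an assumption-laden illustration: the cited papers do not fully specify which four distances are kept, so here they are taken as the centroid-to-extent distances in the four axis directions.

```python
import numpy as np

def ear_feature_vector(binary):
    """Sketch of a 7-element ear feature vector: four distances, the centroid
    (x, y), and the image mean.

    `binary` is a 2-D 0/1 array containing the segmented ear (largest object).
    The choice of distances is our own stand-in for the sorted distance matrix
    described in the text.
    """
    ys, xs = np.nonzero(binary)
    cy, cx = ys.mean(), xs.mean()      # centroid of the largest object
    mean = binary.sum() / binary.size  # mean of the ear image
    # four distances: centroid to the object's extent in each axis direction
    d = sorted([cy - ys.min(), ys.max() - cy, cx - xs.min(), xs.max() - cx])
    return d + [cx, cy, mean]
```

The vector length (4 + 2 + 1 = 7) matches the seven extracted values listed above.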
Feature extraction is conducted in the following two stages:
1. Shape and edge detection.
2. Euclidean distances and angles of triangle detection.
The proposed approach can detect ears of different shapes, including round, oval, triangular, and rectangular. If the obtained shape matches, then the other feature vectors, such as the Euclidean distances and the angles of a triangle, are compared, and the person with the maximum feature-vector match is identified (Luaibilafta, R., 2013).

Behavioral Biometrics
Behavioral biometrics reflect an individual's psychological makeup, although physical traits, such as size and gender, exert a major influence. This approach is based on data derived from an action performed by the user and thus measures characteristics of the human body only indirectly. It includes signature/handwriting, gait, voice, gesture, and keystroke dynamics, which measure the time spacing of typed words (Mir et al., 2011).

Signature/Handwriting Geometry
The handwritten signature was the first biometric verification technique established in society (Mir et al., 2011). The total distance that the pen travels during the handwritten signature is extracted as the Euclidean distance over all points, where xi is the coordinate in the x direction and yi is the coordinate in the y direction. The speeds vx and vy are expressed as functions of time: for a point (xm, ym) at time tm, vx is the speed at tm in the x direction and vy the speed at tm in the y direction. The acceleration values ax and ay can be calculated from the changing speeds and represent the acceleration at tm in the x and y directions. Tk is the total time of the handwritten signature. The total time, stroke length, and pen-lift time can vary between different signatures of the same user; these features oscillate about their average and variance and thus can serve as biometric features of a person. Two time-related features are the total signature time T and the time-down ratio Tdr, which is the ratio of pen-down time to the total time. By considering ad hoc parameters, such as speed, acceleration, and pen-downs, a simple system can be improved; for a biometric handwriting system, improvement means minimizing the false acceptance and false rejection rates (Syed et al., 2015).
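The dynamic quantities above (speed, acceleration, total pen travel) can be sketched with simple finite differences over the sampled trajectory; the function name and sampling format are ours.

```python
def pen_dynamics(xs, ys, ts):
    """Speed, acceleration, and total travel along a sampled pen trajectory.

    xs, ys are pen coordinates and ts the sampling times. Finite differences
    give the per-axis speeds vx, vy and accelerations ax, ay at each sample.
    """
    vx = [(xs[i + 1] - xs[i]) / (ts[i + 1] - ts[i]) for i in range(len(ts) - 1)]
    vy = [(ys[i + 1] - ys[i]) / (ts[i + 1] - ts[i]) for i in range(len(ts) - 1)]
    ax = [(vx[i + 1] - vx[i]) / (ts[i + 1] - ts[i]) for i in range(len(vx) - 1)]
    ay = [(vy[i + 1] - vy[i]) / (ts[i + 1] - ts[i]) for i in range(len(vy) - 1)]
    # total pen-travel distance: sum of Euclidean steps between samples
    length = sum(((xs[i + 1] - xs[i]) ** 2 + (ys[i + 1] - ys[i]) ** 2) ** 0.5
                 for i in range(len(xs) - 1))
    return vx, vy, ax, ay, length
```

The time-down ratio Tdr would follow by summing only the intervals during which the pen is in contact with the surface and dividing by the total time T.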
Geometric global and local features are extracted separately as follows (Dakshina, Phalguni & Jamuna, 2010):
Width: For a binary signature image, the width is the distance between two points in the horizontal projection; the width must contain more than 3 pixels of the image.
Height: The height is the distance between two points in the vertical projection and must contain more than 3 pixels of the image for a binary image.
Aspect ratio: Aspect ratio is defined as width-to-height ratio of a signature.
Horizontal projection: The horizontal projection is computed from both the binary and skeletonized images. The number of black pixels is counted from the horizontal projections of the binary and skeletonized signatures.
Vertical projection: A vertical projection is defined as the number of black pixels obtained from vertical projections of binary and skeletonized signatures.
Area of black pixels: The area of black pixels is obtained by counting the number of black pixels in the binary, thinned, and HPR signature images separately.
Normalized area of black pixels: The normalized area is found by dividing the area of black pixels by the area of the signature image (width × height). The normalized area of black pixels is calculated from the binary, thinned, and HPR images.
Center of gravity: The center of gravity of a signature image is obtained by adding all x and y locations of the gray pixels and dividing by the number of pixels counted. The resulting two numbers (one for x and one for y) give the location of the center of gravity.
Maximum and minimum black pixels: The maximum and minimum black pixels are counted in the horizontal projection over the smoothened horizontal projection. These values are the highest and lowest frequencies of black pixels in the horizontal projection, respectively.
Global baseline: If the vertical projection of the binary signature image shows one peak point, the global baseline corresponds to this point; otherwise, the global baseline is taken as the median of the two outermost maximum points. The global baseline is defined as the median of the pixel distribution. The differences between the smoothened and original curves of the vertical projection above and below the baseline are known as the upper and lower edge limits, respectively.
Middle zone: The middle zone is the distance between the upper and lower edge limits. Each signature image is divided into 25 equal grid regions.
The local features for each grid region are considered as global features. Both feature sets are combined into a feature vector and then used as input to the classifiers (Dakshina et al., 2010).
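Several of the static global features above (aspect ratio, area of black pixels, center of gravity, normalized area) can be sketched in a few lines; this is a minimal illustration over a binary image with ink pixels equal to 1, not the authors' implementation.

```python
import numpy as np

def static_features(img):
    """Four static global features of a binary signature image (ink = 1)."""
    ys, xs = np.nonzero(img)
    width = xs.max() - xs.min() + 1            # horizontal extent of the signature
    height = ys.max() - ys.min() + 1           # vertical extent
    aspect_ratio = width / height              # width-to-height ratio
    area = len(xs)                             # area of black (ink) pixels
    cog = (xs.mean(), ys.mean())               # center of gravity
    normalized_area = area / (width * height)  # ink area over bounding-box area
    return aspect_ratio, area, cog, normalized_area
```

Horizontal and vertical projections follow the same pattern, summing the image along each axis with `img.sum(axis=1)` and `img.sum(axis=0)`.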
The feature vector is extracted as follows (Pallavi & Salokhe, 2015):
Maximum horizontal and vertical histograms: The horizontal histogram is calculated by going through each row of the signature image and counting the number of black pixels; the row with the maximum number of black pixels is recorded as the maximum horizontal histogram. The maximum vertical histogram is obtained analogously over the columns.
Centers of mass: The signature image is split into two equal parts, and the center of mass is determined for each part.
The normalized area of the signature is the ratio of the area of the signature to the area enclosed by its bounding box, where the area of a signature is the number of pixels comprising it.
The aspect ratio is the ratio of the width of the signature image to its height. This value is used because the width or height of a person's signature may vary, but their ratio remains approximately constant.
Tri-surface feature: Two different signatures may have the same area. Therefore, to increase the discriminative power of the features, three surface features are used: the signature is divided into three equal parts, and the area of each part is calculated.
Sixfold surface feature: The signature is divided into three equal parts, and the bounding box is found for each part. Then, the center of mass of each part is calculated, a horizontal line is drawn through it, and the areas of the signature above and below the center of mass within the bounding box are calculated, yielding six features (Pallavi et al., 2015). Newly proposed features include the Walsh coefficients of the pixel distribution, a code-word histogram based on a clustering technique (vector quantization), spatial moments of code words, grid and texture features, and successive geometric centers of depth 2 (Pallavi et al., 2015).

Gait Geometry
Gait is a particular manner of moving involving the lower-body joints. This new biometric aims to recognize people by their style of walking, which contains physiological and behavioral individuality (Sajid, Zhongke, Abdul & Hafeez, 2013; Mashagba, E. F., 2015).
A triangle is formed from the joint data (hip, knee, and ankle) extracted from the motion file (BVH file), and the triangle area of each frame is computed. First, the joint parameter values of the hip joint are computed for each frame during walking.
The main motive of the triangle formulation is to compute the area of a triangle and ultimately the average area over the per-frame triangle areas of a subject, via the statistical method described in (Sajid et al., 2013).
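The per-frame triangle area and its average can be sketched as follows (the shoelace formula on 2-D joint projections; the function names are ours).

```python
def triangle_area(hip, knee, ankle):
    """Area of the triangle formed by the hip, knee, and ankle joint
    positions of one frame (shoelace formula on 2-D (x, y) projections)."""
    (x1, y1), (x2, y2), (x3, y3) = hip, knee, ankle
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

def average_area(frames):
    # frames: list of (hip, knee, ankle) coordinate tuples, one per BVH frame
    areas = [triangle_area(*f) for f in frames]
    return sum(areas) / len(areas)

print(triangle_area((0, 0), (4, 0), (0, 3)))  # 6.0
```

The averaged area then serves as the per-subject gait signature described above.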
The features of interest points are used to extract motion parameters from the sequence of gait figures to represent human gait patterns. The features are represented as joint angles and vertex points, which are used together for gait classification. For gait feature extraction, the width and height of the human silhouette are measured. The dimensions of the human silhouette, the joint angles, the angular velocity, and the gait velocities of body segments are calculated as gait features, and the horizontal coordinates of each body point are calculated (Nyo Nyo Htwe & Nu War, 2013).
Gait parameters include cadence, the number of steps taken per unit time. Two steps make up a single gait cycle, so cadence is a measure of half-cycles. The cycle time, also known as the "stride time," in seconds, is cycle time (s) = 120/cadence (steps/min). Walking speed is the distance covered in a given time. The instantaneous speed varies from one instant to another during the walking cycle, whereas the average speed is the product of the cadence and the stride length. The cadence, in steps per minute, corresponds to half-strides per 60 s or full strides per 120 s. The speed can thus be calculated as speed (m/s) = stride length (m) × cadence (steps/min)/120. If cycle time is used in place of cadence, the calculation becomes considerably more straightforward: speed (m/s) = stride length (m)/cycle time (s). Walking speed thus depends on the two step lengths, which depend to a large extent on the duration of the swing phase on each side. When pathology affects one foot more than the other, an individual will usually try to spend a shorter time on the "bad" foot and a correspondingly longer time on the "good" foot. Shortening the stance phase on the "bad" foot means bringing the good foot to the ground sooner, shortening both the duration of the swing phase and the step length on that side. Thus, a short step length on one side indicates problems with single support on the other side (Ashutosh, Vipin, Y. K. Jain & Surender, 2011).
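The cadence, cycle-time, and speed relations above translate directly into code; this is a minimal sketch of the stated formulas.

```python
def cycle_time(cadence):
    """Cycle (stride) time in seconds from cadence in steps/min:
    one stride = two steps, so cycle time = 120 / cadence."""
    return 120.0 / cadence

def walking_speed(stride_length, cadence):
    """Average speed (m/s) = stride length (m) * cadence (steps/min) / 120,
    equivalently stride length / cycle time."""
    return stride_length * cadence / 120.0
```

For example, at a cadence of 120 steps/min the cycle time is 1 s, and a 1.5 m stride gives an average speed of 1.5 m/s by either formula.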
The trajectories of the gait figure contain the general gait parameters, such as stride length, cycle time, and speed, and provide a basic description of the gait motion. The number of frames determines the period of the gait motion during one gait cycle in the image sequence; the frame rate of the SOTON database was 25 frames/s, and the stride length can be directly estimated from the physical dimensions of the image plane. The coordinates of the forward displacements of the gait figures during one gait cycle determine the stride length. In addition, the kinematic parameters are usually characterized by the joint angles between body segments and their relationships to the events of the gait cycle. In the gait figures, the joint angles can be determined from the coordinates of the body points. By definition, the joint angles are measured as one joint relative to another, so the relative angles in each joint are derived from the extracted angle values. In normal walking, the torso of the human body can be considered almost vertical. Thus, the relative hip angle is identical to the extracted value, and the relative knee angle can be defined from the extracted hip and knee angles. The trajectories of the gait figure contain numerous kinematic characteristics of human movement, including linear and angular positions, their displacements, and their time derivatives. Joint angles were efficiently interpolated by trigonometric-polynomial functions (Jang-Hee & Mark, 2011).
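A sketch of deriving segment and relative joint angles from body-point coordinates (image coordinates, y increasing downward). The angle-from-vertical convention and function names are our assumptions, not the method of Jang-Hee & Mark (2011).

```python
import math

def segment_angle(p_prox, p_dist):
    """Angle (degrees) of a body segment, measured from the vertical,
    given proximal and distal joint (x, y) image coordinates."""
    dx = p_dist[0] - p_prox[0]
    dy = p_dist[1] - p_prox[1]
    return math.degrees(math.atan2(dx, dy))

def relative_knee_angle(hip, knee, ankle):
    """Relative knee angle as the difference between the thigh angle
    (hip -> knee) and the shank angle (knee -> ankle)."""
    return segment_angle(hip, knee) - segment_angle(knee, ankle)
```

A straight leg gives a relative knee angle of zero, since thigh and shank share the same direction.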
In previous work (Mashagba, E. F., 2015; Mashagba, E. F., Mashagba, F. & Nassar, 2014), the x and y coordinates of the hip joint were used to calculate the thigh segment angle, and the knee and ankle coordinates were used to calculate the shank segment angle. For each frame, lower-limb joint and segment kinematic features were extracted in part to form the gait motion trajectories. The feature vectors of gait motion parameters were extracted from each frame by using image segmentation methods and were categorized into five categories (Mashagba, E. F. et al., 2014). Two of these categories were used to form gait motion trajectories. Category one, gait angular velocity: angular velocities of the hip, knee, thigh, and shank. Category two, gait angular acceleration: angular accelerations of the hip, knee, thigh, and shank for each image sequence (Mashagba, E. F., 2015).
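The two trajectory categories (angular velocity and angular acceleration) can be sketched with simple forward differences over a per-frame angle sequence; this is an illustration under our own conventions, not the authors' code.

```python
def angle_velocity(angles, fps):
    """Per-frame angular velocity (deg/s) from a joint-angle trajectory
    sampled at fps frames per second, using forward differences."""
    return [(angles[i + 1] - angles[i]) * fps
            for i in range(len(angles) - 1)]

def angle_acceleration(angles, fps):
    """Per-frame angular acceleration (deg/s^2) as the forward
    difference of successive angular velocities."""
    v = angle_velocity(angles, fps)
    return [(v[i + 1] - v[i]) * fps for i in range(len(v) - 1)]
```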

Voice Geometry
Speaker/voice verification combines physiological and behavioral factors to produce speech patterns that can be captured by speech-processing technology. Inherent properties of the speaker, such as fundamental frequency, nasal tone, cadence, and inflection, among others, are used for speech authentication. Both the combination of all features and separate features were attempted. Other features that were omitted from the final evaluation include the following: 1) zero-crossing rate and 2) short-time energy.
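For reference, the two omitted features are straightforward to compute per frame of audio samples; this is a generic sketch, not the evaluation code of the cited work.

```python
def zero_crossing_rate(frame):
    """Fraction of successive sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in frame) / len(frame)
```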

Gesture Geometry
Gesture recognition is primarily concerned with analyzing the functionality of the human mind. The major aim of gesture recognition is to recognize specific human gestures and use them to convey information or control devices (Karthik, Manoj, M.V.V.N. & Naveen, 2010). A gesture is the use of motions of the limbs or body as a means of expressing or communicating a meaning or emotion. Gestures include body movements (e.g., palm-down and shoulder-shrug) and postures (e.g., angular distance). Face and body gestures are two of the several channels of nonverbal communication that arise together. Messages can be expressed through face and gesture (Hatice, Massimo & Tony, 2004) and often occur in conjunction with speech. Thus, representative gestures that can replace speech are not considered gestures. In noisy situations, humans depend on access to more than one modality; thus, nonverbal modalities come into play. Listeners rely on gestural cues when speech is ambiguous or in a speech situation with certain noises (Hatice et al., 2004).
The use of YUV skin-color segmentation followed by the CAMSHIFT algorithm facilitates effective detection and tracking, as the centroid values can easily be obtained by calculating the moments at each point. As the hand is the largest connected region, it is segmented from the body. Afterward, the position of the hand centroid is calculated in each frame by first calculating the zeroth and first moments and then using this information to compute the centroid. The centroid points of successive frames are then joined to form a trajectory. This trajectory shows the path of the hand movement and thus determines the hand-tracking procedure. For nose detection and tracking, edge-detection techniques yield comparatively better results and detect the nose for faces at various angles (Karthik et al., 2010).
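The moment-based centroid step can be sketched as follows for a binary hand mask; this is a minimal illustration of cx = m10/m00, cy = m01/m00, not the CAMSHIFT implementation itself.

```python
import numpy as np

def hand_centroid(mask):
    """Centroid (cx, cy) of a binary hand mask from the zeroth moment
    m00 and the first moments m10, m01."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size                   # zeroth moment: pixel count
    if m00 == 0:
        return None                 # no foreground pixels
    m10, m01 = xs.sum(), ys.sum()   # first moments
    return (m10 / m00, m01 / m00)
```

Joining the per-frame centroids then yields the hand trajectory described above.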
In a previous thesis (Mashagba, E. F., 2010), hand and face gestures were used to automate sign language. By extracting the hand and face regions in each frame, the position, angle, and velocity are tracked to extract feature vectors that classify the sign.

Keystroke Dynamics Geometry
Keystroke dynamics is not what you type but how you type (Alsultan & Warwick, 2013; Shivshankar & Priyanka, 2015). During typing, the user types the text as usual, with no extra work. The original idea of using keystroke patterns for user identification originated from identifying the sender of Morse code on a telegraph machine, where operators were able to identify the sender of a message by the rhythm, pace, and syncopation of the received taps (Shivshankar & Priyanka, 2015). The manner in which user data are collected in free-text keystroke systems differs considerably from that of fixed-text systems: a user is normally monitored for a period, such as several days. From all typing data collected during this time, the system infers the typing pattern that the user typically follows and stores the data as the user database.
The times for typing single letters or combinations of letters, such as di-graphs or tri-graphs, and even longer combinations are used in free-text keystroke systems. One condition is that a particular letter or combination of letters included in the template should be typed often enough during the enrollment phase; this condition renders the mean and standard deviation statistically sound. Timing features (Alsultan & Warwick, 2013; Shivshankar & Priyanka, 2015; Giot, Mohamad El-Abed & Rosenberger, 2009) are calculated from the press and release times of every key that the user types. These data are processed in a specific manner before being stored in the user's profile. The use of keystroke dynamics has been investigated previously as a more usable authentication alternative. Researchers have focused on free-text keystroke systems and their ability to provide continual identity verification during the whole time that the user is using the system (Alsultan & Warwick, 2013; Shivshankar & Priyanka, 2015).
Features such as keystroke latency, duration of key hold, and typing rate are extracted as follows (Shivshankar & Priyanka, 2015): 1) Session time is the total time spent by the user on the system, calculated as the difference between the starting time and the user response time: session time = starting time − user response time.
2) Keystroke latency is the time interval between the key release of the first keystroke and the key press of the second keystroke. However, the latency of longer n-graphs (n > 2) is defined as the time interval between the key-down events of the first and last keystrokes that compose the n-graph.
3) Held time (or dwell time) is the time (in milliseconds) between a key press and a key release of the same key.
4) Sequence is a list of consecutive keystrokes. For example, "REVIEW" is a sequence. A sequence can be of any length (minimum two). In this example, the sequence is a valid English word, but this need not be the case; thus, "REV" and "IEW" are also valid sequences from the same keystroke stream. A length-2 sequence is called a digraph, a length-3 sequence a trigraph, and so on; a general sequence is an n-graph.
5) Typing speed is the total number of keystrokes or words per minute. Typing speed can be an indicator of different emotions.
Two features were extracted during the keystroke: keystroke duration and keystroke latency. Keystroke duration is the interval of time during which a key is pressed and then released. Keystroke latency is the interval of time between pressing two consecutive keys. The interval of time between releasing a key and pressing its successor is known as flight time, whereas dwell time is the time taken to hold down a key (Pavithra & Sri Sathya, 2015).
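The dwell-time and flight-time features above can be sketched from timestamped key events; the tuple layout and function name are our assumptions, and, as noted, conventions for "latency" vary across the cited papers.

```python
def keystroke_features(events):
    """Timing features from (key, press_ms, release_ms) tuples given in
    typing order. Here dwell = hold time of each key, and flight =
    release of one key to press of the next."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell, flight
```

For instance, a key pressed at 0 ms and released at 80 ms, followed by one pressed at 150 ms, gives a dwell of 80 ms and a flight of 70 ms.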

Conclusion
Biometric technologies have recently played an important role as a suitable method of identification. These technologies were originally proposed for high-security professional applications but are now applied as a major part of developing online applications, offline applications, and standalone security systems, enabling human interaction with non-traditional network-based delivery systems and the ability to verify a human being remotely.
In this work, we focused on geometric feature extraction techniques. The most important issue in these techniques is choosing appropriate geometric features that can identify a human being, which can be achieved by conducting several experiments with different geometric features. Another issue is that security applications need to use multiple biometric types. Most existing techniques that use geometric features do not perform real-time recognition, so this point must be addressed to propose suitable online applications. In this work, we established a comparison between different biometrics. Table 1 compares several biometric technologies from the point of view of the geometric features extracted in different research. From the table, we can see that geometry-based methods are applied simply and efficiently with different biometrics. According to previous experiments, the eye region is extracted as a face part for use mainly in face recognition; in addition, the extracted eye region shows more detail in comparison with iris features. Certain biometric technologies, such as fingerprint and signature, present more geometric features than others, as shown in the table.
Speaker recognition systems are classified as text-dependent (fixed-text) and text-independent (free-text). Text-dependent systems perform better than text-independent systems because foreknowledge of what is said can be exploited to align speech signals into more discriminative classes. However, text-dependent systems require a user to re-pronounce several specified utterances, which usually contain the same text as the training data (G. Chiţu, M. Rothkrantz, Wiggers & C. Wojdel, 2007).

Table 1. Comparisons between different biometrics
• Fingerprint (Ishmael et al., 2009): minutiae (ridge endings, ridge bifurcations, and islands); onion layers (convex polygons) created from data sets, with the smallest convex polygon of the constructed onion layers isolated from the fingerprint in vector form.
• Face (Luaibilafta, 2013): feature points such as eyes, nose, and mouth; the directions and edges of these points are detected, and feature vectors, such as Euclidean distances and angles of a triangle, are built from the edges and directions.
• Face (Kim et al., 2015): image regions containing the eyes and mouth are detected, and 15 parameters are taken as feature vectors: the heights and widths of the left and right eyebrows, the left and right eyes, the nose, and the mouth; the distances between the centers of each eyebrow and its eye; and the distance between the centers of the nose and the mouth.
• Iris (Bhawna et al., 2010; Roselin et al., 2013): the iris region; Cartesian and polar coordinates; coordinates of the pupil and iris boundaries along the θ direction; the mean, variance, and standard deviation of the circles; pixel correlation; pupil-based features (pupil roundness, pupil largeness, and pupil smoothness); and collarette-based features (collarette-iris ratio and collarette-pupil ratio).
• Hand gesture (Mashagba, E. F. et al., 2010): velocity added to the extracted feature vector: a) tip points of all fingers, including the thumb; b) starting and ending reference points; c) centroid of the hand; d) major axis length; e) minor axis length; f) perimeter.
• Hand geometry (Poonam & Sipi, 2013): tip points of all fingers, including the thumb; starting and ending reference points; centroid of the hand; major and minor axis lengths; perimeter; geometrical features of the index, middle, and ring fingers, each characterized as a triangle whose three vertices are the fingertip and the two side valleys of the finger; approximately 40 width measures for the index, middle, and ring fingers, with each of these fingers measured at three different heights and the thumb at two; and the heights of all fingers and the thumb.
• Voice (Michalevsky & Talmon, 2010): features used for the final evaluation: 1) Mel-frequency cepstral coefficients; 2) pitch statistics (mean, variance, minimum, and maximum) and pitch-derivative statistics; 3) voiced/unvoiced frame percentage.
• Signature (Syed et al., 2015): xi, the coordinate in direction x; yi, the coordinate in direction y; the speeds vx and vy as functions of time; tm, the time of movement; vx and vy, the speeds at time tm in directions x and y; the accelerations ax and ay at time tm in directions x and y; Tk, the total time of the handwritten signature; stroke length; and the time for lifting the pen.
• Gesture (Mashagba, E. F., 2010): the position of the hand centroid calculated in each frame; the centroid points joined to form a trajectory showing the path of the hand movement; the hand and face regions extracted in each frame; position; angle; and velocity.