Novel Method for More Precise Determination of Oscillometric Pulse Amplitude Envelopes



Introduction
The oscillometric method for blood pressure measurement is widely used in electronic tonometers because of its low cost, speed and convenience (Raamat, 2010). The principle of this method is that oscillations of the arterial wall can be detected by sensors fixed on the occluding cuff when pulsatile blood flows through the blood vessels. After the oscillation waveform is acquired, the mean arterial pressure (MAP) can be measured directly by searching for the maximal oscillometric pulsation during deflation (or inflation) of the cuff. Typically, the values of systolic blood pressure (SBP) and diastolic blood pressure (DBP) are determined by the maximum amplitude algorithm (MAA) (S. Lee, 2013). As Figure 1 shows (Doi, 2012), the oscillation amplitude of each pulse is divided by the amplitude at MAP to obtain a ratio. If the ratio equals a predefined value before MAP, the corresponding cuff pressure is regarded as SBP; if it equals another predefined value after MAP, the corresponding cuff pressure is regarded as DBP.
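The MAA ratio search described above can be sketched in a few lines of Python. The function below is a minimal illustration only; the 0.7 and 0.5 default ratios are the values used in the experimental section of this paper, and real devices use manufacturer-specific values.

```python
import numpy as np

def maa_blood_pressure(cuff_pressure, pulse_amplitude, sbp_ratio=0.7, dbp_ratio=0.5):
    """Maximum amplitude algorithm (MAA) on samples taken during deflation.

    cuff_pressure is monotonically decreasing; pulse_amplitude holds the
    oscillation amplitude detected at each cuff pressure.  A sketch only:
    it assumes the maximal pulse is not the first or last sample.
    """
    i_map = int(np.argmax(pulse_amplitude))       # maximal oscillation -> MAP
    a_map = pulse_amplitude[i_map]

    # SBP: pressure above MAP whose normalized amplitude is closest to sbp_ratio
    i_sbp = int(np.argmin(np.abs(pulse_amplitude[:i_map] / a_map - sbp_ratio)))
    # DBP: pressure below MAP whose normalized amplitude is closest to dbp_ratio
    i_dbp = i_map + int(np.argmin(np.abs(pulse_amplitude[i_map:] / a_map - dbp_ratio)))

    return cuff_pressure[i_sbp], cuff_pressure[i_map], cuff_pressure[i_dbp]
```

Because the amplitudes are discrete, this nearest-ratio search is exactly why a continuous envelope fit is needed in practice, as the next paragraph explains.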
Because the oscillometric pulse amplitudes are discrete, the predefined ratio may never be observed exactly. Therefore, a continuous envelope of the oscillometric pulsations is generally used instead of the discrete amplitudes. As a result, the accuracy of SBP and DBP depends to a large extent on how accurately the envelope is determined. Oscillometric waveforms inevitably contain some noise. To obtain an envelope that fits the ideal, noise-free oscillometric pulse amplitudes as precisely as possible, one approach is to reduce the noise before curve fitting. Some earlier works use digital filters to diminish the interference (Baker, 1997; J. Y. Lee, 2002). The results are satisfying, but their good performance is limited to the time domain. Other researchers use fuzzy logic to estimate the noise level of each pulse and delete those with relatively large disturbance (Lin, 2003).
The key to this method is defining an appropriate fuzzy rule, which may vary between devices and can hinder its widespread adoption; useful information is also likely to be removed by mistake. Both methods are followed by curve fitting. Yet curve fitting alone is capable of denoising to a certain degree if it is carefully designed, and, more importantly, it is easy to use.
Several methods are currently available for envelope curve fitting. Gregor Geršak et al. regarded the curve connecting contiguous peaks with straight lines as the envelope (Geršak, 2006). It is easy to implement and the measurement accuracy can sometimes be satisfactory, but the envelope may not represent the true smooth envelope since it is made of straight lines. A better method is least-squares fitting with a polynomial function (Zheng, 2011), which is smoother and also very effective in many cases. But the peak values of the oscillometric waveform do not always change slowly, especially when there is severe disturbance caused by motion artifacts (Moer, 2011). That means a polynomial fit with an order ranging from 4 to 11 will not fit those points well enough (S. Lee, 2011). Silu Chen proposed asymmetric Gaussian and Lorentzian functions (Chen, 2013) instead of symmetric ones (Deng, 2013) to fit the envelopes. Compared to polynomial functions, asymmetric Gaussian and Lorentzian functions are smoother and exhibit almost no oscillation (Forouzanfar, 2011). However, as the name implies, this method is only suitable for oscillometric waveforms whose profiles are close to the proposed asymmetric Gaussian or Lorentzian functions.
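For reference, an asymmetric Gaussian of the kind discussed above can be written with separate widths on the two sides of the peak. The parameterization below is an illustrative sketch, not necessarily the exact model of Chen (2013):

```python
import numpy as np

def asymmetric_gaussian(t, a, mu, sigma_l, sigma_r):
    """Bell curve with different widths on each side of the peak mu.

    a       peak amplitude
    sigma_l width used where t < mu
    sigma_r width used where t >= mu
    """
    t = np.asarray(t, dtype=float)
    sigma = np.where(t < mu, sigma_l, sigma_r)
    return a * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))
```

The function is continuous at mu and reduces to the ordinary symmetric Gaussian when sigma_l == sigma_r, which is what lets it follow envelopes that rise and fall at different rates.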
In this work, an artificial neural network (ANN) is proposed as the envelope fitting algorithm. An ANN can, in theory, map any nonlinear function to any given accuracy, and it is considerably more robust than the aforementioned approaches.

Method
The proposed method is tested after some requisite preparations, namely data acquisition and peak detection. Data acquisition in our work is conducted mainly in the LabVIEW environment with auxiliary devices; the peak detection and curve fitting algorithms run in the MATLAB Script node in LabVIEW. A diagram of the setup for blood pressure data acquisition is illustrated in Figure 2. The cuff and the cuff inflation/deflation system, which consists chiefly of an air pump and its controlling circuit, are connected by a hose. The control signal that tells the air pump when to start or halt at a given rotating speed is sent by an NI USB-6008 DAQ device, which is also used to acquire data from the embedded pressure sensors and to communicate with the PC. A PC running the LabVIEW program sends commands to the cuff inflation/deflation system, receives the data and performs further processing.

Peak Detection
Basically, there are four algorithms for peak detection: Hilbert transform, amplitude threshold, slope threshold and zero crossing. These methods can all detect most of the peaks when the QRS complexes are neither too small nor too wide, but oscillometric waveforms do not always meet this condition. Therefore, a modified method proposed by M. Sabarimalai Manikandan et al. is applied in this work.
The method consists of four sequential steps: digital filtering, Shannon energy envelope extraction, peak-finding logic and a true R-peak locator. In the main step, peak-finding logic, the Hilbert transform and zero crossing are combined to obtain better performance (Manikandana, 2012).
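The Shannon energy stage of this pipeline is straightforward to sketch; the filtering, peak-finding logic and true-peak locator of the full method are omitted here, and the window length is an illustrative choice:

```python
import numpy as np

def shannon_energy_envelope(x, win=25):
    """Smoothed Shannon energy of a signal (one stage of the method only).

    The Shannon energy -x^2 * log(x^2) emphasizes medium-amplitude activity
    and attenuates both low-level noise and isolated large spikes.
    """
    xn = np.asarray(x, dtype=float)
    xn = xn / (np.max(np.abs(xn)) + 1e-12)        # normalize to [-1, 1]
    se = -xn ** 2 * np.log(xn ** 2 + 1e-12)       # Shannon energy (eps avoids log 0)
    kernel = np.ones(win) / win                   # moving-average smoothing window
    return np.convolve(se, kernel, mode="same")
```

Peaks are then located on this smooth, single-lobed envelope rather than on the raw oscillatory signal, which is what makes the subsequent zero-crossing logic reliable.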

ANN for Curve Fitting
Since Warren McCulloch and Walter Pitts introduced the first "threshold logic", the ANN has made much progress in both theory and application, and over 40 different ANN architectures are available at present. Whether the structure and learning algorithm are properly selected affects the performance of an ANN to a great extent (Ren, 2014).
Multilayer Perceptrons (MLPs), Back Propagation (BP) networks, Radial Basis Function (RBF) networks, Hopfield networks and Elman networks are widely implemented architectures. MLPs are often used for speech recognition, image recognition and machine translation. BP networks can avoid converging to a local minimum, and their convergence is relatively faster than that of MLPs. RBF networks excel at exact interpolation but perform poorly with noisy data. Hopfield networks and Elman networks are applied to simulate associative memory more often than to other tasks. For these reasons, the BP network is chosen as the ANN architecture for this curve fitting.
Figure 3. Typical structure of an ANN

Generally, BP networks include three layers: an input layer, a hidden layer and an output layer (Lan, 2015). As shown in Figure 3, the number of neurons in the input/output layer equals the number of input/output parameters. However, there is still no universal method for determining the optimum number of neurons in the hidden layer (Hunter, 2012); it is determined through experimentation in this research.
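The forward pass of such a three-layer network is compact. The sketch below uses hypothetical sizes (1 input, 10 hidden neurons, 1 output, matching the curve-fitting setting of this paper) with a sigmoid hidden layer and a linear output layer, a common choice for regression:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass of a three-layer network.

    x  shape (N, d_in)   input patterns
    W1 shape (d_in, H)   input-to-hidden weights, b1 shape (H,)
    W2 shape (H, d_out)  hidden-to-output weights, b2 shape (d_out,)
    """
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))   # sigmoid hidden layer
    return h @ W2 + b2                          # linear output layer

# hypothetical sizes: 1 input (cuff pressure), 10 hidden neurons, 1 output (amplitude)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)
y = forward(np.linspace(0.0, 1.0, 5).reshape(-1, 1), W1, b1, W2, b2)
```

Training then adjusts W1, b1, W2 and b2 to minimize the MSE between y and the detected peak amplitudes.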
Weight initialization plays a very important role in the training speed of an ANN, and many studies have focused on this topic. A weight initialization algorithm based on the Cauchy inequality and a linear algebraic method is used in this work (Yam, 2000). Suppose we have P training patterns, and the lth layer of an ANN with L layers in total consists of n_l + 1 neurons, where the output of the last neuron is fixed at 1 (the bias). The output a^{l+1}_{p,j} of the jth neuron in the (l+1)th layer for the pth training pattern is determined by Eq. 1 and Eq. 2:

net^{l+1}_{p,j} = Σ_{i=1}^{n_l+1} w^l_{i,j} a^l_{p,i}    (1)

a^{l+1}_{p,j} = f(net^{l+1}_{p,j})    (2)

where 1 ≤ l ≤ L-1, 1 ≤ p ≤ P, net^{l+1}_{p,j} is the weighted sum that serves as the input of the jth neuron in layer l+1, and f is the activation function. From the Cauchy inequality, the weight bounds θ^l_p and θ^l are defined by Eq. 3 and Eq. 4:

θ^l_p = s√k / √((n_l + 1) Σ_{i=1}^{n_l+1} (a^l_{p,i})²)    (3)

θ^l = min_{1≤p≤P} θ^l_p    (4)

where s ≈ 4.59 because the sigmoid function is applied here as the activation function, and k varies with the distribution of the weights: if the weights obey a normal distribution, k = 1; if they obey a uniform distribution, k = 3.
The weight initialization algorithm can be described in the following steps (l starts at 1):

Step 1: Estimate θ^l_p and θ^l with Eq. 3 and Eq. 4.

Step 2: Initialize the weights of layer l according to a uniform distribution with parameters -θ^l and θ^l, or a normal distribution N(0, (θ^l_p)²).

Step 3: With the initialized weights, calculate the outputs of layer l+1 by entering the training patterns, then set l = l+1.

Step 4: If l ≤ L-2, jump to Step 1. Otherwise, find the weights of the last layer W^{L-1} by minimizing the following 2-norm with the least-squares method:

min ‖A^{L-1} W^{L-1} − T‖₂

where t_{i,j} are the elements of the P×n_L target matrix T and A^{L-1} is the matrix of layer-(L-1) outputs.
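Eq. 3 and Eq. 4 bound the initial weights of one layer; the sketch below implements one plausible reading of that bound. The exact θ formula and the shape of `A` are assumptions of this sketch and should be checked against Yam (2000) before reuse:

```python
import numpy as np

def init_layer(A, s=4.59, k=3.0, n_out=10, seed=0):
    """Initialize one weight layer from a Cauchy-inequality bound (sketch).

    A      P x (n_l + 1) matrix of layer-l outputs (last column all ones).
    s      active-region bound of the sigmoid (about 4.59 in the text).
    k      3 for uniformly distributed weights, 1 for normal weights.
    Returns theta and an (n_l + 1) x n_out weight matrix drawn from
    the uniform distribution U(-theta, theta).
    """
    P, n = A.shape
    theta_p = s * np.sqrt(k) / np.sqrt(n * (A ** 2).sum(axis=1))  # per-pattern bound
    theta = float(theta_p.min())                                  # tightest bound wins
    rng = np.random.default_rng(seed)
    return theta, rng.uniform(-theta, theta, size=(n, n_out))
```

The point of the bound is that every initial weighted sum stays inside the active region of the sigmoid, so no neuron starts out saturated and training converges faster.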
To fit the curve as quickly as possible, the Levenberg-Marquardt algorithm is chosen as the learning algorithm (Azar, 2013). Besides that, the training goal is another parameter that must be carefully considered. If it is too small, overfitting will occur and the relationship between inputs and outputs will not be described correctly; if it is too big, the shape of the trained curve will have poor repeatability.
The training goal is the target value of the mean square error (MSE), which is expressed as Eq. 8:

MSE = (1/(N·n)) Σ_{k=1}^{N} Σ_{i=1}^{n} (y_i(k) − ŷ_i(k))²    (8)

where N is the sample size of the calibration set, n is the number of output parameters, y_i(k) is the ith desired value in the kth training pattern, and ŷ_i(k) is the corresponding output of the trained ANN. The results in Figure 4 demonstrate that as the number of neurons in the hidden layer increases, the fitted curves become more oscillatory and pass through more of the peaks. This is because the ANN's ability to describe the relationship between the inputs and outputs becomes stronger, which is a disadvantage in this circumstance because it overfits the peaks contaminated by noise. It is therefore apparent that the optimum number is around 10.
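Eq. 8 translates directly into code; the following helper is a small sketch of that definition:

```python
import numpy as np

def mse(targets, outputs):
    """Mean square error of Eq. 8: squared errors averaged over the
    N training patterns and the n output parameters."""
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    return float(((targets - outputs) ** 2).sum() / targets.size)
```

Training stops once this value falls below the chosen goal, which is why the goal directly trades off fitting accuracy against repeatability.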

Determination of Hidden Layer Size and Training Goal
Numbers of neurons in the hidden layer ranging from 5 to 15 are then examined, and the results show no significant change in the shape of the curves as the number varies. The curves sometimes change considerably, however, when we repeatedly calculate the envelopes with a fixed number of neurons, which is unacceptable in blood pressure measurement. The value of the training goal is therefore lowered.
After numerous experiments, the ANN with 10 neurons in the hidden layer and a training goal of 0.002 is found to be much better: it has good repeatability of the shape and a relatively fast learning speed. Figure 5 shows the training performance of the ANNs with 9, 10 and 11 neurons in the hidden layer. The ANN with 11 neurons performs slightly better than the one with 10 neurons, but the latter is preferred since it is less computationally intensive. The ANN with 9 neurons is not as stable, and it sometimes takes much longer to reach the training goal, as Figure 5(a) shows.

Comparison of Asymmetric Gaussian/Lorentzian Functions and ANN
Each of the 48 subjects is tested with the three algorithms. The criteria for evaluating the performance of the algorithms are the MSE and the time they consume. It should be pointed out that we use a BP network with a fixed architecture: 11 neurons in the hidden layer, a training goal of 0.0025 and the Levenberg-Marquardt learning algorithm. Figure 6 shows some representative results of the comparison.
As Figure 6 shows, the oblate "bell curves" of the asymmetric Gaussian/Lorentzian functions look almost the same as those of the ANNs, but their MSEs are much higher. The architecture of the ANN implemented in this research is relatively simple, though its computational load is slightly higher than that of the other two approaches, which also apply Levenberg-Marquardt as the optimization algorithm. In most cases, all three algorithms complete their calculation in less than a second (3.3 GHz CPU and 4 GB RAM). However, the ANN sometimes takes over 10 seconds to finish. A simple solution to this problem is to lower the maximum epoch to about 50. As a result, the training goal sometimes cannot be reached, but in all of our experiments the resulting fits still have much lower MSEs than the other two methods.

The ANN and the other two methods are then used to measure blood pressure for further comparison. The 48 subjects are divided into two groups according to their health condition: Group 1 contains 36 healthy subjects, and the other 12 subjects have cardiovascular disease. The cuff pressure corresponding to 70% of the amplitude of the MAP pulse when the cuff pressure is above MAP is taken as SBP, and DBP is acquired when the amplitude falls to 50% of the MAP amplitude after MAP. Reference blood pressure values are measured by the auscultatory method. The oscillometric and auscultatory measurements are performed simultaneously by a skilled nurse with the same cuff; that is, the cuff is connected to both the oscillometric device and a mercury manometer. SBP and DBP by auscultation are defined by the appearance and disappearance of the Korotkoff sounds during deflation. Table 1 shows the standard deviation (SD) of the three methods when the subjects are divided into the two groups.
The results imply that all three algorithms perform well, and none performs significantly better than the others when the subjects are all healthy. But when they deal with oscillometric waveforms distorted by cardiovascular disease, the ANN is more precise than the other two methods, which are more easily misled by the shape of the oscillometric waveforms.

Conclusion
Curve fitting of the envelope is crucial in blood pressure measurement. After careful consideration, an ANN is implemented for curve fitting instead of asymmetric Gaussian/Lorentzian functions, and its topology is selected experimentally. Through abundant tests, it is found that 11 neurons in the hidden layer and a training goal of 0.0025 are, in a sense, optimum parameters for the BP network in this curve fitting. This architecture is then compared with the widely used methods. The results indicate that it outperforms the other two algorithms in terms of MSE and has an acceptable computing time. By properly reducing the maximum epoch, the problem that the ANN sometimes consumes too much time to reach the training goal is solved while its MSE remains at a lower level.

Figure 1. Illustration of MAP, SBP and DBP determination with MAA

Figure 2. Block diagram of the data acquisition system

Figure 5. Training performance of the three ANNs with different numbers of neurons in the hidden layer: (a) 9 neurons; (b) 10 neurons; (c) 11 neurons

Figure 6. Representative results of the comparison, with MSE in brackets

Table 1. Summary of the comparison between ANN, AGM (Asymmetric Gaussian Method) and ALM (Asymmetric Lorentzian Method) in terms of SD