etd@IISc Collection:
http://hdl.handle.net/2005/19
http://hdl.handle.net/2005/2491
Title: Analysis Of Multichannel And Multimodal Biomedical Signals Using Recurrence Plot Based Techniques
Authors: Rangaprakash, D
Abstract: For most naturally occurring signals, especially biomedical signals, the underlying physical process generating the signal is often not fully known, making it difficult to obtain a parametric model. Therefore, signal processing techniques are used to analyze the signal and non-parametrically characterize the underlying system that produced it. Most real-life systems are nonlinear and time-varying, which poses a challenge while characterizing them. Additionally, multiple sensors are used to extract signals from such systems, resulting in multichannel signals which are inherently coupled. In this thesis, we counter this challenge by using Recurrence Plot based techniques for characterizing biomedical systems such as the heart or brain, using signals such as heart rate variability (HRV), electroencephalogram (EEG) or functional magnetic resonance imaging (fMRI), respectively, extracted from them.
In time series analysis, it is well known that a system can be represented by a trajectory in an N-dimensional state space, which completely represents an instance of the system behavior. Such system characterization has been done using dynamical invariants such as the correlation dimension, Lyapunov exponent, etc. Takens has shown that when the state variables of the underlying system are not known, one can obtain a trajectory in 'phase space' using only the signals obtained from such a system. The phase space trajectory is topologically equivalent to the state space trajectory. This enables us to characterize the system behavior from only the signals sensed from it. However, estimates of the correlation dimension, Lyapunov exponent, etc., are vulnerable to non-stationarities in the signal and require a large number of sample points for accurate computation, both of which are significant concerns in the case of biomedical signals. Alternatively, a technique called Recurrence Plots (RP) has been proposed, which addresses these concerns, apart from providing additional insights. Measures to characterize RPs of single and two channel data are called Recurrence Quantification Analysis (RQA) and cross RQA (CRQA), respectively. These methods have been applied with a good measure of success in diverse areas. However, they have not been studied extensively in the context of experimental biomedical signals, especially multichannel data.
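The delay embedding and recurrence plot construction described above can be sketched in a few lines of numpy. The embedding dimension, delay and threshold below are illustrative choices, not the parameters used in the thesis.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay (Takens) embedding: map a scalar series to
    dim-dimensional phase-space vectors with delay tau."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=3, tau=2, eps=0.5):
    """Binary recurrence matrix: R[i, j] = 1 when phase-space points
    i and j lie within distance eps of each other."""
    v = delay_embed(np.asarray(x, dtype=float), dim, tau)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return (d <= eps).astype(int)

# A noiseless sinusoid recurs periodically, giving diagonal structure.
t = np.linspace(0, 8 * np.pi, 200)
R = recurrence_plot(np.sin(t))
print(R.shape)  # square matrix over the embedded points
```

The matrix is symmetric with a unit main diagonal (every point recurs with itself); RQA measures are then statistics of this binary matrix.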
In this thesis, the RP technique and its associated measures are briefly reviewed. Using the computational tools developed for this thesis, the RP technique has been applied to select single channel, multichannel and multimodal (i.e. multiple channels derived from different modalities) biomedical signals. Connectivity analysis is demonstrated as post-processing of RP analysis on multichannel signals such as EEG and fMRI. Finally, a novel metric, based on the modification of a CRQA measure, is proposed, which shows improved results.
For the case of single channel signals, we have considered a large database of HRV signals recorded from 112 normal and abnormal (anxiety disorder and depression disorder) subjects, in both supine and standing positions. Existing RQA measures, Recurrence Rate and Determinism, were used to distinguish between normal and abnormal subjects with an accuracy of 58.93%. A new measure, MLV, has been introduced, using which a classification accuracy of 98.2% is obtained.
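The two standard RQA measures used here have simple definitions on the binary recurrence matrix: Recurrence Rate is the density of recurrence points, and Determinism is the fraction of those points lying on diagonal lines of at least a minimum length. A minimal numpy sketch (the toy matrix and minimum line length are illustrative; the thesis's MLV measure is not reproduced here):

```python
import numpy as np

def recurrence_rate(R):
    """RR: density of recurrence points in the recurrence matrix."""
    return R.sum() / R.size

def determinism(R, lmin=2):
    """DET: fraction of recurrence points on diagonal lines of length
    >= lmin (main diagonal excluded; R assumed symmetric)."""
    n = R.shape[0]
    on_lines = 0
    for k in range(1, n):                 # upper off-diagonals
        run = 0
        for v in np.append(np.diag(R, k), 0):  # trailing 0 closes runs
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    denom = np.triu(R, 1).sum()
    return on_lines / denom if denom else 0.0

# Tridiagonal toy RP: every off-diagonal recurrence point lies on a
# diagonal line, so DET = 1; RR is just the density of ones.
R = (np.eye(5) + np.eye(5, k=1) + np.eye(5, k=-1)).astype(int)
print(recurrence_rate(R), determinism(R))
```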
Correlation between probabilities of recurrence (CPR) is a CRQA measure used to characterize phase synchronization between two signals. In this work, we demonstrate its utility with application to multimodal and multichannel biomedical signals. First, for the multimodal case, we have computed running CPR (rCPR), a modification proposed by us which allows dynamic estimation of CPR as a function of time, on multimodal cardiac signals (electrocardiogram and arterial blood pressure) and demonstrated that the method can clearly detect abnormalities (premature ventricular contractions); this has potential applications in cardiac care, such as assisted automated diagnosis. Second, for the multichannel case, we have used 16 channel EEG signals recorded under various physiological states such as (i) global epileptic seizure and pre-seizure and (ii) focal epilepsy. CPR was computed pairwise between the channels and a CPR matrix of all pairs was formed. A contour plot of the CPR matrix was obtained to illustrate synchronization. Statistical analysis of the CPR matrix for 16 subjects with global epilepsy showed clear differences between pre-seizure and seizure conditions, and a linear discriminant classifier was used to distinguish between the two conditions with 100% accuracy.
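CPR, as described in the recurrence-plot literature, correlates the recurrence-probability curves p(tau) of the two signals. A minimal sketch follows; for brevity it builds scalar recurrence matrices without embedding, and the threshold and lag range are illustrative assumptions.

```python
import numpy as np

def scalar_rp(x, eps):
    """Recurrence matrix of a scalar series (no embedding, for brevity)."""
    d = np.abs(x[:, None] - x[None, :])
    return (d <= eps).astype(float)

def prob_recurrence(R, max_lag):
    """p(tau): probability that the trajectory recurs after lag tau,
    estimated as the density of the tau-th diagonal of the RP."""
    return np.array([np.diag(R, k).mean() for k in range(1, max_lag + 1)])

def cpr(R1, R2, max_lag=50):
    """CPR: Pearson correlation between the two recurrence-probability
    curves; close to 1 for phase-synchronized signals."""
    p1 = prob_recurrence(R1, max_lag)
    p2 = prob_recurrence(R2, max_lag)
    p1 = (p1 - p1.mean()) / p1.std()
    p2 = (p2 - p2.mean()) / p2.std()
    return float(np.mean(p1 * p2))

# Sine and cosine are phase-locked (constant phase offset), so their
# recurrence-probability curves share the same periodic peaks.
t = np.linspace(0, 20 * np.pi, 400)
R_sin = scalar_rp(np.sin(t), 0.3)
R_cos = scalar_rp(np.cos(t), 0.3)
print(cpr(R_sin, R_cos))  # high for phase-locked signals
```

The pairwise CPR matrix over 16 EEG channels is obtained by applying `cpr` to every channel pair.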
Connectivity analysis of multichannel EEG signals was performed by post-processing the CPR matrix to obtain a global, network-level characterization of the brain. Brain connectivity using the thresholded CPR matrix of multichannel EEG signals showed clear differences in the number and pattern of connections in the brain connectivity graph between epileptic seizure and pre-seizure. Corresponding brain headmaps provide meaningful insights about synchronization in the brain in those states. K-means clustering of connectivity parameters of CPR and linear correlation obtained from global epileptic seizure and pre-seizure showed significantly larger cluster centroid distances for CPR as opposed to linear correlation, thereby demonstrating the efficacy of CPR. The headmap in the case of focal epilepsy clearly enables us to identify the focus of the epilepsy, which has diagnostic value.
Connectivity analysis of multichannel fMRI signals was performed using the CPR matrix and graph theoretic analysis. Adjacency matrices were obtained from the CPR matrices after thresholding them using statistical significance tests. Graph theoretic analysis based on communicability was performed to obtain community structures for the awake resting and anesthetic sedation states. Concurrent behavioral data showed memory impairment due to anesthesia. Given that previous studies have implicated the hippocampus in memory function, the CPR results, which place the hippocampus within the community in the awake state and outside it in the anesthetized state, demonstrate the biological plausibility of the approach. In contrast, results from linear correlation were less biologically plausible.
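Communicability, on which the community analysis above is based, can be computed from a symmetric adjacency matrix as a matrix exponential (it counts weighted walks of all lengths between node pairs). The sketch below uses a toy two-community graph; the thesis's thresholding and community-extraction steps on fMRI data are not reproduced.

```python
import numpy as np

def communicability(A):
    """Communicability matrix exp(A) of a symmetric adjacency matrix,
    computed via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by one bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

G = communicability(A)
# Within-community communicability exceeds cross-bridge communicability,
# which is the property community extraction exploits.
print(G[0, 1] > G[0, 4])
```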
In biological systems, highly synchronized and desynchronized states are of interest rather than moderately synchronized ones. However, CPR is approximately a monotonic function of synchronization and hence can assume values indicating moderate synchronization. In order to emphasize high synchronization/desynchronization and de-emphasize moderate synchronization, a new method, Correlation Synchronization Convergence Time (CSCT), is proposed. It is obtained using an iterative procedure that evaluates CPR for successive autocorrelations until CPR converges to a chosen threshold. CSCT was evaluated for 16 channel EEG data, and the corresponding contour plots and histograms show better discrimination between synchronized and desynchronized states compared to conventional CPR.
This thesis has demonstrated the efficacy of RP technique and associated measures in characterizing various classes of biomedical signals. The results obtained are corroborated by well known physiological facts, and they provide physiologically meaningful insights into the functioning of the underlying biological systems, with potential diagnostic value in healthcare.
http://hdl.handle.net/2005/2314
Title: Equalization Algorithms And Performance Analysis In Cyclic-Prefixed Single Carrier And Multicarrier Wireless Systems
Authors: Itankar, Yogendra Umesh
Abstract: The work reported in this thesis is divided into two parts.
In the first part, we report a closed-form bit error rate (BER) performance analysis of orthogonal frequency division multiple access (OFDMA) on the uplink in the presence of carrier frequency offsets (CFOs) and/or timing offsets (TOs) of other users with respect to a desired user. We derive BER expressions using probability density function (pdf) and characteristic function approaches, for a Rician faded multi-cluster multi-path channel model that is typical of indoor ultrawideband channels and underwater acoustic channels. Numerical and simulation results show that the BER expressions derived accurately quantify the performance degradation due to non-zero CFOs and TOs.
Ultrawideband channels in indoor/industrial environments and underwater acoustic channels are severely delay-spread channels, where the number of multipath components can be of the order of tens to hundreds. In the second part of the thesis, we report low complexity equalization algorithms for cyclic-prefixed single carrier (CPSC) systems that operate on such inter-symbol interference (ISI) channels characterized by large delay spreads. Both single-input single-output and multiple-input multiple-output (MIMO) systems are considered. For these systems, we propose low complexity graph based equalization carried out in the frequency domain. Because of the noise whitening effect that occurs for large frame sizes and delay spreads in frequency domain processing, improved performance compared to time domain processing is shown to be achieved. Since the graph based equalizer is a soft-input soft-output equalizer, iterative techniques (turbo equalization) between detection and decoding are shown to yield good coded BER performance at low complexities in convolutional and LDPC coded systems. We also study joint decoding of the LDPC code and equalization of MIMO-ISI channels using a joint factor graph.
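The thesis's equalizer is graph based; as a baseline reference, the simplest frequency-domain equalizer that the cyclic prefix enables is the one-tap MMSE equalizer, sketched below. The block length, channel taps and BPSK signaling are illustrative choices.

```python
import numpy as np

def cpsc_mmse_equalize(y, h, noise_var, nfft):
    """One-tap frequency-domain MMSE equalization for a cyclic-prefixed
    single-carrier block (CP already removed, so the channel acts as a
    circular convolution and is diagonalized by the DFT)."""
    Y = np.fft.fft(y, nfft)
    H = np.fft.fft(h, nfft)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)  # per-bin MMSE weight
    return np.fft.ifft(W * Y)

# Noiseless sanity check: equalizing the circular convolution of a BPSK
# block with a short channel recovers the transmitted symbols.
rng = np.random.default_rng(0)
x = 2.0 * rng.integers(0, 2, 64) - 1.0
h = np.array([1.0, 0.5, 0.2])
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64))  # circular convolution
xhat = cpsc_mmse_equalize(y, h, noise_var=0.0, nfft=64)
print(np.allclose(np.sign(xhat.real), x))
```

With noise_var = 0 this reduces to zero-forcing; the MMSE weight avoids noise enhancement at spectral nulls, and the graph-based equalizer of the thesis replaces this per-bin operation with message passing.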
http://hdl.handle.net/2005/2340
Title: Towards Development Of Low Cost Electrochemical Biosensor For Detecting Percentage Glycated Hemoglobin
Authors: Siva Rama Krishna, V
Abstract: There is an ever growing demand for low cost biosensors in medical diagnostics. A well known commercially successful example is the glucose biosensor, which is used to diagnose and monitor diabetes. These biosensors use electrochemical analysis (electroanalysis) as the transduction mechanism. Electroanalytical techniques involve application of an electrical stimulus to the chemical/biochemical system under consideration and measurement of the electrical response due to the oxidation and reduction reactions that occur because of the stimulus. They offer many advantages in terms of sensitivity, selectivity, cost effectiveness and compatibility with integration with electronics. Besides glucose, there are several biomolecules of significance for which electroanalysis can potentially be used to develop low cost, rapid, easy to use biosensors. One such biomolecule is Glycated Hemoglobin (GHb). It is a post-translational, non-enzymatic modification of hemoglobin with glucose and is a very good biomarker that indicates the average value of blood glucose over the past 120 days. It is always expressed as a percentage of the total hemoglobin present in blood. Monitoring diabetes based on the value of percentage glycated hemoglobin is advantageous as it gives an average value of glucose, unlike plasma glucose values, which vary a lot from day to day depending on the dietary habits and stress levels of the individual. This thesis is focused on the development of a low cost, easy to use, disposable sensor for measuring percentage glycated hemoglobin.
The first challenge in developing such a sensor is the isolation of hemoglobin. Unlike glucose, which is present in blood plasma (the liquid content of blood), hemoglobin resides inside red blood cells, also known as erythrocytes. To isolate hemoglobin, these cells have to be broken, or lysed. All the existing approaches rely on mixing blood with lysing reagents to lyse erythrocytes. Ideal biosensors should be devoid of liquid reagents. Keeping this in perspective, in this thesis, this challenge is addressed by developing two entirely buffer/reagentless techniques to lyse erythrocytes and isolate hemoglobin. In the first technique, cellulose acetate membranes are embedded with lysing reagents and used for the lysing application. In the second technique, commercially available nylon mesh nets are modified with lysing reagents to lyse erythrocytes and isolate hemoglobin. These membranes or mesh nets can be easily integrated on top of a disposable strip.
After isolating hemoglobin, the next challenge is to selectively detect glycated hemoglobin. Boronic acid conjugates are known to bind glycated hemoglobin. Using this principle, a new composite is synthesized to specifically detect glycated hemoglobin. The composite (GO-APBA) is the result of functionalization of Graphene Oxide (GO) with 3-aminophenylboronic acid (APBA). Detection of glycated hemoglobin is achieved by modifying screen printed electrode strips with the synthesized compound, thus taking a step towards achieving the objective.
Since glycated hemoglobin is always expressed as a percentage of total hemoglobin, the next challenge is to detect total hemoglobin. In this thesis, a low cost way of detecting hemoglobin is achieved by using GO-modified or surfactant-modified screen printed electrode strips. Furthermore, the potential interferences that blood plasma can cause in these measurements are eliminated with the help of permselective coatings.
Thus, using the technologies developed in this thesis, measurements of percentage glycated hemoglobin can potentially be made on handheld electronic devices akin to glucose meters, using just a drop of blood.
http://hdl.handle.net/2005/2452
Title: Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals
Authors: Sreenivasa Murthy, A
Abstract: For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), using which one decomposes a signal in terms of windowed Fourier bases. An advancement over STFT is the wavelet analysis in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach particularly in the context of speech is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the case of STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis (“short-time polynomial representation”). To emphasize the local nature of the modeling aspect, we refer to it as “local polynomial modeling (LPM).”
We pursue two main threads of research in this thesis: (i) Short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.
Improved iterative Wiener filtering for speech enhancement
A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation. The key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions assume stationary noise. However, in practical applications, noise is not stationary and hence updating the noise statistics becomes necessary. We present a new approach to perform reliable noise estimation based on spectral subtraction. We first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density. We further smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique to non-stationary noises; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.
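The noise-estimation step described above (estimate the signal spectrum, subtract it from the noisy spectrum, smooth the result) can be sketched as follows. The frame length and smoothing constant are illustrative, and the simple recursive smoothing here stands in for whatever smoothing the thesis actually employs.

```python
import numpy as np

def estimate_noise_psd(noisy, clean_est, nfft=256, alpha=0.9):
    """Sketch of spectral-subtraction noise estimation: subtract the
    estimated signal PSD from the noisy PSD frame by frame, floor the
    result at zero, and recursively smooth across frames."""
    n_frames = len(noisy) // nfft
    noise_psd = np.zeros(nfft)
    for k in range(n_frames):
        seg_noisy = noisy[k * nfft:(k + 1) * nfft]
        seg_clean = clean_est[k * nfft:(k + 1) * nfft]
        p_noisy = np.abs(np.fft.fft(seg_noisy)) ** 2 / nfft
        p_clean = np.abs(np.fft.fft(seg_clean)) ** 2 / nfft
        frame_noise = np.maximum(p_noisy - p_clean, 0.0)
        noise_psd = alpha * noise_psd + (1 - alpha) * frame_noise
    return noise_psd

# Sanity check: pure white noise with a zero signal estimate should
# yield a flat PSD estimate near the true noise variance.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 256 * 100)
psd = estimate_noise_psd(noise, np.zeros_like(noise))
print(psd.mean())
```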
Optimal local polynomial modeling and applications
We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (bias-variance trade-off), which we solve using a technique known as the intersection of confidence intervals (ICI). The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
Within the LPM framework, we address three problems: (i) Signal reconstruction from noisy uniform samples; (ii) Signal reconstruction from noisy nonuniform samples; and (iii) Classification of speech signals into voiced and unvoiced segments.
The generic signal model is
x(t_n) = s(t_n) + d(t_n),   0 ≤ n ≤ N − 1.
In problems (i) and (iii) above, t_n = nT (uniform sampling); in (ii) the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth; i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(t_n); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples.
We show that, in both cases, the bias and the variance take complementary general forms: the bias grows with the window length L through a term involving a function f, whereas the variance decays with L through a term involving a function g. The mean square error (MSE) is the sum of the squared bias and the variance. Here, L is the length of the window over which the polynomial fitting is performed; f is a function of s(t), typically comprising the higher-order derivatives of s(t), the order itself dependent on the order of the polynomial; and g is a function of the noise variance. It is clear that the bias and variance have complementary characteristics with respect to L. Directly optimizing the MSE would give a value of L that involves the functions f and g. The function g may be estimated, but f is not known since s(t) is unknown. Hence, it is not practical to compute the minimum MSE (MMSE) solution. Therefore, we obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally select the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered over the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.
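A minimal sketch of local polynomial fitting with ICI-style window selection follows. The window grid, confidence parameter and the crude standard-error proxy are illustrative assumptions, not the thesis's parameter choices; the sketch works directly on nonuniform samples, as in problem (ii).

```python
import numpy as np

def lpm_ici_estimate(x, t, t0, order=2, windows=(2.0, 4.0, 8.0, 16.0),
                     gamma=2.0, noise_std=0.1):
    """Local polynomial estimate of s(t0) with ICI window selection:
    widen the window while the gamma-sigma confidence intervals still
    intersect; stop (bias starts to dominate) when they no longer do,
    keeping the last admissible estimate."""
    lo, hi, best = -np.inf, np.inf, None
    for L in windows:
        mask = np.abs(t - t0) <= L / 2
        if mask.sum() <= order + 1:
            continue                        # too few samples to fit
        c = np.polyfit(t[mask] - t0, x[mask], order)
        est = np.polyval(c, 0.0)            # fitted value at t0
        half = gamma * noise_std / np.sqrt(mask.sum())  # crude stderr
        lo, hi = max(lo, est - half), min(hi, est + half)
        if lo > hi:
            break                           # intervals stop intersecting
        best = est
    return best

# Nonuniformly sampled smooth signal s(t) = t^2 with additive noise.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 10.0, 500))
x = t ** 2 + rng.normal(0.0, 0.1, 500)
print(lpm_ici_estimate(x, t, 5.0))  # close to s(5) = 25
```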
The next issue addressed is that of voiced/unvoiced segmentation of the speech signal. Speech segments show different spectral and temporal characteristics based on whether the segment is voiced or unvoiced. Most speech processing techniques process the two segments differently. The challenge lies in making detection techniques offer robust performance in the presence of noise. We propose a new technique for voiced/unvoiced classification by taking into account the fact that voiced segments have a certain degree of regularity, whereas unvoiced segments do not possess any smoothness. In order to capture the regularity in voiced regions, we employ the LPM. The key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
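The idea of testing LPM fit accuracy can be illustrated with the normalized residual energy of a polynomial fit, which is small for smooth (voiced-like) sub-frames and near one for noise-like frames. The statistic, polynomial order and frame sizes below are illustrative stand-ins, not the thesis's exact formulation.

```python
import numpy as np

def lpm_fit_statistic(frame, order=4):
    """Normalized residual energy of a local polynomial fit: small for
    smooth/regular (voiced-like) frames that the LPM captures well,
    near 1 for irregular (unvoiced/noise-like) frames."""
    n = np.arange(len(frame), dtype=float)
    resid = frame - np.polyval(np.polyfit(n, frame, order), n)
    return float(np.sum(resid ** 2) / np.sum(frame ** 2))

# A slowly varying sub-frame fits well; a noise frame does not.
rng = np.random.default_rng(3)
smooth = np.sin(2 * np.pi * np.arange(32) / 128.0)  # quarter cycle
noisy = rng.normal(0.0, 1.0, 32)
print(lpm_fit_statistic(smooth), lpm_fit_statistic(noisy))
```

Thresholding such a statistic per frame yields a V/UV decision; the hypothesis-testing formulation in the thesis makes the threshold choice principled.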