etd@IISc Collection:
http://hdl.handle.net/2005/19
2016-01-29T05:42:12Z
Analysis Of Multichannel And Multimodal Biomedical Signals Using Recurrence Plot Based Techniques
http://hdl.handle.net/2005/2491
Title: Analysis Of Multichannel And Multimodal Biomedical Signals Using Recurrence Plot Based Techniques
Authors: Rangaprakash, D
Abstract: For most naturally occurring signals, especially biomedical signals, the underlying physical process generating the signal is often not fully known, making it difficult to obtain a parametric model. Therefore, signal processing techniques are used to analyze the signal for non-parametrically characterizing the underlying system from which the signals are produced. Most real-life systems are nonlinear and time varying, which poses a challenge while characterizing them. Additionally, multiple sensors are used to extract signals from such systems, resulting in multichannel signals which are inherently coupled. In this thesis, we counter this challenge by using Recurrence Plot based techniques for characterizing biomedical systems such as the heart or brain, using signals such as heart rate variability (HRV), electroencephalogram (EEG) or functional magnetic resonance imaging (fMRI), respectively, extracted from them.
In time series analysis, it is well known that a system can be represented by a trajectory in an N-dimensional state space, which completely represents an instance of the system behavior. Such system characterization has been done using dynamical invariants such as the correlation dimension, Lyapunov exponent, etc. Takens has shown that when the state variables of the underlying system are not known, one can obtain a trajectory in ‘phase space’ using only the signals obtained from such a system. The phase space trajectory is topologically equivalent to the state space trajectory. This enables us to characterize the system behavior using only the signals sensed from it. However, estimates of the correlation dimension, Lyapunov exponent, etc., are vulnerable to non-stationarities in the signal and require a large number of sample points for accurate computation, both of which are significant concerns in the case of biomedical signals. Alternatively, a technique called Recurrence Plots (RP) has been proposed, which addresses these concerns, apart from providing additional insights. Measures to characterize RPs of single and two channel data are called Recurrence Quantification Analysis (RQA) and cross RQA (CRQA), respectively. These methods have been applied with a good measure of success in diverse areas. However, they have not been studied extensively in the context of experimental biomedical signals, especially multichannel data.
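The RP construction summarized above can be sketched in a few lines. The following is a minimal illustration (not the thesis's own implementation), assuming a Takens delay embedding with a chosen dimension and delay, a Euclidean norm and a fixed recurrence threshold:

```python
import numpy as np

def embed(x, dim, tau):
    """Takens delay embedding: map a 1-D signal to points in 'phase space'."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=3, tau=1, eps=0.2):
    """Binary recurrence matrix: R[i, j] = 1 iff embedded points i and j
    lie within distance eps of each other."""
    y = embed(x, dim, tau)
    d = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def recurrence_rate(R):
    """RQA measure: fraction of recurrent points in the plot."""
    return R.mean()
```

For a periodic signal, the recurrence matrix shows the characteristic diagonal-line structure that the RQA measures (e.g., Determinism) quantify.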
In this thesis, the RP technique and its associated measures are briefly reviewed. Using the computational tools developed for this thesis, the RP technique has been applied to select single-channel, multichannel and multimodal (i.e., multiple channels derived from different modalities) biomedical signals. Connectivity analysis is demonstrated as a post-processing of RP analysis on multichannel signals such as EEG and fMRI. Finally, a novel metric, based on the modification of a CRQA measure, is proposed, which shows improved results.
For the case of single channel signals, we have considered a large database of HRV signals from 112 subjects, both normal and abnormal (anxiety disorder and depression disorder), recorded in both supine and standing positions. Existing RQA measures, Recurrence Rate and Determinism, were used to distinguish between normal and abnormal subjects with an accuracy of 58.93%. A new measure, MLV, has been introduced, using which a classification accuracy of 98.2% is obtained.
Correlation between probabilities of recurrence (CPR) is a CRQA measure used to characterize phase synchronization between two signals. In this work, we demonstrate its utility with application to multimodal and multichannel biomedical signals. First, for the multimodal case, we have computed running CPR (rCPR), a modification proposed by us which allows dynamic estimation of CPR as a function of time, on multimodal cardiac signals (electrocardiogram and arterial blood pressure) and demonstrated that the method can clearly detect abnormalities (premature ventricular contractions); this has potential applications in cardiac care, such as assisted automated diagnosis. Second, for the multichannel case, we have used 16-channel EEG signals recorded under various physiological states, such as (i) global epileptic seizure and pre-seizure and (ii) focal epilepsy. CPR was computed pair-wise between the channels and a CPR matrix of all pairs was formed. A contour plot of the CPR matrix was obtained to illustrate synchronization. Statistical analysis of the CPR matrix for 16 subjects with global epilepsy showed clear differences between pre-seizure and seizure conditions, and a linear discriminant classifier was used to distinguish between the two conditions with 100% accuracy.
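As a rough sketch of the CPR idea — the published definition correlates the two signals' generalized autocorrelations, i.e., their probability-of-recurrence curves — the following uses a scalar-signal simplification; the threshold and lag values are our illustrative assumptions, not the thesis's settings:

```python
import numpy as np

def prob_recurrence(x, eps, max_lag):
    """Generalized autocorrelation p(tau): probability that the trajectory
    returns to an eps-neighbourhood of itself after lag tau
    (scalar signal used here for brevity; embedding is omitted)."""
    p = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        p[tau - 1] = np.mean(np.abs(x[tau:] - x[:-tau]) <= eps)
    return p

def cpr(x1, x2, eps1, eps2, max_lag=100):
    """Correlation between probabilities of recurrence: Pearson correlation
    of the two p(tau) curves; values near 1 indicate phase synchronization."""
    p1 = prob_recurrence(x1, eps1, max_lag)
    p2 = prob_recurrence(x2, eps2, max_lag)
    return np.corrcoef(p1, p2)[0, 1]
```

Two signals locked to the same rhythm produce nearly identical p(tau) curves (hence CPR near 1) even when their phases are offset, which is what makes the measure suitable for detecting phase synchronization.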
Connectivity analysis of multichannel EEG signals was performed by post-processing the CPR matrix to understand global network-level characterization of the brain. Brain connectivity using the thresholded CPR matrix of multichannel EEG signals showed clear differences in the number and pattern of connections in the brain connectivity graph between epileptic seizure and pre-seizure states. The corresponding brain headmaps provide meaningful insights about synchronization in the brain in those states. K-means clustering of connectivity parameters of CPR and linear correlation obtained from global epileptic seizure and pre-seizure showed significantly larger cluster centroid distances for CPR as opposed to linear correlation, thereby demonstrating the efficacy of CPR. The headmap in the case of focal epilepsy clearly enables us to identify the focus of the epilepsy, which is of diagnostic value.
Connectivity analysis of multichannel fMRI signals was performed using the CPR matrix and graph theoretic analysis. An adjacency matrix was obtained from the CPR matrices after thresholding them using statistical significance tests. Graph theoretic analysis based on communicability was performed to obtain community structures for awake resting and anesthetic sedation states. Concurrent behavioral data showed memory impairment due to anesthesia. Given that previous studies have implicated the hippocampus in memory function, the CPR results, showing the hippocampus within the community in the awake state and outside it in the anesthesia state, demonstrated the biological plausibility of the CPR results. On the other hand, results from linear correlation were less biologically plausible.
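The common post-processing step in the two analyses above — thresholding a pairwise-synchronization matrix into a binary connectivity graph — can be sketched as follows (the fixed threshold and the toy matrix are illustrative; the thesis selects thresholds via statistical significance tests):

```python
import numpy as np

def connectivity_graph(sync_matrix, threshold):
    """Threshold a symmetric pairwise-synchronization matrix (e.g. a CPR
    matrix) into a binary adjacency matrix, and list the resulting edges."""
    A = (sync_matrix >= threshold).astype(int)
    np.fill_diagonal(A, 0)  # no self-loops
    edges = [(i, j) for i in range(len(A))
             for j in range(i + 1, len(A)) if A[i, j]]
    return A, edges
```

The adjacency matrix is the input to the subsequent graph-theoretic analysis (headmaps, community detection), and the edge list directly gives the number and pattern of connections compared across brain states.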
In biological systems, highly synchronized and desynchronized states are of interest rather than moderately synchronized ones. However, CPR is approximately a monotonic function of synchronization and hence can assume values which indicate moderate synchronization. In order to emphasize high synchronization/desynchronization and de-emphasize moderate synchronization, a new method, Correlation Synchronization Convergence Time (CSCT), is proposed. It is obtained using an iterative procedure involving the evaluation of CPR for successive autocorrelations until CPR converges to a chosen threshold. CSCT was evaluated for 16-channel EEG data and corresponding contour plots and histograms were obtained, which show better discrimination between synchronized and desynchronized states compared to the conventional CPR.
This thesis has demonstrated the efficacy of the RP technique and associated measures in characterizing various classes of biomedical signals. The results obtained are corroborated by well-known physiological facts, and they provide physiologically meaningful insights into the functioning of the underlying biological systems, with potential diagnostic value in healthcare.
2015-11-30T18:30:00Z
Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals
http://hdl.handle.net/2005/2452
Title: Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals
Authors: Sreenivasa Murthy, A
Abstract: For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), using which one decomposes a signal in terms of windowed Fourier bases. An advancement over the STFT is wavelet analysis, in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach, particularly in the context of speech, is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the case of the STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis (“short-time polynomial representation”). To emphasize the local nature of the modeling aspect, we refer to it as “local polynomial modeling (LPM).”
We pursue two main threads of research in this thesis: (i) Short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.
Improved iterative Wiener filtering for speech enhancement
A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation. The key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions assume stationary noise. However, in practical applications, noise is not stationary and hence updating the noise statistics becomes necessary. We present a new approach to perform reliable noise estimation based on spectral subtraction. We first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density. We further smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique for non-stationary noises; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.
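A simplified, frame-wise sketch of the noise-estimation idea described above — subtract the estimated signal spectrum from the noisy spectrum, floor at zero, then smooth across frames for reliability. The frame length, smoothing constant and rectangular framing are our illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

def estimate_noise_psd(noisy, signal_est, frame_len=256, alpha=0.9):
    """Rough noise-PSD estimate by signal subtraction: per frame, subtract
    the estimated signal's power spectrum from the noisy power spectrum,
    floor at zero, and recursively smooth across frames."""
    n_frames = len(noisy) // frame_len
    noise_psd = np.zeros(frame_len)
    for k in range(n_frames):
        seg = slice(k * frame_len, (k + 1) * frame_len)
        px = np.abs(np.fft.fft(noisy[seg])) ** 2
        ps = np.abs(np.fft.fft(signal_est[seg])) ** 2
        pd = np.maximum(px - ps, 0.0)                      # subtraction, floored
        noise_psd = alpha * noise_psd + (1 - alpha) * pd   # temporal smoothing
    return noise_psd
```

Because the estimate is updated every frame, it can track slowly varying (non-stationary) noise statistics, which is the motivation stated above.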
Optimal local polynomial modeling and applications
We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
Within the LPM framework, we address three problems: (i) Signal reconstruction from noisy uniform samples; (ii) Signal reconstruction from noisy nonuniform samples; and (iii) Classification of speech signals into voiced and unvoiced segments.
The generic signal model is
x(t_n) = s(t_n) + d(t_n), 0 ≤ n ≤ N - 1.
In problems (i) and (iii) above, t_n = nT (uniform sampling); in (ii), the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth; i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(t_n); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples.
We show that, in both cases, the bias and variance take general forms governed by three quantities: L, the length of the window over which the polynomial fitting is performed; f, a function of s(t), which typically comprises the higher-order derivatives of s(t), the order itself dependent on the order of the polynomial; and g, a function of the noise variance. The mean square error (MSE) is the sum of the squared bias and the variance. It is clear that the bias and variance have complementary characteristics with respect to L. Directly optimizing for the MSE would give a value of L which involves the functions f and g. The function g may be estimated, but f is not known since s(t) is unknown. Hence, it is not practical to compute the minimum MSE (MMSE) solution. Therefore, we obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally select the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered over the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.
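To make the estimator concrete, here is a bare-bones sketch of local polynomial fitting with a fixed window; the ICI-based window selection that the thesis develops is deliberately omitted, and the window length, model order and test signal are illustrative assumptions:

```python
import numpy as np

def lpm_estimate(t_samples, x_samples, t_eval, order=2, half_width=0.1):
    """Local polynomial model: at each evaluation instant, least-squares fit
    a low-order polynomial to the noisy samples inside a window around that
    instant, and read off the fitted value. Works for nonuniform samples."""
    est = np.empty(len(t_eval))
    for i, t0 in enumerate(t_eval):
        mask = np.abs(t_samples - t0) <= half_width
        coeffs = np.polyfit(t_samples[mask] - t0, x_samples[mask], order)
        est[i] = coeffs[-1]  # constant term = fitted value at t0 (window centred)
    return est
```

Shrinking `half_width` reduces the bias (the polynomial tracks local variations better) but raises the variance (fewer samples average the noise), which is exactly the trade-off the ICI technique resolves.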
The next issue addressed is that of voiced/unvoiced segmentation of the speech signal. Speech segments show different spectral and temporal characteristics based on whether the segment is voiced or unvoiced. Most speech processing techniques process the two segments differently. The challenge lies in making detection techniques offer robust performance in the presence of noise. We propose a new technique for voiced/unvoiced classification by taking into account the fact that voiced segments have a certain degree of regularity, and that unvoiced segments do not possess any smoothness. In order to capture the regularity in voiced regions, we employ the LPM. The key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on the LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
2015-07-21T18:30:00Z
Design And Development Of Solutions To Some Of The Networking Problems In Hybrid Wireless Superstore Networks
http://hdl.handle.net/2005/2430
Title: Design And Development Of Solutions To Some Of The Networking Problems In Hybrid Wireless Superstore Networks
Authors: Shankaraiah, *
Abstract: Hybrid Wireless Networks (HWNs) are composite networks comprising different technologies, possibly with overlapping coverage. Users with multimode terminals in HWNs are able to initiate connectivity that best suits their attributes and the requirements of their applications. There are many complexities in hybrid wireless networks due to changing data rates, frequencies of operation, resource availability and QoS, as well as complexities in terms of mobility management across different technologies.
A superstore is a very large retail store that serves as a one-stop shopping destination by offering a wide variety of goods that range from groceries to appliances. It provides all types of services, such as banking, a photo center, catering, etc. Good examples of superstores are Tesco (hypermarkets, United Kingdom), Carrefour (hypermarkets, France), etc.
Generally, a mobile customer communicates with the superstore server using a transaction. A transaction corresponds to a finite number of interactive processes between the customer and the superstore server. A few examples of superstore transactions are product browsing, technical-details inquiry, financial transactions, billing, etc.
This thesis aims to design and develop the following schemes to solve some of the above indicated problems of a hybrid wireless superstore network:
1. Transaction-based bandwidth management.
2. Transaction-based resource management.
3. Transaction-based Quality of Service management.
4. Transaction-based topology management.
We, herewith, present these developed schemes, the simulations carried out and the results obtained, in brief.
Transaction-based bandwidth management
The designed Transaction-Based Bandwidth Management Scheme (TB-BMS) operates at the application level and intelligently allocates bandwidth by monitoring the profit-oriented sensitivity variations in the transactions, which are linked with various profit profiles created over the type, time, and history of transactions. The scheme mainly consists of transaction classifier, bandwidth determination and transaction scheduling modules. We have deployed this scheme over the downlink of HWNs, since the uplink carries simple queries from customers to the superstore server. The scheme uses a transaction scheduling algorithm, which decides how to schedule an outgoing transaction based on its priority with efficient use of the available bandwidth.
As we observe, not all superstore transactions have the same profit-sensitive information, data size and operation type. Therefore, we classify superstore transactions into four levels based on profit, data size, operation type and the degree of severity of the information that they are handling. The aim of the transaction classification module is to find the transaction sensitivity level (TSL) for a given transaction.
The bandwidth determination module estimates the bandwidth requirement for each of the transactions. The transaction scheduling module schedules the transactions based on the availability of bandwidth as per the TSL of the transaction. The scheme schedules the highest priority transactions first, keeping the lowest priority transactions pending. If all the highest priority transactions are over, it continues with the next priority level transactions, and so on, in every slot. We have simulated the hybrid wireless superstore network environment with WiFi and GSM technologies. We simulated four TSL levels with different bandwidths. The simulation under consideration uses different transactions with different bandwidth requirements.
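The slot-wise priority scheduling described above might look like the following sketch (the tuple layout and heap-based ordering are our choices for illustration, not the thesis's implementation):

```python
import heapq

def schedule_slot(pending, capacity):
    """One downlink slot of the TB-BMS idea: serve transactions in
    TSL-priority order while bandwidth remains; the rest stay pending.
    pending: list of (tsl_priority, tx_id, bw_needed), lower number =
    higher priority; capacity: bandwidth available in this slot."""
    heapq.heapify(pending)
    served, remaining = [], []
    while pending:
        prio, tx, bw = heapq.heappop(pending)
        if bw <= capacity:
            capacity -= bw       # allocate bandwidth to this transaction
            served.append(tx)
        else:
            remaining.append((prio, tx, bw))  # carried over to the next slot
    return served, remaining
```

Serving strictly by TSL while still fitting smaller lower-priority transactions into leftover capacity is what lets the scheme accommodate more transactions at peak time.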
The performance results show that the proposed scheme considerably improves bandwidth utilization by reducing transaction blocking and accommodating more essential transactions at the peak time of the business.
Transaction-based resource management
In the next work, we have proposed the transaction-based resource management scheme (TB-RMS) to allocate the required resources among the various customer services based on the priority of transactions. The scheme mainly consists of transaction classifier, resource estimation and transaction scheduling modules. This scheme also uses a downlink transaction scheduling algorithm, which decides how to schedule an outgoing transaction based on its priority with efficient use of the available resources.
Transaction-based resource management is similar to the TB-BMS scheme, except that the scheme estimates resources such as buffer, bandwidth and processing time for each transaction, rather than bandwidth alone.
The performance results indicate that the proposed TB-RMS scheme considerably improves the resource utilization by reducing transaction blocking and accommodating more essential transactions at the peak time.
Transaction-based Quality of Service management
In the third segment, we have proposed a policy-based transaction-aware QoS management architecture for downlink QoS management. We derive a policy for the estimation of QoS parameters, such as delay, jitter, bandwidth and transaction loss, for every transaction before scheduling on the downlink. We use Policy-based Transaction QoS Management (PTQM) to achieve transaction-based QoS management. Policies are rules that govern transaction behavior, usually implemented in the form of if(condition) then(action) rules.
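A minimal illustration of the if(condition) then(action) policy form just described; the policy conditions, field names and thresholds below are invented for the sketch, not taken from the thesis:

```python
def apply_policies(transaction, policies):
    """Minimal if(condition)-then(action) evaluation: every policy whose
    condition matches the transaction gets to adjust its QoS parameters."""
    for condition, action in policies:
        if condition(transaction):
            action(transaction)
    return transaction

# Illustrative policies (hypothetical fields and thresholds):
policies = [
    # financial transactions get tight delay bounds and top priority
    (lambda t: t["type"] == "financial",
     lambda t: t.update(max_delay_ms=50, priority=1)),
    # large transfers get a minimum-bandwidth guarantee
    (lambda t: t["size_kb"] > 500,
     lambda t: t.update(min_bw_kbps=256)),
]
```

Keeping the rules as data rather than code is what makes such an architecture centrally manageable: the MPDF can push updated rule sets to the per-network policy controllers without changing the scheduler.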
The QoS management scheme is fully centralized, and is based on the ideas of client-server interaction. Each mobile terminal is connected to a server via WiFi or GSM. The master policy controller (MPDF) connects to the policy controller of the WiFi network (WPDF) and the GSM policy controller (PDF).
We have considered a simulation environment similar to that of the earlier schemes. The results show that policy-based transaction QoS management improves performance and utilizes network resources efficiently at the peak time of the superstore business.
Transactions-Aware Topology Management(TATM)
Finally, we have proposed a topology management scheme for the hybrid wireless superstore network. Wireless topology management manages the activities and features of a wireless network connection. It may control the process of selecting an available access point, authenticating and associating with it, and setting up other parameters of the wireless connection.
The proposed topology management scheme consists of the transaction classifier, resource estimation module, network availability and status module, and transaction-aware topology management module. The goal of the TATM scheme is to select the best network among the available networks to provide the transaction response (or execution).
We have simulated a hybrid wireless superstore network with five WiFi and two GSM networks. The performance results indicate that the transaction-based topology management scheme utilizes the available resources efficiently and distributes transaction loads evenly in both WiFi and GSM networks based on their capacity.
2015-01-13T18:30:00Z
Resource Management In Cellular And Mobile Opportunistic Networks
http://hdl.handle.net/2005/2424
Title: Resource Management In Cellular And Mobile Opportunistic Networks
Authors: Singh, Chandramani Kishore
Abstract: In this thesis we study several resource management problems in two classes of wireless networks. The thesis is in two parts, the first being concerned with game theoretic approaches for cellular networks, and the second with control theoretic approaches for mobile opportunistic networks.
In Part I of the thesis, we first investigate optimal association and power control for the uplink of multichannel multicell cellular networks, in which each channel is used by exactly one base station (BS) (i.e., cell). Users have minimum signal to interference ratio (SINR) requirements and associate with BSs where the least transmission powers are required. We formulate the problem as a non-cooperative game among users. We propose a distributed association and power update algorithm, and show its convergence to a Nash equilibrium of the game. We consider network models with discrete mobiles (yielding an atomic congestion game), as well as a continuum of mobiles (yielding a population game). We find that the equilibria need not be Pareto efficient, nor need they be system optimal. To address the lack of system optimality, we propose pricing mechanisms. We show that these prices weakly enforce system optimality in general, and strongly enforce it in special settings. We also show that these mechanisms can be implemented in a distributed fashion.
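As a hedged illustration of the kind of distributed best-response dynamics involved, the following is the classical Foschini-Miljanic-style power update for fixed associations — each user repeatedly scales its power to exactly meet its SINR target given the current interference. It is not the thesis's joint association-and-power algorithm, only the power-control ingredient:

```python
import numpy as np

def power_update(gains, targets, noise=1e-3, iters=200):
    """Distributed SINR-target power control: gains[i, j] is the channel
    gain from user j to user i's BS; targets[i] is user i's minimum SINR.
    Each user's best response scales its power so its SINR meets its
    target; for feasible targets the iteration converges to a fixed point."""
    n = len(targets)
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            interf = noise + sum(gains[i, j] * p[j] for j in range(n) if j != i)
            sinr_i = gains[i, i] * p[i] / interf
            p[i] *= targets[i] / sinr_i   # best response: exactly meet target
    return p
```

Each update uses only locally measurable quantities (the user's own SINR), which is what makes a distributed implementation of such algorithms possible.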
Next, we consider the hierarchical problems of user association and BS placement, where BSs may belong to the same (or cooperating) or to competing service providers. Users transmit with constant power, and associate with base stations that yield better SINRs. We formulate the association problem as a game among users; it determines the cell corresponding to each BS. Some intriguing observations we report are: (i) displacing a BS a little in one direction may result in a displacement of the boundary of the corresponding cell in the opposite direction; (ii) a cell corresponding to a BS may be the union of disconnected sub-cells. We then study the problem of the placement of BSs so as to maximize service providers’ revenues. The service providers need to take into account the mobiles’ behavior that will be induced by the placement decisions. We consider the cases of a single frequency band and disjoint frequency bands of operation. We also consider networks in which BSs employ successive interference cancellation (SIC) decoding. We observe that the BS locations are closer to each other in the competitive case than in the cooperative case, in all scenarios considered.
Finally, we study cooperation among cellular service providers. We consider networks in which communications involving different BSs do not interfere. If service providers jointly deploy and pool their resources, such as spectrum and BSs, and agree to serve each others’ customers, their aggregate payoff substantially increases. The potential of such cooperation can, however, be realized only if the service providers intelligently determine who they would cooperate with, how they would deploy and share their resources, and how they would share the aggregate payoff. We first assume that the service providers can arbitrarily share the aggregate payoff. A rational basis for payoff sharing is imperative for the stability of the coalitions. We study cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition of channels, and deployment and allocation of BSs to customers, is the solution of a concave or an integer optimization problem. We then show that the grand coalition is stable, i.e., if all the service providers cooperate, there is an operating point offering each service provider a share that eliminates the possibility of a subset of service providers splitting from the grand coalition; this operating point also maximizes the service providers’ aggregate payoff. These stabilizing payoff shares are computed by solving the dual of the above optimization problem. Moreover, the optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time using distributed computations and limited exchange of confidential information among the service providers. We then extend the analysis to the scenario where service providers may not be able to share their payoffs. We now model cooperation as a nontransferable payoff coalitional game.
We again show that there exists a cooperation strategy that leaves no incentive for any subset of service providers to split from the grand coalition. To compute this cooperation strategy and the corresponding payoffs, we relate this game and its core to an exchange market and its equilibrium. Finally, we extend the formulations and the results to the case when customers are also decision makers in coalition formation.
In Part II of this thesis, we consider the problem of optimal message forwarding in mobile opportunistic wireless networks. A message originates at a node (source), and has to be delivered to another node (destination). In the network, there are several other nodes that can assist in relaying the message at the expense of additional transmission energies. We study the trade-off between delivery delay and energy consumption. First, we consider mobile opportunistic networks employing two-hop relaying. Because of the intermittent connectivity, the source may not have perfect knowledge of the delivery status at every instant. We formulate the problem as a stochastic control problem with partial information, and study structural properties of the optimal policy. We also propose a simple suboptimal policy. We then compare the performance of the suboptimal policy against that of the optimal control with perfect information. These are bounds on the performance of the proposed policy with partial information. We also discuss a few other related open loop policies.
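The delay side of the delay-energy trade-off in two-hop relaying can be illustrated with a small Monte Carlo sketch; exponential inter-meeting times are a standard modeling assumption in this literature, and the specific rate and relay budget below are illustrative, not from the thesis:

```python
import numpy as np

def two_hop_delay(n_relays, lam=1.0, n_runs=2000, rng=None):
    """Monte Carlo sketch of two-hop relaying: the source hands copies to
    relays it meets (exponential inter-meeting times, rate lam); the message
    is delivered when the source or any relay carrying a copy meets the
    destination. Returns the mean delivery delay."""
    rng = np.random.default_rng(rng)
    delays = np.empty(n_runs)
    for r in range(n_runs):
        # times at which the source meets (and copies to) each relay
        copy_times = rng.exponential(1 / lam, n_relays)
        # each carrier then meets the destination after an exponential time
        src_delivery = rng.exponential(1 / lam)
        if n_relays:
            relay_delivery = copy_times + rng.exponential(1 / lam, n_relays)
            delays[r] = min(src_delivery, relay_delivery.min())
        else:
            delays[r] = src_delivery
    return delays.mean()
```

Allowing more relays to carry copies shortens the delivery delay but costs more transmission energy (one extra transmission per copy), which is precisely the trade-off the optimal forwarding policy controls.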
Finally, we investigate the case where a message has to be delivered to several destinations, but we are concerned with delay until a certain fraction of them receive the message. The network employs epidemic relaying. We first assume that, at every instant, all the nodes know the number of relays carrying the packet and the number of destinations that have received the packet. We formulate the problem as a controlled continuous time Markov chain, and derive the optimal forwarding policy. As observed earlier, the intermittent connectivity in the network implies that the nodes may not have the required perfect knowledge of the system state. To address this issue, we then obtain an ODE (i.e., a deterministic fluid) approximation for the optimally controlled Markov chain. This fluid approximation also yields an asymptotically optimal deterministic policy. We evaluate the performance of this policy over finite networks, and demonstrate that this policy performs close to the optimal closed loop policy. We also briefly discuss the case where message forwarding is accomplished via two-hop relaying.
2015-01-06T18:30:00Z