etd@IISc Collection: http://hdl.handle.net/2005/19

Title: Equalization Algorithms And Performance Analysis In Cyclic-Prefixed Single Carrier And Multicarrier Wireless Systems
URI: http://hdl.handle.net/2005/2314
Authors: Itankar, Yogendra Umesh
Abstract: The work reported in this thesis is divided into two parts.
In the first part, we report a closed-form bit error rate (BER) performance analysis of orthogonal frequency division multiple access (OFDMA) on the uplink in the presence of carrier frequency offsets (CFOs) and/or timing offsets (TOs) of other users with respect to a desired user. We derive BER expressions using probability density function (pdf) and characteristic function approaches, for a Rician faded multi-cluster multi-path channel model that is typical of indoor ultrawideband channels and underwater acoustic channels. Numerical and simulation results show that the BER expressions derived accurately quantify the performance degradation due to non-zero CFOs and TOs.
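The degradation mechanism analysed above can be illustrated numerically: a nonzero normalized CFO rotates the time-domain samples and destroys subcarrier orthogonality, leaking energy between subcarriers. The sketch below is an illustrative single-user OFDM example, not the thesis's multi-user uplink model or its closed-form BER expressions; the subcarrier count and CFO values are assumptions.

```python
import numpy as np

def interference_power(eps, n_sub=64, seed=0):
    """Average inter-carrier interference power per subcarrier caused
    by a normalized carrier frequency offset `eps` in an OFDM symbol."""
    rng = np.random.default_rng(seed)
    # Random QPSK symbols on the subcarriers (unit magnitude).
    bits = rng.integers(0, 2, (2, n_sub))
    X = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
    x = np.fft.ifft(X) * np.sqrt(n_sub)           # unit-power time signal
    n = np.arange(n_sub)
    y = x * np.exp(2j * np.pi * eps * n / n_sub)  # CFO phase rotation
    Y = np.fft.fft(y) / np.sqrt(n_sub)
    # The desired component is the (attenuated, rotated) transmitted
    # vector; everything orthogonal to it is inter-carrier interference.
    gain = np.vdot(X, Y) / np.vdot(X, X)
    return float(np.mean(np.abs(Y - gain * X) ** 2))
```

With zero offset the interference power is numerically zero, and it grows quickly with the offset, which is the effect the derived BER expressions quantify.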
Ultrawideband channels in indoor/industrial environments and underwater acoustic channels are severely delay-spread channels, where the number of multipath components can be of the order of tens to hundreds. In the second part of the thesis, we report low-complexity equalization algorithms for cyclic-prefixed single carrier (CPSC) systems that operate on such inter-symbol interference (ISI) channels characterized by large delay spreads. Both single-input single-output and multiple-input multiple-output (MIMO) systems are considered. For these systems, we propose a low-complexity graph-based equalization carried out in the frequency domain. Because of the noise-whitening effect that arises for large frame sizes and delay spreads in frequency-domain processing, improved performance compared to time-domain processing is shown to be achieved. Since the graph-based equalizer is a soft-input soft-output equalizer, iterative techniques (turbo equalization) between detection and decoding are shown to yield good coded BER performance at low complexity in convolutional and LDPC coded systems. We also study joint decoding of LDPC codes and equalization of MIMO-ISI channels using a joint factor graph.

Title: Towards Development Of Low Cost Electrochemical Biosensor For Detecting Percentage Glycated Hemoglobin
URI: http://hdl.handle.net/2005/2340
Authors: Siva Rama Krishna, V
Abstract: There is an ever-growing demand for low-cost biosensors in medical diagnostics. A well-known, commercially successful example is the glucose biosensor, used to diagnose and monitor diabetes. These biosensors use electrochemical analysis (electroanalysis) as the transduction mechanism. Electroanalytical techniques involve applying an electrical stimulus to the chemical/biochemical system under consideration and measuring the electrical response due to the oxidation and reduction reactions that occur because of the stimulus. They offer many advantages in terms of sensitivity, selectivity, cost effectiveness and compatibility with integration into electronics. Besides glucose, there are several biomolecules of significance for which electroanalysis can potentially be used to develop low-cost, rapid, easy-to-use biosensors. One such biomolecule is Glycated Hemoglobin (GHb). It is a post-translational, non-enzymatic modification of hemoglobin with glucose and is a very good biomarker that indicates the average value of blood glucose over the past 120 days. It is always expressed as a percentage of the total hemoglobin present in blood. Monitoring diabetes based on the value of percentage Glycated hemoglobin is advantageous as it gives an average value of glucose, unlike plasma glucose values, which vary considerably from day to day depending on the dietary habits and stress levels of the individual. This thesis is focused on the development of a low-cost, easy-to-use, disposable sensor for measuring percentage Glycated hemoglobin.
The first challenge in developing such a sensor is the isolation of hemoglobin. Unlike glucose, which is present in blood plasma (the liquid content of blood), hemoglobin resides inside red blood cells, also known as erythrocytes. To isolate hemoglobin, these cells have to be broken open, or lysed. All existing approaches rely on mixing blood with lysing reagents to lyse erythrocytes. Ideal biosensors should be devoid of liquid reagents. Keeping this in perspective, this thesis addresses the challenge by developing two entirely buffer/reagent-free techniques to lyse erythrocytes and isolate hemoglobin. In the first technique, cellulose acetate membranes are embedded with lysing reagents and used for the lysing application. In the second technique, commercially available nylon mesh nets are modified with lysing reagents to lyse erythrocytes and isolate hemoglobin. These membranes or mesh nets can be easily integrated on top of a disposable strip.
After isolating hemoglobin, the next challenge is to selectively detect Glycated hemoglobin. Boronic acid conjugates are known to bind Glycated hemoglobin. Using this principle, a new composite is synthesized to specifically detect Glycated hemoglobin. The composite (GO-APBA) is the result of functionalization of Graphene Oxide (GO) with 3-aminophenylboronic acid (APBA). Detection of Glycated hemoglobin is achieved by modifying screen-printed electrode strips with the synthesized compound, thus taking a step towards achieving the objective.
Since Glycated hemoglobin is always expressed as a percentage of hemoglobin, the next challenge is to detect total hemoglobin. In this thesis, a low-cost way of detecting hemoglobin is achieved by using GO-modified or surfactant-modified screen-printed electrode strips. Furthermore, the potential interference that blood plasma can cause in these measurements is eliminated with the help of permselective coatings.
Thus, using the technologies developed in this thesis, measurements of percentage Glycated hemoglobin can potentially be made on handheld electronic devices akin to glucose meters, using just a drop of blood.

Title: Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals
URI: http://hdl.handle.net/2005/2452
Authors: Sreenivasa Murthy, A
Abstract: For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), using which one decomposes a signal in terms of windowed Fourier bases. An advancement over STFT is the wavelet analysis in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach particularly in the context of speech is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the case of STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis (“short-time polynomial representation”). To emphasize the local nature of the modeling aspect, we refer to it as “local polynomial modeling (LPM).”
We pursue two main threads of research in this thesis: (i) Short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.
Improved iterative Wiener filtering for speech enhancement
A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation. The key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions assume stationary noise. However, in practical applications, noise is not stationary, and hence updating the noise statistics becomes necessary. We present a new approach to perform reliable noise estimation based on spectral subtraction. We first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density. We further smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique to nonstationary noises; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.
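The noise-estimation step described above (estimate the signal spectrum, subtract it from the noisy spectrum, smooth the result) can be sketched as follows. This is an illustrative NumPy implementation, not the thesis's exact procedure; the frame length and smoothing factor are assumptions.

```python
import numpy as np

def estimate_noise_psd(noisy, signal_est, frame_len=256, alpha=0.9):
    """Estimate the noise PSD by subtracting an estimated signal
    spectrum from the noisy spectrum, then smoothing over frames."""
    n_frames = len(noisy) // frame_len
    noise_psd = np.zeros(frame_len)
    for k in range(n_frames):
        seg = slice(k * frame_len, (k + 1) * frame_len)
        noisy_power = np.abs(np.fft.fft(noisy[seg])) ** 2
        signal_power = np.abs(np.fft.fft(signal_est[seg])) ** 2
        # Signal subtraction; floor at zero to avoid negative power.
        frame_noise = np.maximum(noisy_power - signal_power, 0.0)
        # Recursive smoothing across frames for a reliable estimate.
        noise_psd = alpha * noise_psd + (1 - alpha) * frame_noise if k else frame_noise
    return noise_psd
```

Because the estimate is updated frame by frame, it can track noise whose statistics change slowly over time, which is the nonstationary setting targeted above.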
Optimal local polynomial modeling and applications
We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
Within the LPM framework, we address three problems: (i) Signal reconstruction from noisy uniform samples; (ii) Signal reconstruction from noisy nonuniform samples; and (iii) Classification of speech signals into voiced and unvoiced segments.
The generic signal model is
x(t_n) = s(t_n) + d(t_n),  0 ≤ n ≤ N − 1.
In problems (i) and (iii) above, t_n = nT (uniform sampling); in (ii), the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth; i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(t_n); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples.
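As a concrete illustration of this signal model, the sketch below fits a low-order polynomial over a window centred at each evaluation instant, from noisy nonuniform samples. The window length, polynomial order, and test signal are assumptions for illustration, not the thesis's optimized choices.

```python
import numpy as np

def lpm_estimate(t_samples, x_samples, t_eval, window=0.2, order=2):
    """Estimate s(t) at the instants `t_eval` by least-squares polynomial
    fitting over the samples falling in a window centred at each instant."""
    est = np.empty(len(t_eval))
    for i, t0 in enumerate(t_eval):
        mask = np.abs(t_samples - t0) <= window / 2
        # Fit a low-order polynomial locally and evaluate it at t0.
        coeffs = np.polyfit(t_samples[mask] - t0, x_samples[mask], order)
        est[i] = coeffs[-1]  # polynomial value at the window centre
    return est

# Noisy nonuniform samples of a smooth signal s(t) = sin(2*pi*t).
rng = np.random.default_rng(1)
t_n = np.sort(rng.uniform(0.0, 1.0, 400))
x_n = np.sin(2 * np.pi * t_n) + rng.normal(0.0, 0.1, t_n.size)
t_grid = np.linspace(0.1, 0.9, 50)
s_hat = lpm_estimate(t_n, x_n, t_grid)
```

Note that the estimate is defined at arbitrary instants t0, not just at the sample locations, which is what makes the same machinery work for both uniform and nonuniform sampling.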
We show that, in both cases, the bias and variance take a general form in which the bias grows, and the variance shrinks, with the window length L over which the polynomial fitting is performed. The bias involves a function f of s(t), which typically comprises the higher-order derivatives of s(t), the order itself depending on the order of the polynomial; the variance involves a function g of the noise variance. The mean square error (MSE) is the sum of the squared bias and the variance, so the bias and variance have complementary characteristics with respect to L. Directly optimizing the MSE would give a value of L that involves the functions f and g. The function g may be estimated, but f is not known since s(t) is unknown. Hence, it is not practical to compute the minimum MSE (MMSE) solution. Therefore, we obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally select the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered over the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.
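The ICI rule used to resolve this trade-off can be sketched as follows. This is the generic intersection-of-confidence-intervals window-selection rule with an assumed threshold gamma; the thesis's contribution is the optimal tuning of such parameters, which is not reproduced here.

```python
import numpy as np

def ici_select(estimates, std_devs, gamma=2.0):
    """Intersection of confidence intervals (ICI) rule.

    `estimates[k]` is the signal estimate for the k-th (increasing)
    window length and `std_devs[k]` its standard deviation, which
    decreases as the window grows.  Return the index of the largest
    window whose confidence interval still intersects all previous ones.
    """
    lower_max = -np.inf
    upper_min = np.inf
    best = 0
    for k, (est, sd) in enumerate(zip(estimates, std_devs)):
        lower_max = max(lower_max, est - gamma * sd)
        upper_min = min(upper_min, est + gamma * sd)
        if lower_max > upper_min:   # intervals no longer intersect:
            break                   # bias has started to dominate
        best = k
    return best

# Toy example: estimates drift away (growing bias) while their
# standard deviation shrinks (growing window).
ests = [0.0, 0.05, 0.1, 0.8, 1.5]
sds = [0.5, 0.3, 0.2, 0.1, 0.05]
k_opt = ici_select(ests, sds)
```

The rule never needs the unknown function f: it detects the onset of bias purely from the point at which the shrinking confidence intervals stop overlapping.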
The next issue addressed is that of voiced/unvoiced segmentation of the speech signal. Speech segments show different spectral and temporal characteristics depending on whether the segment is voiced or unvoiced. Most speech processing techniques process the two segments differently. The challenge lies in making detection techniques offer robust performance in the presence of noise. We propose a new technique for voiced/unvoiced classification by taking into account the fact that voiced segments have a certain degree of regularity, whereas unvoiced segments do not possess any smoothness. In order to capture the regularity in voiced regions, we employ the LPM. The key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.

Title: Anonymity With Authenticity
URI: http://hdl.handle.net/2005/2374
Authors: Swaroop, D
Abstract: Cryptography is the science of secure message transmission. Cryptanalysis is concerned with breaking these encrypted messages. Together, cryptography and cryptanalysis constitute cryptology.
Anonymity means namelessness, i.e., the quality or state of being unknown, while authenticity is the quality or condition of being authentic or genuine. Anonymity and authenticity are two different embodiments of personal secrecy. Modern power has increased in its capacity to designate individuals, which makes it inconvenient for them to continue communicating while remaining anonymous.
In this thesis, we describe an anonymous system consisting of a number of entities that are anonymous and communicate with each other without revealing their identities, while at the same time maintaining their authenticity. Specifically, an anonymous entity (say E1) must be able to verify that the messages it receives from another anonymous entity (say E2), subsequent to an initial message from E2, are in fact from E2 itself. Later, when E2 tries to recommend a similar communication between E1 and another anonymous entity E3 in the system, E1 must be able to verify that recommendation, without E2 losing the authenticity of its communication with E1 to E3.
This thesis is divided into four chapters. The first chapter is an introduction to cryptography, symmetric key cryptography and public key cryptography. It also summarizes the contribution of this thesis.
The second chapter gives various protocols for the above problem, 'Anonymity with Authenticity', along with its extension. In total, six protocols are proposed for the problem.
In the third chapter, all six protocols are realized using four different schemes, where each scheme has its own pros and cons.
The fourth and final chapter concludes with a note on the factors based on which these four realization schemes may be chosen, and on other possible realization schemes.