etd@IISc Community: http://hdl.handle.net/2005/1

Title: Compact Modeling Of Asymmetric/Independent Double Gate MOSFET (http://hdl.handle.net/2005/2346)
Authors: Srivatsava, J
Abstract: For the past 40 years, relentless focus on Moore's Law transistor scaling has provided ever-increasing transistor performance and density. To continue technology scaling beyond the 22 nm node, it is clear that the conventional bulk MOSFET must be replaced by new device architectures, the most promising being the Multiple-Gate MOSFETs (MuGFETs). In mid-2011, Intel announced the use of bulk Tri-Gate FinFETs in its 22 nm high-volume logic process for the next-generation Ivy Bridge microprocessor, and other semiconductor companies are expected to adopt MuGFET devices soon. As with the bulk MOSFET, an accurate and physical compact model is essential for MuGFET-based circuit design.
Compact modeling efforts for MuGFETs began in the late nineties with the planar double-gate MOSFET (DGFET), as it is the simplest structure one can conceive for a MuGFET device. The models proposed so far for DG MOSFETs apply to common-gate symmetric DG (SDG) MOSFETs, in which both gates have equal oxide thicknesses. In practical devices at the nanoscale, however, there will always be some asymmetry between the gate oxide thicknesses due to process variations and uncertainties, which can affect device performance significantly. At the same time, independently controlled DG (IDG) MOSFETs have gained tremendous attention owing to their ability to modulate threshold voltage and transconductance dynamically. Because of the asymmetric nature of the electrostatics, developing efficient compact models for asymmetric/independent DG MOSFETs is a daunting task. This thesis provides some solutions to this challenge.
We propose simple surface-potential-based compact terminal charge models applicable to asymmetric double-gate (ADG) MOSFETs in two configurations: 1) common-gate and 2) independent-gate. The charge model proposed for the common-gate ADG (CDG) MOSFET is seamless between symmetric and asymmetric devices and exploits a unique, so-far-unexplored quasi-linear relationship between the surface potentials along the channel. In this model, the terminal charges can be computed by basic arithmetic operations from the surface potentials and applied biases; the model is easily implemented in any circuit simulator and extendable to short-channel devices. The charge model proposed for the independent ADG (IDG) MOSFET is based on a novel piecewise linearization of the surface potential along the channel. We show that the conventional charge linearization techniques used over the years in advanced compact models for bulk and double-gate (DG) MOSFETs are accurate only when the channel is fully hyperbolic in nature or the effective gate voltages are equal; for other bias conditions, they lead to significant error in the computed terminal charges. We demonstrate that the degree of nonlinearity between the surface potentials along the channel at a given bias condition dictates whether the conventional charge linearization technique can be applied. We therefore propose a piecewise linearization technique that segments the channel into multiple sections such that, within each section, the quasi-linear relationship between the surface potentials remains valid. The cumulative sum of the terminal charges obtained for these channel sections yields the terminal charges of the IDG device.
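The piecewise idea can be illustrated numerically. The sketch below is not the thesis's charge model; it only shows, for a toy nonlinear relationship phi2(phi1) (a tanh is assumed here purely for illustration), how segmenting the channel, applying a chord (quasi-linear) approximation per section, and summing the per-section contributions recovers a quantity that a single-segment linearization gets badly wrong.

```python
import math

def piecewise_linear_sum(phi1_s, phi1_d, phi2, n_seg):
    """Split the channel (parametrized by the gate-1 surface potential)
    into n_seg sections; within each, replace phi2(phi1) by its chord
    (the quasi-linear assumption) and accumulate the contributions."""
    total = 0.0
    for k in range(n_seg):
        a = phi1_s + (phi1_d - phi1_s) * k / n_seg
        b = phi1_s + (phi1_d - phi1_s) * (k + 1) / n_seg
        total += 0.5 * (phi2(a) + phi2(b)) * (b - a)  # chord contribution
    return total

phi2 = math.tanh                   # assumed toy nonlinear phi2(phi1)
exact = math.log(math.cosh(2.0))   # exact integral of tanh over [0, 2]
one_piece = piecewise_linear_sum(0.0, 2.0, phi2, 1)
many_pieces = piecewise_linear_sum(0.0, 2.0, phi2, 16)
```

With a single segment the error is large because the relationship is far from linear over the whole channel; with 16 segments it is small, mirroring why a single-shot charge linearization fails for general IDG bias conditions while the piecewise version remains valid.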
We next present our work on modeling non-ideal scenarios, such as the presence of body doping in CDG devices, and non-planar devices, such as Tri-gate FinFETs. For a fully depleted channel, we propose a simple technique to include the body doping term in our CDG charge model, using a perturbation of the effective gate voltage and a correction to the coupling factor. We also study the possibility of mapping a non-planar Tri-gate FinFET onto a planar DG model. In this framework we demonstrate that, except for large or tall devices, the generic mapping parameters become bias dependent, so an accurate bias-independent model valid for all geometries is not possible.
An efficient and robust "Root Bracketing Method" based algorithm is presented for computing the surface potential in IDG MOSFETs, where conventional Newton-Raphson techniques are inefficient owing to singularities and discontinuities in the input voltage equations. For CDG devices with small asymmetry, we then present a simple physics-based perturbation technique that computes the surface potential with computational complexity of the same order as for an SDG device. All the proposed models show excellent agreement with numerical and Technology Computer-Aided Design (TCAD) simulations over a wide range of bias conditions and geometries. The models are implemented in a professional circuit simulator through Verilog-A, and simulation examples for different circuits verify good model convergence.

Title: Image Reconstruction Based On Hilbert And Hybrid Filtered Algorithms With Inverse Distance Weight And No Backprojection Weight (http://hdl.handle.net/2005/2342)
Authors: Narasimhadhan, A V
Abstract: Filtered backprojection (FBP) reconstruction algorithms are very popular in X-ray computed tomography (CT) because of their numerical accuracy and low computational complexity. Ramp-filter-based fan-beam FBP reconstruction algorithms have a position-dependent weight in the backprojection, which is responsible for a spatially non-uniform distribution of noise and resolution, and for artifacts. Many algorithms based on shift-variant filtering or spatially-invariant interpolation in the backprojection step have been developed to address this issue, but they are computationally demanding. Recently, fan-beam algorithms based on Hilbert filtering, with inverse distance weight or no weight in the backprojection, have been derived using Hamaker's relation. These fan-beam reconstruction algorithms have been shown to improve uniformity of noise and resolution.
In this thesis, fan-beam FBP reconstruction algorithms with inverse distance backprojection weight and with no backprojection weight are presented and discussed for 2D image reconstruction in the two fan-beam scan geometries: equi-angular and equi-space detector arrays. Based on these algorithms, new 3D cone-beam FDK reconstruction algorithms with circular and helical scan trajectories are proposed for curved and planar detector geometries. To start with, three rebinning formulae from the literature are presented, and it is shown that all fan-beam FBP reconstruction algorithms can be derived from them. Specifically, two fan-beam algorithms with no backprojection weight, based on Hilbert filtering, are derived for the equi-space linear detector array, and one new fan-beam algorithm with inverse distance backprojection weight, based on hybrid filtering, is derived for both equi-angular and equi-space linear detector arrays. Simulation results for these algorithms are presented in terms of uniformity of noise and resolution, in comparison with the standard (ramp-filter-based) fan-beam FBP reconstruction algorithm. It is shown through simulation that the fan-beam reconstruction algorithm with inverse distance weight in the backprojection gives better noise performance while retaining the resolution properties. The reconstruction algorithms are also compared in terms of computational complexity.
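The position dependence of the conventional weight can be seen with a few lines of arithmetic. The snippet below is illustrative only, with an assumed geometry and distances not taken from the thesis: in conventional equi-angular fan-beam FBP the backprojection weight behaves as 1/L^2, where L is the source-to-pixel distance, so it differs from pixel to pixel and view to view; the Hilbert-filtering-based algorithms discussed above replace this with an inverse distance weight (1/L) or no weight at all.

```python
import math

def source_to_pixel_distance(x, y, beta, D):
    """Distance L from a source on a circle of radius D at angle beta
    to the pixel (x, y) in the reconstruction plane."""
    sx, sy = D * math.cos(beta), D * math.sin(beta)
    return math.hypot(x - sx, y - sy)

D = 3.0  # assumed source-to-isocenter distance (arbitrary units)
L_center = source_to_pixel_distance(0.0, 0.0, 0.0, D)  # pixel at isocenter
L_edge = source_to_pixel_distance(0.8, 0.0, 0.0, D)    # pixel near the edge
w_center, w_edge = 1.0 / L_center**2, 1.0 / L_edge**2  # conventional 1/L^2 weight
ratio = w_edge / w_center  # strong spatial variation of the weight
```

The large pixel-to-pixel ratio of the 1/L^2 weight is what drives the spatially non-uniform noise and resolution of the conventional algorithm; removing or softening the weight removes this source of non-uniformity.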
State-of-the-art 3D X-ray imaging systems in medicine with cone-beam (CB) circular and helical computed tomography scanners use non-exact (approximate) FBP-based reconstruction algorithms, which are attractive because of their simplicity and low computational cost. However, they produce sub-optimal reconstructed images with respect to cone-beam artifacts, noise, and, in the case of circular-trajectory scan imaging, axial intensity drop. The axial intensity drop in the reconstructed image is due to the insufficient data acquired by circular-trajectory CB CT. This thesis investigates improving image quality by means of Hilbert and hybrid filtering based algorithms using redundant data for Feldkamp, Davis and Kress (FDK) type reconstruction. New FDK-type reconstruction algorithms for cylindrical and planar detectors in circular CB CT are developed by extending to three dimensions (3D) an exact Hilbert-filtering-based FBP algorithm for 2D fan-beam reconstruction with no position-dependent backprojection weight, and a fan-beam algorithm with inverse distance backprojection weight. The proposed FDK reconstruction algorithm with inverse distance weight in the backprojection requires full-scan projection data, while the FDK reconstruction algorithm with no backprojection weight can handle partial-scan data, including very short scans. The FDK reconstruction algorithms with no backprojection weight for circular CB CT are compared with Hu's, FDK, and T-FDK reconstruction algorithms in terms of axial intensity drop and computational complexity. Simulation results on noise, CB artifact performance, and execution time, as well as partial-scan reconstruction abilities, are presented.
We show that FDK reconstruction algorithms with no backprojection weight have better noise performance than the conventional FDK reconstruction algorithm, whose backprojection weight is known to result in spatially non-uniform noise characteristics.
In this thesis, we also present an efficient method to reduce the axial intensity drop in circular CB CT. The method consists of two steps: first, reconstructing the object using the FDK reconstruction algorithm with no backprojection weight, and second, estimating the missing term. The method is comparable to Zhu et al.'s method in terms of reduction in axial intensity drop, noise, and computational complexity.
The helical scanning trajectory satisfies the Tuy-Smith condition; hence an exact and stable reconstruction is possible. However, the helical FDK reconstruction algorithm produces cone-beam artifacts because it is approximate in its derivation. In this thesis, a helical FDK reconstruction algorithm based on Hilbert filtering with no backprojection weight and an FDK reconstruction algorithm based on hybrid filtering with inverse distance backprojection weight are presented to reduce the CB artifacts. These algorithms are compared with the standard helical FDK algorithm in terms of noise, CB artifacts, and computational complexity.

Title: Performance Evaluation Of Fan-beam And Cone-beam Reconstruction Algorithms With No Backprojection Weight On Truncated Data Problems (http://hdl.handle.net/2005/2343)
Authors: Sumith, K
Abstract: This work focuses on linear-prediction-based projection completion for fan-beam and cone-beam reconstruction algorithms with no backprojection weight. Truncated data problems are well studied in computed tomography research; however, perfect image reconstruction from truncated data has not yet been achieved, and only approximately accurate solutions have been obtained, so research in this area continues to strive for results closer to the exact ones. Linear prediction techniques are adopted for truncation completion in this work because previous research on truncated data problems has shown that they work well compared with other techniques such as polynomial fitting and iterative methods. Linear prediction is a model-based technique. The autoregressive (AR) and moving average (MA) models are the two important models, along with the combined autoregressive moving average (ARMA) model. The AR model is used in this work because of the simplicity it provides in calculating the prediction coefficients. The order of the model is chosen based on the partial autocorrelation function of the projection data, as established in previous research in this area. Truncated projection completion using linear prediction and windowed linear prediction shows that reasonably accurate reconstruction is achieved. Windowed linear prediction provides a better estimate of the missing data; the reason is given in the literature and is restated for the reader's convenience in this work.
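A minimal sketch of AR-based extrapolation, with the order, data, and fitting method all assumed for illustration rather than taken from the thesis: fit AR(2) coefficients to the known part of a signal by least squares, then extend the signal past its truncation point with the fitted recursion.

```python
def fit_ar2(x):
    """Least-squares AR(2) fit: x[n] ~ a1*x[n-1] + a2*x[n-2].
    Solves the 2x2 normal equations directly (toy illustration)."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for n in range(2, len(x)):
        s11 += x[n-1] * x[n-1]
        s12 += x[n-1] * x[n-2]
        s22 += x[n-2] * x[n-2]
        b1 += x[n] * x[n-1]
        b2 += x[n] * x[n-2]
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

def ar2_extend(x, a1, a2, extra):
    """Extrapolate `extra` samples past the truncation point."""
    x = list(x)
    for _ in range(extra):
        x.append(a1 * x[-1] + a2 * x[-2])
    return x

# Synthetic signal that is exactly AR(2); the fit recovers the coefficients
# and the extension reproduces the "missing" samples.
x = [1.0, 1.0]
for n in range(2, 40):
    x.append(1.8 * x[-1] - 0.9 * x[-2])
a1, a2 = fit_ar2(x[:30])                 # fit on the untruncated part
completed = ar2_extend(x[:30], a1, a2, 10)
```

Real projection data is of course not exactly AR, which is where the model-order choice via the partial autocorrelation function, and the windowed variant, come in.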
The advantages of fan-beam reconstruction algorithms with no backprojection weight over those with backprojection weight motivated us to use the former for reconstructing the truncation-completed projection data. The results obtained are compared with previous work that used conventional fan-beam reconstruction algorithms with backprojection weight. The intensity plots and noise performance results show the improvements gained by using the fan-beam reconstruction algorithm with no backprojection weight. The work is also extended to the Feldkamp, Davis, and Kress (FDK) reconstruction algorithm with no backprojection weight for the helical scanning geometry, and the results are compared with the FDK reconstruction algorithm with backprojection weight for the same geometry.

Title: Graph Models For Query Focused Text Summarization And Assessment Of Machine Translation Using Stopwords (http://hdl.handle.net/2005/2294)
Authors: Rama, B
Abstract: Text summarization is the task of generating a shortened version of a text in which the core ideas of the original are retained. In this work, we focus on query-focused summarization: generating, from a set of documents, a summary that answers a query. Query-focused summarization is hard because the summary is expected to be biased towards the query while, at the same time, the important concepts in the original documents must be preserved with a high degree of novelty.
Graph-based ranking algorithms that use a biased random-surfer model, such as Topic-sensitive LexRank, have been applied to query-focused summarization. In our work, we propose a look-ahead version of Topic-sensitive LexRank: we incorporate a look-ahead option into the random walk model and show that it helps generate better-quality summaries.
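The biased random-surfer model underlying Topic-sensitive LexRank can be sketched as a personalized power iteration. This is a plain sketch, not the thesis's look-ahead variant (which additionally lets the surfer consider neighbors' query relevance before moving); the graph, bias distribution, and damping value below are illustrative assumptions.

```python
def topic_sensitive_rank(W, bias, d=0.85, iters=200):
    """Power iteration for a biased random surfer on a sentence-similarity
    graph W; `bias` is the query-relevance distribution (sums to 1)."""
    n = len(W)
    # Normalize rows of the similarity matrix to get transition probabilities.
    P = [[W[i][j] / sum(W[i]) for j in range(n)] for i in range(n)]
    r = [1.0 / n] * n
    for _ in range(iters):
        r = [(1 - d) * bias[j] + d * sum(r[i] * P[i][j] for i in range(n))
             for j in range(n)]
    return r

# Three sentences; sentence 0 matches the query strongly, sentence 2 does not.
W = [[1.0, 1.0, 0.0],
     [1.0, 1.0, 1.0],
     [0.0, 1.0, 1.0]]
bias = [0.8, 0.1, 0.1]
scores = topic_sensitive_rank(W, bias)
```

The query bias lifts sentence 0 above the equally connected sentence 2, which is the behavior that makes the summary query-focused rather than purely centrality-driven.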
Next, we consider the assessment of machine translation. Assessing machine translation output is important for establishing benchmarks for translation quality. An obvious way to assess quality is through the perception of human subjects; though highly reliable, this approach is time consuming and does not scale. Hence, mechanisms have been devised to automate the assessment process. All such assessment methods are essentially studies of the correlation between human and machine translations.
In this work, we present a scalable approach to assessing machine translation quality that borrows features from the study of writing styles, popularly known as Stylometry. Towards this, we quantify the characteristic styles of individual machine translators and compare them with that of human-generated text. The translator whose style is closest to the human style is deemed to produce the higher-quality translation. We show that our approach is scalable and does not require actual source-text translations for evaluation.
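One simple stylometric feature of the kind described above is the relative frequency of stopwords. The sketch below is an illustrative stand-in, not the thesis's exact feature set: the stopword list, the texts, and the cosine measure are all assumptions. Each text is mapped to a stopword-frequency profile, and the candidate whose profile is most similar to the human reference is ranked higher.

```python
import math

STOPWORDS = ["the", "of", "and", "to", "in", "a", "is", "that"]  # tiny assumed list

def stopword_profile(text):
    """Relative frequency of each stopword in the text."""
    words = text.lower().split()
    total = max(len(words), 1)
    return [words.count(s) / total for s in STOPWORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

human = "the style of a translation is judged by the use of the small words"
mt_a = "the style of the translation is judged by the usage of small words"
mt_b = "style translation judged usage small words"  # stopwords stripped out

sim_a = cosine(stopword_profile(human), stopword_profile(mt_a))
sim_b = cosine(stopword_profile(human), stopword_profile(mt_b))
```

Here mt_a, whose stopword profile is closer to the human reference, would be deemed the higher-quality output; note that no source text or reference translation is needed, which is what makes this style of assessment scalable.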