etd@IISc Collection:
http://hdl.handle.net/2005/20
Fri, 27 Feb 2015 18:10:24 GMT
http://hdl.handle.net/2005/2342
Title: Image Reconstruction Based On Hilbert And Hybrid Filtered Algorithms With Inverse Distance Weight And No Backprojection Weight
Authors: Narasimhadhan, A V
Abstract: Filtered backprojection (FBP) reconstruction algorithms are very popular in the field of X-ray computed tomography (CT) because they offer advantages in terms of numerical accuracy and computational complexity. Ramp-filter-based fan-beam FBP reconstruction algorithms have a position-dependent weight in the backprojection, which is responsible for a spatially non-uniform distribution of noise and resolution, and for artifacts. Many algorithms based on shift-variant filtering or spatially-invariant interpolation in the backprojection step have been developed to deal with this issue. However, these algorithms are computationally demanding. Recently, fan-beam algorithms based on Hilbert filtering with inverse distance weight and with no weight in the backprojection have been derived using Hamaker's relation. These fan-beam reconstruction algorithms have been shown to improve uniformity of noise and resolution.
In this thesis, fan-beam FBP reconstruction algorithms with inverse distance backprojection weight and with no backprojection weight for 2D image reconstruction are presented and discussed for the two fan-beam scan geometries: equi-angular and equi-space detector arrays. Based on these fan-beam reconstruction algorithms, new 3D cone-beam FDK reconstruction algorithms with circular and helical scan trajectories for curved and planar detector geometries are proposed. To start with, three rebinning formulae from the literature are presented, and it is shown that all fan-beam FBP reconstruction algorithms can be derived from them. Specifically, two fan-beam algorithms with no backprojection weight based on Hilbert filtering for the equi-space linear detector array, and one new fan-beam algorithm with inverse distance backprojection weight based on hybrid filtering for both the equi-angular and equi-space linear detector arrays, are derived. Simulation results for these algorithms, in terms of uniformity of noise and resolution in comparison to the standard (ramp-filter-based) fan-beam FBP reconstruction algorithm, are presented. It is shown through simulation that the fan-beam reconstruction algorithm with inverse distance weight in the backprojection gives better noise performance while retaining the resolution properties. A comparison between the above-mentioned reconstruction algorithms is given in terms of computational complexity.
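The distinction between the two filter families can be illustrated in the frequency domain: the classical FBP ramp filter has response |w|, while the Hilbert filter has response -i*sgn(w). The following is a minimal numpy sketch of filtering a single projection row; it is illustrative only (practical implementations use band-limited, apodized kernels together with the full fan-beam weighting, which is not reproduced here).

```python
import numpy as np

def filter_projection(p, kind="ramp"):
    """Apply a ramp (|w|) or Hilbert (-i*sgn(w)) filter to one
    projection row via the FFT. Illustrative sketch only."""
    n = len(p)
    w = np.fft.fftfreq(n)              # normalized frequencies
    P = np.fft.fft(p)
    if kind == "ramp":
        H = np.abs(w)                  # ramp filter of classical FBP
    else:
        H = -1j * np.sign(w)           # ideal Hilbert transformer
    return np.real(np.fft.ifft(H * P))

# Sanity check: the Hilbert filter maps cos to sin.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
h = filter_projection(np.cos(4 * t), kind="hilbert")
```

Applying the ramp filter to a constant projection returns zero (the ramp suppresses DC), which is one reason the two filter types distribute noise differently.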
The state-of-the-art 3D X-ray imaging systems in medicine with cone-beam (CB) circular and helical computed tomography scanners use non-exact (approximate) FBP-based reconstruction algorithms. These are attractive because of their simplicity and low computational cost. However, they produce sub-optimal reconstructed images with respect to cone-beam artifacts and noise, and to axial intensity drop in the case of circular-trajectory scan imaging. The axial intensity drop in the reconstructed image is due to the insufficient data acquired by circular-scan-trajectory CB CT. This thesis investigates improving the image quality by means of Hilbert and hybrid filtering based algorithms using redundant data for Feldkamp, Davis and Kress (FDK) type reconstruction algorithms. New FDK-type reconstruction algorithms for cylindrical and planar detectors in CB circular CT are developed, obtained by extending to three dimensions (3D) an exact Hilbert-filtering-based 2D fan-beam FBP algorithm with no position-dependent backprojection weight and a fan-beam algorithm with inverse distance backprojection weight. The proposed FDK reconstruction algorithm with inverse distance weight in the backprojection requires full-scan projection data, while the FDK reconstruction algorithm with no backprojection weight can handle partial-scan data, including very-short-scan data. The FDK reconstruction algorithms with no backprojection weight for circular CB CT are compared with Hu's, FDK and T-FDK reconstruction algorithms in terms of axial intensity drop and computational complexity. Simulation results on noise, CB artifact performance and execution time, as well as the partial-scan reconstruction abilities, are presented.
We show that FDK reconstruction algorithms with no backprojection weight have better noise performance than the conventional FDK reconstruction algorithm, whose backprojection weight is known to result in spatially non-uniform noise characteristics.
In this thesis, we also present an efficient method to reduce the axial intensity drop in circular CB CT. It consists of two steps: first, the object is reconstructed using the FDK reconstruction algorithm with no backprojection weight; second, the missing term is estimated. The method is comparable to Zhu et al.'s method in terms of reduction in axial intensity drop, noise and computational complexity.
The helical scanning trajectory satisfies the Tuy-Smith condition, hence an exact and stable reconstruction is possible. However, the helical FDK reconstruction algorithm produces cone-beam artifacts, since its derivation is approximate. In this thesis, helical FDK reconstruction algorithms based on Hilbert filtering with no backprojection weight and on hybrid filtering with inverse distance backprojection weight are presented to reduce the CB artifacts. These algorithms are compared with the standard helical FDK algorithm in terms of noise, CB artifacts and computational complexity.
Tue, 15 Jul 2014 18:30:00 GMT
http://hdl.handle.net/2005/2343
Title: Performance Evaluation Of Fan-beam And Cone-beam Reconstruction Algorithms With No Backprojection Weight On Truncated Data Problems
Authors: Sumith, K
Abstract: This work focuses on linear-prediction-based projection completion for fan-beam and cone-beam reconstruction algorithms with no backprojection weight. Truncated data problems have long been addressed in computed tomography research; however, perfect image reconstruction from truncated data has not yet been achieved, and only approximately accurate solutions have been obtained. Research in this area therefore continues to strive for results as close to perfect as possible. Linear prediction techniques are adopted for truncation completion in this work because previous research on truncated data problems has shown that they work well compared to other techniques such as polynomial fitting and iterative methods. Linear prediction is a model-based technique; the autoregressive (AR) and moving average (MA) models are the two important models, along with the autoregressive moving average (ARMA) model. The AR model is used in this work because of the simplicity it provides in calculating the prediction coefficients. The order of the model is chosen based on the partial autocorrelation function of the projection data, as established in previous research in this area. The truncated projection completion using linear prediction and windowed linear prediction shows that reasonably accurate reconstruction is achieved. Windowed linear prediction provides a better estimate of the missing data; the reason for this is given in the literature and is restated for the reader's convenience in this work.
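As an illustration of the completion step, the sketch below fits AR (linear-prediction) coefficients by least squares (the covariance method; the thesis' exact estimator, model order and windowing are not reproduced here) and extrapolates a truncated projection row sample by sample.

```python
import numpy as np

def lp_coefficients(x, order):
    """Fit linear-prediction (AR) coefficients by least squares:
    x[n] ~ a1*x[n-1] + ... + ap*x[n-p] (covariance method)."""
    N = len(x)
    X = np.column_stack([x[order - k:N - k] for k in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(X, x[order:N], rcond=None)
    return a

def lp_extrapolate(x, order, n_missing):
    """Extend a truncated row by n_missing samples using
    one-step-ahead AR prediction (sketch of the completion step)."""
    a = lp_coefficients(x, order)
    y = list(x)
    for _ in range(n_missing):
        y.append(float(np.dot(a, y[:-order - 1:-1])))
    return np.array(y)

# A sampled cosine satisfies an exact AR(2) recursion, so the
# extrapolation recovers the missing continuation almost exactly.
x = np.cos(0.2 * np.arange(100))
ext = lp_extrapolate(x, order=2, n_missing=30)
```

For real projection data the signal is only approximately AR, which is why the choice of order (via the partial autocorrelation function) and windowing matter.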
The advantages of fan-beam reconstruction algorithms with no backprojection weight over those with backprojection weight motivated us to use the former for reconstructing the truncation-completed projection data. The results obtained are compared with previous work that used conventional fan-beam reconstruction algorithms with backprojection weight. The intensity plots and the noise performance results show improvements resulting from using the fan-beam reconstruction algorithm with no backprojection weight. The work is also extended to the Feldkamp, Davis, and Kress (FDK) reconstruction algorithm with no backprojection weight for the helical scanning geometry, and the results are compared with the FDK reconstruction algorithm with backprojection weight for the same geometry.
Tue, 15 Jul 2014 18:30:00 GMT
http://hdl.handle.net/2005/2336
Title: Development Of Algorithms For Improved Planning And Operation Of Deregulated Power Systems
Authors: Surendra, S
Abstract: Transmission pricing and congestion management are two important aspects of modern power sectors working under a deregulated environment or moving towards a deregulated (open access) system from a regulated one. The transformation of the power sector to an open access environment, with the participation of the private sector and potential power suppliers under a regime of trading electricity as a commodity, is aimed at overcoming some of the limitations faced by the vertically integrated system. It is believed that this transformation will bring in new technologies and efficient, alternative sources of power which are greener, self-sustaining and competitive.
There is an ever-increasing demand for electrical power due to changing lifestyles, fueled by modernization and growth. Augmentation of existing capacity, siting of new power plants, and a search for alternative viable sources of energy with less impact on the environment are being taken up.
The cost of energy production differs with the type, location and technology of the power plants integrated into the grid. In interconnected networks, power can flow from one point to another along an infinite number of possible paths, determined by the circuit parameters, operating conditions, network topology and connected loads. The transmission facility provided for power transfer has to recover its charges from the entities present in the network based on the extent of utilization. Since power transmission losses account for nearly 4 to 8% of the total generation, they have to be accounted for and shared properly among the entities according to their connected generation/load.
In this context, this thesis evaluates the shortcomings of existing tracing methods and proposes a tracing method based upon the actual operating conditions of the network, taking into account the network parameters, the voltage gradient among the connected buses and the network topology as obtained from online state estimation/load flow studies. The proposed concept is relatively simple and easy to implement within a given transactional period. The proposed method is compared against one of the existing tracing techniques available in the literature. Both active and reactive power tracing are handled in one go.
The summation of the partial contributions from all the sources in any given line of the system always matches the respective base-case flow. The AC power flow equations themselves are nonlinear in nature. Since the sum of the respective partial flows in a given branch is always equal to the original flow, these are termed virtual flows, and the effect of nonlinearity is still unknown. The virtual flows in a given line are complex in nature, and their complex sum is equal to the original complex power flow of the base case. It is required to determine whether these are the true partial flows. To answer this, a DC equivalent of the original AC network is proposed, called the R-P equivalent model. This model consists only of the resistances of the original network (the resistances of transformers and lines, neglecting the series reactances and shunt charging). The real power injections of the AC network, i.e. sources into the respective buses and loads (negative real power injections), are taken as the injection measurements of this R-P model, and the bus voltages (purely real quantities) are estimated using the method of least squares. Complex quantities are absent in this model; only real terms, which are either sums or differences, are present. For this model, virtual flows are evaluated, and it has been verified that the virtual real power contributions from the sources are in near agreement with those of the original AC network. This implies that the virtual flows determined for the original network can be applied in day-to-day applications.
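A minimal sketch of the R-P model idea, under stated assumptions: build the nodal conductance matrix from branch resistances alone, treat the real power injections as the measurements, fix a slack-bus voltage (here at 1.0 p.u.), and estimate the remaining purely real bus voltages by least squares. The bus numbering, units and slack treatment are illustrative, not the thesis' exact formulation.

```python
import numpy as np

def rp_model_voltages(branches, resistances, injections, n_bus, slack=0):
    """Solve the R-P equivalent G @ V = P in a least-squares sense
    with the slack-bus voltage fixed at 1.0 (illustrative sketch)."""
    G = np.zeros((n_bus, n_bus))
    for (i, j), r in zip(branches, resistances):
        g = 1.0 / r                        # branch conductance
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g
    keep = [b for b in range(n_bus) if b != slack]
    # Move the known slack voltage to the right-hand side.
    rhs = np.asarray(injections, float)[keep] - G[np.ix_(keep, [slack])].ravel() * 1.0
    v_keep, *_ = np.linalg.lstsq(G[np.ix_(keep, keep)], rhs, rcond=None)
    V = np.ones(n_bus)
    V[keep] = v_keep
    return V, G

# Three-bus example: injections sum to zero (one source, two loads).
V, G = rp_model_voltages([(0, 1), (1, 2), (0, 2)],
                         [0.1, 0.2, 0.1], [1.5, -0.5, -1.0], n_bus=3)
```

The recovered voltages satisfy the nodal balance G @ V = P at every bus, so per-branch flows (V_i - V_j)/r_ij can then be split into source-wise virtual contributions.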
An important feature of the virtual flows is that it is possible to identify counter-flow components. Counter-flow components are transactions taking place in the direction opposite to the net flow in a branch. If a particular source produces a counter flow in a given line, then it is in effect reducing congestion to that extent. This information is lacking in most of the existing techniques. Counter flows are useful in managing congestion.
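Once the per-source virtual flows on a branch are known, identifying the counter-flow components is a simple sign test against the net branch flow. A sketch (source names and MW values are illustrative):

```python
def counter_flow_components(partial_flows):
    """Return the sources whose partial (virtual) flow on a branch
    opposes the net branch flow, i.e. the counter-flow components.
    partial_flows: dict source -> signed MW contribution."""
    net = sum(partial_flows.values())
    return {s: f for s, f in partial_flows.items() if f * net < 0}

# G2 flows against the 70 MW net flow, so it relieves congestion here.
cf = counter_flow_components({"G1": 80.0, "G2": -15.0, "G3": 5.0})
```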
HVDC links are integrated with HVAC systems in order to transfer bulk power and for the additional advantages they offer. The incremental cost of a DC link is zero due to the closed-loop control techniques implemented to maintain constant power transfer (excluding constant voltage or constant current control). Consequently, cost allocation to HVDC is still a problem. The proposed virtual power flow tracing method is extended to HVAC systems integrated with HVDC in order to determine the extent of utilization of a given link by the sources. Before evaluating the virtual contributions to the HVDC links, the steady-state operating condition of the combined system is obtained by performing a sequential load flow.
Congestion is one of the main aspects of a deregulated system and is a result of several transactions taking place simultaneously through a given transmission facility. Congestion can be managed by providing pricing signals for the transmission usage by the parties involved. It can also arise from the non-availability of transmission paths due to line outages as a result of contingencies. In such a case, redispatch of generator active power is considered a viable option, in addition to other available controls such as phase shifters and UPFCs, to streamline the transactions within the available corridors. The virtual power flow tracing technique proposed in the thesis is used as a guide for managing congestion occurring due to transactions/contingencies to the extent possible. The utilization of a given line by the sources present in the network, in terms of real power flow, is thus obtained. These line utilization factors are called T-coefficients, and they are approximately constant for moderate increments in active power from the sources. A simple fuzzy-logic-based decision system is proposed in order to obtain active power rescheduling from the sources for managing network congestion. In order to enhance system stability after rescheduling, reactive power optimization is carried out; sample and real-life systems are used to illustrate the proposed approaches.
For secure operation of the network, the ideal proportion of active power scheduled from the sources present in the network for a given load pattern is found from the network [FLG] matrix. The elements of this matrix are used in the computation of the static voltage stability index (L-index). The [FLG] matrix is obtained from the partitioned network YBUS matrix and gives the Relative Electrical Distance (RED) of each load with respect to the sources present in the network. From this RED, the ideal proportion of real power to be drawn by a given load from the different sources can be determined. This proportion of active power scheduling from the sources is termed the Desired Proportion of Generation (DPG). If the generations are scheduled accordingly, the network operates with less angular separation among system buses (improved angular stability), improved voltage profiles and better voltage stability. Further, the partitioned [KGL] matrix reveals the relative proportion in which the loads should draw active power from the sources as per the DPG, irrespective of the present scheduling. The other partitioned matrix, [Y'GG], is useful in finding the deviation of the present active power output of the sources from the ideal schedule.
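In the standard partitioning used with the L-index, the load-to-generator coupling matrix is [FLG] = -inv([YLL]) @ [YLG], and the RED is derived from its magnitudes. The sketch below computes [FLG] from a shunt-free bus admittance matrix; the final RED/DPG formulae of the thesis are not reproduced, and the network data are illustrative. A useful check: with no shunt elements the rows of the Ybus sum to zero, which forces each row of [FLG] to sum to 1.

```python
import numpy as np

def build_ybus(n_bus, branches):
    """Assemble a shunt-free bus admittance matrix from (i, j, y) branches."""
    Y = np.zeros((n_bus, n_bus), complex)
    for i, j, y in branches:
        Y[i, i] += y; Y[j, j] += y
        Y[i, j] -= y; Y[j, i] -= y
    return Y

def f_lg(Ybus, gen_buses, load_buses):
    """[FLG] = -inv(YLL) @ YLG from the partitioned Ybus; the Relative
    Electrical Distance is derived from its magnitudes (omitted here)."""
    YLL = Ybus[np.ix_(load_buses, load_buses)]
    YLG = Ybus[np.ix_(load_buses, gen_buses)]
    return -np.linalg.solve(YLL, YLG)

# Four-bus example: buses 0, 1 are generators; 2, 3 are loads.
Y = build_ybus(4, [(0, 2, -4j), (1, 3, -5j), (2, 3, -3j), (0, 3, -2j)])
F = f_lg(Y, gen_buses=[0, 1], load_buses=[2, 3])
```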
Many regional power systems are interconnected to form large integrated grids for both technical and economic benefits. In such situations, Generation Expansion Planning (GEP) has to be undertaken along with augmentation of the existing transmission facilities. Generation expansion at certain locations needs new transmission networks, which involves serious problems such as obtaining right-of-way and environmental clearance. An approach to finding suitable generation expansion locations in different zones with the least requirement of transmission network expansion has been attempted using the concept of RED. For the anticipated load growth, the capacity and siting of generation facilities are identified on a zonal basis. Using sample systems and real-life systems, the validity of the proposed approach is demonstrated using performance criteria such as voltage stability, effect on line MVA loadings and real power losses.
Thu, 03 Jul 2014 18:30:00 GMT
http://hdl.handle.net/2005/2363
Title: Lexicon-Free Recognition Strategies For Online Handwritten Tamil Words
Authors: Sundaram, Suresh
Abstract: In this thesis, we address some of the challenges involved in developing a robust, writer-independent, lexicon-free system to recognize online Tamil words. Tamil, being a Dravidian language, is morphologically rich and agglutinative, and thus does not have a finite lexicon. For example, a single verb root can easily lead to hundreds of words after morphological changes and agglutination. Further, a lexicon-free recognition approach applies to form-filling applications, where it can become cumbersome (if not impossible) for a lexicon to capture all possible names. Under such circumstances, one must explore the possibility of segmenting a Tamil word into its individual symbols.
The modern-day Tamil alphabet comprises 23 consonants and 11 vowels, forming a total of 313 characters/aksharas. A minimal set of 155 distinct symbols has been derived to recognize these characters. A corpus of isolated Tamil symbols (the IWFHR database) is used for deriving the various statistics proposed in this work. To address the challenges of segmentation and recognition (the primary focus of the thesis), Tamil words were collected using a custom application running on a tablet PC. A set of 10000 words (comprising 53246 symbols) was collected from high school students and used for the experiments in this thesis. We refer to this database as the 'MILE word database'.
In the first part of the work, a feedback-based word segmentation mechanism is proposed. Initially, the Tamil word is segmented based on a bounding-box overlap criterion. This dominant overlap criterion segmentation (DOCS) generates a set of candidate stroke groups. Thereafter, attention is paid to certain attributes of the resulting stroke groups for detecting any possible splits or under-segmentations. By relying on feedback provided by a priori knowledge of attributes such as the number of dominant points and inter-stroke displacements, the recognition label and likelihood of the primary SVM classifier, and linguistic knowledge of the detected stroke groups, a decision is taken whether or not to correct them. Accordingly, we call the proposed segmentation 'attention feedback segmentation' (AFS). Across the words in the MILE word database, a segmentation rate of 99.7% is achieved at the symbol level with AFS. The high segmentation rate (with feedback) in turn improves the symbol recognition rate of the primary SVM classifier from 83.9% (with DOCS alone) to 88.4%.
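The initial DOCS pass can be sketched as grouping consecutive strokes whose horizontal bounding boxes overlap; the threshold and merge rule below are illustrative, not the thesis' exact criterion.

```python
def docs_segment(strokes, overlap_thresh=0.0):
    """Dominant-overlap-criterion segmentation sketch: merge consecutive
    strokes whose horizontal bounding boxes overlap.
    strokes: list of lists of (x, y) points, in writing order."""
    groups = []
    for stroke in strokes:
        xs = [p[0] for p in stroke]
        box = (min(xs), max(xs))
        if groups:
            gx0, gx1 = groups[-1]["box"]
            overlap = min(gx1, box[1]) - max(gx0, box[0])
            if overlap > overlap_thresh:      # boxes overlap -> same symbol
                groups[-1]["strokes"].append(stroke)
                groups[-1]["box"] = (min(gx0, box[0]), max(gx1, box[1]))
                continue
        groups.append({"strokes": [stroke], "box": box})
    return groups

# Two overlapping strokes form one candidate group; the third stands alone.
g = docs_segment([[(0, 0), (10, 5)], [(4, 1), (9, 6)], [(20, 0), (30, 5)]])
```

The AFS feedback step then inspects each candidate group and decides whether to split or merge further, which this sketch does not model.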
For the segmentation stage, the SVM classifier fed with the x-y trace of the normalized and resampled online stroke groups is quite effective. However, the classifier is not robust enough to distinguish between many sets of similar-looking symbols. In order to improve the symbol recognition performance, we explore two approaches, namely reevaluation strategies and language models.
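The x-y trace feature mentioned above amounts to normalizing a stroke group to a unit box and resampling it to a fixed number of points, equally spaced in arc length. A sketch follows; the sample count and the exact normalization are assumptions, not the thesis' settings.

```python
import numpy as np

def preprocess_stroke_group(points, n_samples=60):
    """Normalize an online stroke group to a unit box (preserving aspect
    ratio) and resample to n_samples points equally spaced in arc length,
    yielding the x-y trace fed to the symbol classifier (sketch)."""
    p = np.asarray(points, float)
    p -= p.min(axis=0)                       # shift to the origin
    p /= max(p.max(), 1e-12)                 # isotropic scaling
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    u = np.linspace(0.0, s[-1], n_samples)
    return np.column_stack([np.interp(u, s, p[:, 0]),
                            np.interp(u, s, p[:, 1])])

# A diagonal pen trace becomes 50 evenly spaced points on the unit diagonal.
out = preprocess_stroke_group([(0.0, 0.0), (2.0, 2.0)], n_samples=50)
```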
The reevaluation techniques resolve ambiguities in base consonants, pure consonants and vowel modifiers to a considerable extent. For the frequently confused sets (derived from the confusion matrix), a dynamic time warping (DTW) approach is proposed to automatically extract their discriminative regions. Dedicated to each confusion set, novel localized cues are derived from the discriminative region for disambiguation. The proposed features are quite promising in improving the symbol recognition performance on the confusion sets. A comparative experimental analysis of these features against x-y coordinates is performed to judge their discriminative power. The resolution of confusions is accomplished with expert networks comprising a discriminative region extractor, a feature extractor and an SVM. The proposed techniques improve the symbol recognition rate by 3.5% (from 88.4% to 91.9%) on the MILE word database over the primary SVM classifier.
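At the core of the discriminative-region extraction is the standard DTW alignment between two traces. The textbook recursion is sketched below for 1-D sequences; the thesis' region-extraction logic built on top of the alignment path is not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two 1-D sequences:
    D[i][j] = cost(i, j) + min of the three predecessor cells."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Warping lets a repeated sample align at zero cost, which is exactly why DTW is preferred over pointwise distance for handwriting traces of differing lengths.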
In the final part of the thesis, we integrate linguistic knowledge (derived from a text corpus) into the primary recognition system. Biclass, bigram and unigram language models at the symbol level are compared in terms of recognition performance. Among the three models, the bigram model gives the highest recognition accuracy. A class reduction approach for recognition is adopted by incorporating the bigram language model at the akshara level. Lastly, a judicious combination of reevaluation techniques with language models is proposed. Overall, an improvement of up to 4.7% (from 88.4% to 93.1%) in symbol-level accuracy is achieved.
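The flavor of combining classifier scores with a symbol-level bigram model can be sketched as follows. The greedy left-to-right search, the interpolation weight, the smoothing floor and the symbol names are all illustrative assumptions (a Viterbi search over the full lattice is the usual choice, and the thesis' exact combination is not reproduced).

```python
import math

def rescore_word(candidates, bigram, alpha=0.7):
    """Greedily pick, position by position, the symbol maximizing a
    weighted sum of log classifier probability and log bigram probability.
    candidates: per position, dict symbol -> classifier probability.
    bigram: dict (prev, cur) -> P(cur | prev); unseen pairs get a floor."""
    prev, out = "<s>", []
    for cand in candidates:
        best = max(cand, key=lambda s: alpha * math.log(cand[s])
                   + (1 - alpha) * math.log(bigram.get((prev, s), 1e-6)))
        out.append(best)
        prev = best
    return out

# The classifier slightly prefers 'va' at position 2, but the bigram
# model overturns it in favor of the linguistically likely 'la'.
cands = [{"ka": 0.9, "ta": 0.1}, {"la": 0.45, "va": 0.55}]
bg = {("<s>", "ka"): 0.5, ("ka", "la"): 0.6, ("ka", "va"): 0.01}
word = rescore_word(cands, bg)
```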
The writer-independent, lexicon-free segmentation-recognition approach developed in this thesis for online handwritten Tamil word recognition is promising. The best performance of 93.1% (achieved at the symbol level) is comparable to the highest accuracy reported in the literature for Tamil symbols. However, the latter is on a database of isolated symbols (the IWFHR competition test dataset), whereas our accuracy is on a database of 10000 words and is thus a product of segmentation and classifier accuracies. The recognition performance may be enhanced further by experimenting with and choosing the best set of features and classifiers. The word recognition performance could also be improved very significantly by using a lexicon. However, these issues are not addressed in this thesis. We hope that the lexicon-free experiments reported in this work will serve as a benchmark for future efforts.
Wed, 06 Aug 2014 18:30:00 GMT