etd@IISc Collection:
http://etd.iisc.ernet.in/2005/2546
2018-06-22T19:17:59Z
Projection based Variational Multiscale Methods for Incompressible Navier-Stokes Equations to Model Turbulent Flows in Time-dependent Domains
http://etd.iisc.ernet.in/2005/3714
Title: Projection based Variational Multiscale Methods for Incompressible Navier-Stokes Equations to Model Turbulent Flows in Time-dependent Domains
Authors: Pal, Birupaksha
Abstract: Numerical solution of differential equations whose solution fields contain a multitude of scales is one of the most challenging research areas, yet one in high demand in scientific and industrial applications. A natural approach for handling such problems is to separate the scales and approximate the solution of the segregated scales with appropriate numerical methods.
Variational multiscale method (VMS) is a predominant method in the paradigm of scale separation schemes.
In our work we have used the VMS technique to develop a numerical scheme for computing turbulent flows in time-dependent domains. VMS allows the entire range of scales in the flow field to be separated into two or three groups, thereby enabling a different numerical treatment for each group. In the context of computational fluid dynamics (CFD), VMS is a significant improvement over classical large eddy simulation (LES). VMS does away with the commutation errors that arise from filtering in LES. Further, in a three-scale VMS approach the subgrid-scale model can be confined to only a part of the resolved scales instead of affecting the entire range of resolved scales.
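A standard subgrid-scale closure used in such schemes is the Smagorinsky eddy viscosity model, nu_t = (C_s * delta)^2 * |S|, built from the resolved strain rate. A minimal sketch, with illustrative values for the constant C_s and the filter width delta (not values from the thesis):

```python
import numpy as np

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Eddy viscosity nu_t = (C_s * delta)**2 * |S|, where
    S = (grad u + grad u^T) / 2 is the resolved strain-rate tensor
    and |S| = sqrt(2 * S_ij * S_ij)."""
    s = 0.5 * (grad_u + grad_u.T)           # symmetric strain-rate tensor
    s_norm = np.sqrt(2.0 * np.sum(s * s))   # characteristic strain rate |S|
    return (c_s * delta) ** 2 * s_norm

# Pure shear du/dy = 1 gives |S| = 1, so nu_t reduces to (C_s * delta)**2
grad_u = np.array([[0.0, 1.0], [0.0, 0.0]])
nu_t = smagorinsky_viscosity(grad_u, delta=0.1)
```

The eddy viscosity is added to the molecular viscosity wherever the subgrid model acts; in the three-scale VMS approach above, that is only on the resolved small scales.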
The projection based VMS scheme that we have developed gives a robust and efficient method for solving problems of turbulent fluid flow in deforming domains, governed by the incompressible Navier-Stokes equations. In addition to the existing challenges due to turbulence, the computational complexity of the problem increases further when the considered domain is time-dependent. In this work, we have used an arbitrary Lagrangian-Eulerian (ALE) based VMS scheme to account for the domain deformation. In the proposed scheme, the large scales are represented by an additional tensor-valued space. The resolved large and small scales are computed in a single unified equation, and the effect of unresolved scales is confined to the resolved small scales by using a projection operator. The popular Smagorinsky eddy viscosity model is used to approximate the effects of unresolved scales. The ALE approach employs an elastic mesh update technique. Moreover, a computationally efficient scheme is obtained by choosing orthogonal finite element basis functions for the resolved large scales, which allows the ALE-VMS system matrix to be reformulated into the standard form of the NSE system matrix. Thus, any existing Navier-Stokes solver can be utilized for this scheme, with modifications. Further, the stability and error estimates of the scheme are derived using a linear model of the NSE. Finally, the proposed scheme has been validated by a number of numerical examples over a wide range of problems.
2018-06-14T18:30:00Z
Numerical Studies of Axially Symmetric Ion Trap Mass Analysers
http://etd.iisc.ernet.in/2005/3630
Title: Numerical Studies of Axially Symmetric Ion Trap Mass Analysers
Authors: Kotana, Appala Naidu
Abstract: In this thesis we have focussed on two types of axially symmetric ion trap mass analysers, viz., the quadrupole ion trap mass analyser and the toroidal ion trap mass analyser. We have undertaken three numerical studies in this thesis: one on quadrupole ion trap mass analysers and two on toroidal ion trap mass analysers. The first study is related to improving the sensitivity of quadrupole ion trap mass analysers operated in the resonance ejection mode. In the second study we have discussed methods to determine the multipole coefficients in toroidal ion trap mass analysers. The third study investigates the stability of ions in toroidal ion trap mass analysers.
The first study presents a technique to cause unidirectional ion ejection in a quadrupole ion trap mass spectrometer operated in the resonance ejection mode. In this technique a modified auxiliary dipolar excitation signal is applied to the endcap electrodes. This modified signal is a linear combination of two signals: the first is the nominal dipolar excitation signal applied across the endcap electrodes, and the second is the second harmonic of the first, with the amplitude of the second harmonic larger than that of the fundamental. We have investigated the effect of the following parameters on achieving unidirectional ion ejection: primary signal amplitude, ratio of the second harmonic amplitude to the primary signal amplitude, different operating points, different scan rates, different mass to charge ratios and different damping constants. In all these simulations unidirectional ejection of destabilized ions has been successfully achieved.
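The composite drive described above is simply the fundamental plus a larger-amplitude second harmonic. A minimal sketch of constructing such a signal (the frequency, amplitude and ratio below are illustrative, not values from the thesis):

```python
import numpy as np

def dipolar_excitation(t, f, a1, ratio):
    """Auxiliary dipolar signal: fundamental at frequency f plus its
    second harmonic with amplitude ratio * a1 > a1. Adding the second
    harmonic breaks the half-wave symmetry of the drive (the positive
    and negative half-cycles no longer mirror each other in time)."""
    return a1 * np.sin(2 * np.pi * f * t) + ratio * a1 * np.sin(4 * np.pi * f * t)

t = np.linspace(0.0, 1e-3, 2000, endpoint=False)   # 10 cycles at 10 kHz
s = dipolar_excitation(t, f=10e3, a1=0.5, ratio=2.0)
```

In the study, this waveform replaces the nominal single-frequency dipolar excitation across the endcaps.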
The second study presents methods to determine multipole coefficients for describing the potential in toroidal ion trap mass analysers. Three different methods are presented to compute the toroidal multipole coefficients. The first method uses a least squares fit and is useful when we have the ability to compute the potential at a set of points in the trapping region. In the second method we use the Discrete Fourier Transform of potentials on a circle in the trapping region. The third method uses the surface charge distribution obtained from the Boundary Element Method to compute these coefficients. Using these multipole coefficients we present (1) equations of ion motion in toroidal ion traps, (2) the Mathieu parameters in terms of multipole coefficients and (3) the secular frequency of ion motion in these traps. It has been shown that the secular frequency obtained from our method matches well with that obtained from numerical trajectory simulation.
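Of the three methods, the DFT approach is the easiest to sketch: for a potential expanded as a sum of terms A_n r^n cos(n*theta), sampling on a circle of radius r0 and taking a DFT isolates each A_n r0^n. The potential below is hypothetical, chosen so the recovered coefficients can be checked against known values:

```python
import numpy as np

# Hypothetical 2-D potential: quadrupole (n = 2) plus a weak
# octopole (n = 4) term, phi = sum_n A_n * r**n * cos(n * theta)
def phi(r, theta):
    return 1.0 * r ** 2 * np.cos(2 * theta) + 0.05 * r ** 4 * np.cos(4 * theta)

N, r0 = 64, 0.5
theta = 2.0 * np.pi * np.arange(N) / N
samples = phi(r0, theta)       # potential sampled on a circle in the trapping region

# Bin n of the normalized DFT holds A_n * r0**n / 2 (for 0 < n < N/2),
# so rescaling recovers the multipole coefficients directly
spec = np.fft.rfft(samples) / N
A = {n: 2.0 * spec[n].real / r0 ** n for n in (2, 4)}
# A[2] ≈ 1.0, A[4] ≈ 0.05
```

The same idea extends to the sine (B_n) coefficients through the imaginary part of the spectrum.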
The third study presents the stability of ions in practical toroidal ion trap mass analysers. Here we have taken up for investigation four geometries with apertures and truncated electrodes. The stability is obtained in the UDC-VRF plane and later converted into the A-Q plane of the Mathieu stability plot. Though the plots in terms of Mathieu parameters for these structures are qualitatively similar to the corresponding plot for linear ion trap mass analysers, there is a significant difference: the stability plots of these structures have regions of nonlinear resonances where ion motion is unstable. These resonances have been briefly investigated, and it is proposed that they occur on account of hexapole and octopole contributions to the field in these toroidal ion traps.
2018-05-28T18:30:00Z
Studies on Kernel Based Edge Detection and Hyper Parameter Selection in Image Restoration and Diffuse Optical Image Reconstruction
http://etd.iisc.ernet.in/2005/3615
Title: Studies on Kernel Based Edge Detection and Hyper Parameter Selection in Image Restoration and Diffuse Optical Image Reconstruction
Authors: Narayana Swamy, Yamuna
Abstract: Computational imaging has been playing an important role in understanding and analysing captured images. Image segmentation and restoration are both integral parts of computational imaging. The studies performed in this thesis are focused on developing novel algorithms for image segmentation and restoration. A study on the use of the Morozov Discrepancy Principle in Diffuse Optical Imaging is also presented here to show that hyper parameter selection can be performed with ease.
The Laplacian of Gaussian (LoG) and Canny operators use Gaussian smoothing before applying the derivative operator for edge detection in real images. The LoG kernel is based on the second derivative and is highly sensitive to noise compared to the Canny edge detector. A new edge detection kernel, called Helmholtz of Gaussian (HoG), which provides higher diffusivity, is developed in this thesis, and it is shown to be more robust to noise. The formulation of the developed HoG kernel is similar to that of LoG. It is also shown, both theoretically and experimentally, that LoG is a special case of HoG. This kernel, when used as an edge detector, exhibited superior performance compared to the LoG, Canny and wavelet based edge detectors for the standard test cases in both one and two dimensions.
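The HoG kernel itself is developed in the thesis; for reference, the classical LoG detector it generalizes can be sketched in one dimension, where the second derivative of a Gaussian responds to a step edge with a sign change at the edge location:

```python
import numpy as np

def log_kernel_1d(sigma, half_width):
    """1-D Laplacian of Gaussian: the second derivative of a Gaussian.
    The mean is subtracted so the kernel sums exactly to zero and flat
    image regions produce zero response."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    g2 = (x ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * np.exp(-x ** 2 / (2 * sigma ** 2))
    return g2 - g2.mean()

signal = np.r_[np.zeros(50), np.ones(50)]   # ideal step edge at index 50
response = np.convolve(signal, log_kernel_1d(sigma=2.0, half_width=8), mode="same")
# The response is zero on both flat regions and changes sign across the
# edge; the edge is located at the zero crossing of the response.
```

(Zero-padding by np.convolve creates a spurious response at the right boundary of the signal; in practice the image would be padded by replication before filtering.)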
The linear inverse problem encountered in the restoration of blurred noisy images is typically solved via Tikhonov minimization. The outcome (restored image) of such minimization is highly dependent on the choice of regularization parameter. In the absence of prior information about the noise levels in the blurred image, finding this regularization/hyper parameter in an automated way becomes extremely challenging. Available methods like Generalized Cross Validation (GCV) may not yield optimal results in all cases. A novel method that relies on the minimal residual method for finding the regularization parameter automatically is proposed here and is systematically compared with the GCV method. It is shown that the proposed method is superior to the GCV method in providing high quality restored images in cases where the noise levels are high.
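The minimal-residual selection rule is the thesis's contribution and is not reproduced here; the standard GCV baseline it is compared against can be sketched via the SVD, where the Tikhonov filter factors make both the residual and the effective degrees of freedom cheap to evaluate (toy operator and noise level below are illustrative):

```python
import numpy as np

def gcv_lambda(A, b, lambdas):
    """Generalized Cross Validation for Tikhonov regularization
    min ||A x - b||^2 + lam^2 ||x||^2: pick the lam minimizing
    G(lam) = ||A x_lam - b||^2 / trace(I - influence matrix)^2,
    evaluated via the SVD filter factors f_i = s_i^2/(s_i^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = len(b)
    best_lam, best_score = None, np.inf
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam ** 2)
        resid = np.sum(((1.0 - f) * beta) ** 2)   # column-space part of the residual
        score = resid / (m - np.sum(f)) ** 2      # trace(I - influence matrix) = m - sum(f)
        if score < best_score:
            best_lam, best_score = lam, score
    return best_lam

rng = np.random.default_rng(0)
A = np.diag([1.0, 0.1, 0.001])                    # ill-conditioned toy operator
b = A @ np.ones(3) + 1e-3 * rng.standard_normal(3)
lam = gcv_lambda(A, b, [1e-4, 1e-3, 1e-2, 1e-1, 1.0])
```

GCV needs no estimate of the noise level, which is exactly why it is the natural baseline when that information is absent.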
Diffuse optical tomography uses near infrared (NIR) light as the probing medium to recover distributions of tissue optical properties, with an ability to provide functional information about the tissue under investigation. As NIR light propagation in tissue is dominated by scattering, the image reconstruction problem (inverse problem) is non-linear and ill-posed, requiring advanced computational methods to compensate for this. An automated method for selection of the regularization/hyper parameter that incorporates the Morozov discrepancy principle (MDP) into the Tikhonov method is proposed and shown to be a promising method for dynamic Diffuse Optical Tomography.
2018-05-24T18:30:00Z
Learning Compact Architectures for Deep Neural Networks
http://etd.iisc.ernet.in/2005/3581
Title: Learning Compact Architectures for Deep Neural Networks
Authors: Srinivas, Suraj
Abstract: Deep neural networks with millions of parameters are at the heart of many state-of-the-art computer vision models. However, recent works have shown that models with a much smaller number of parameters can often perform just as well. A smaller model has the advantage of being faster to evaluate and easier to store, both of which are crucial for real-time and embedded applications. While prior work on compressing neural networks has looked at methods based on sparsity, quantization and factorization of neural network layers, we look at the alternate approach of pruning neurons.
Training neural networks is often described as a kind of 'black magic', as successful training requires setting the right hyper-parameter values (such as the number of neurons in a layer, the depth of the network, etc.). It is often not clear what these values should be, and these decisions often end up being either ad-hoc or driven by extensive experimentation. It would be desirable to set some of these hyper-parameters automatically for the user so as to minimize trial-and-error. Combining this objective with our earlier preference for smaller models, we ask the following question: for a given task, is it possible to come up with small neural network architectures automatically? In this thesis, we propose methods to achieve this.
The work is divided into four parts. First, given a neural network, we look at the problem of identifying important and unimportant neurons. We look at this problem in a data-free setting, i.e., assuming that the data the neural network was trained on is not available. We propose two rules for identifying wasteful neurons and show that these suffice in such a data-free setting. By removing neurons based on these rules, we are able to reduce model size without significantly affecting accuracy.
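The two rules themselves are the thesis's contribution; the intuition behind one data-free criterion, that a neuron duplicating another neuron's incoming weights is redundant, can be sketched as follows (a simplified illustration with a hypothetical tolerance, not the thesis's algorithm):

```python
import numpy as np

def merge_duplicate_neurons(W_in, W_out, tol=1e-6):
    """If two neurons have (near-)identical incoming weight vectors,
    their activations agree on every input, whatever the nonlinearity.
    One of them can therefore be deleted after adding its outgoing
    weights to the survivor's, leaving the layer's function unchanged.
    W_in: (neurons, inputs); W_out: (outputs, neurons), modified in place."""
    keep = list(range(W_in.shape[0]))
    i = 0
    while i < len(keep):
        j = i + 1
        while j < len(keep):
            a, b = keep[i], keep[j]
            if np.linalg.norm(W_in[a] - W_in[b]) < tol:
                W_out[:, a] += W_out[:, b]   # reroute b's contribution through a
                keep.pop(j)
            else:
                j += 1
        i += 1
    return W_in[keep], W_out[:, keep]

W_in = np.array([[1.0, 2.0], [1.0, 2.0], [3.0, 4.0]])   # neurons 0 and 1 coincide
W_out = np.array([[1.0, 2.0, 3.0]])
W_in_p, W_out_p = merge_duplicate_neurons(W_in, W_out)
# W_in_p keeps 2 of 3 neurons; W_out_p == [[3., 3.]]
```

No training data is consulted at any point, which is what makes such rules usable in the data-free setting described above.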
Second, we propose an automated learning procedure to remove neurons during the process of training. We call this procedure ‘Architecture-Learning’, as this automatically discovers the optimal width and depth of neural networks. We empirically show that this procedure is preferable to trial-and-error based Bayesian Optimization procedures for selecting neural network architectures.
Third, we connect ‘Architecture-Learning’ to a popular regularizer called ‘Dropout’, and propose a novel regularizer which we call ‘Generalized Dropout’. From a Bayesian viewpoint, this method corresponds to a hierarchical extension of the Dropout algorithm. Empirically, we observe that Generalized Dropout is a more flexible version of Dropout, and works in scenarios where Dropout fails.
Finally, we apply our procedure for removing neurons to the problem of removing weights in a neural network, and achieve state-of-the-art results in sparsifying neural networks.
2018-05-21T18:30:00Z