etd@IISc Collection:
http://hdl.handle.net/2005/33
2018-02-23T00:34:49Z
Comprehensive Seismic Hazard Analysis of India
http://hdl.handle.net/2005/3170
Title: Comprehensive Seismic Hazard Analysis of India
Authors: Kolathayar, Sreevalsa
Abstract: Planet Earth is restless, and its internal activity and the vibrations it produces cannot be controlled; these lead to natural hazards. Earthquakes are among the natural hazards that have affected mankind the most. Most casualties in earthquakes occur not because of the ground shaking itself, but because of poorly designed structures that cannot withstand the earthquake forces. Improper building construction techniques and high population density are the major causes of heavy earthquake damage. This damage can be reduced by following proper construction techniques that take into account the forces that future earthquakes may impose on a structure. Seismic hazard evaluation is therefore essential to estimate an optimal and reliable value of the possible earthquake ground motion during a specific time period. These predicted values serve as input for assessing the seismic vulnerability of an area, based on which new construction and the restoration of existing structures can be carried out.
A large number of devastating earthquakes have occurred in India in the past. The northern region of India, which lies along the boundary between the Indian and Eurasian plates, is seismically very active. The northeastward movement of the Indian plate has caused deformation in the Himalayan region, Tibet and Northeast India. Along the Himalayan belt, the Indian and Eurasian plates converge at a rate of about 50 mm/year (Bilham 2004; Jade 2004). The North East Indian (NEI) region is known as one of the most seismically active regions in the world. Peninsular India, in contrast, lies far from the plate boundary and is a stable continental region considered to be of moderate seismic activity. Even so, one of the world's deadliest earthquakes in a stable continental region occurred there (the 2001 Bhuj earthquake). The rapid northeastward drift of the Indian plate towards the Himalayas, combined with its low plate thickness, may be the cause of the high seismicity of the Indian region. The Bureau of Indian Standards published a seismic zonation map in 1962 and revised it in 1966, 1970, 1984 and 2002. The latest version of the seismic zoning map of India assigns four levels of seismicity to the entire country in terms of different zone factors. The main drawback of the Indian seismic zonation code (BIS-1893, 2002) is that it is based on past seismic activity rather than on a scientific seismic hazard analysis. Several seismic hazard studies taken up in recent years have shown that the hazard values given by BIS-1893 (2002) need to be revised (Raghu Kanth and Iyengar 2006; Vipin et al. 2009; Mahajan et al. 2009, among others). These facts necessitate a comprehensive study evaluating the seismic hazard of India and the development of a seismic zonation map of India based on Peak Ground Acceleration (PGA) values.
The objective of this thesis is to estimate the seismic hazard of the entire Indian region using updated seismicity data and several current methodologies.
The major outcomes of the thesis can be summarized as follows. An updated earthquake catalog, uniform in moment magnitude, has been prepared for India and adjoining areas for the period up to 2010. Region-specific magnitude scaling relations have been established for the study region, which facilitated the generation of a homogeneous earthquake catalog. By carefully converting the original magnitudes to unified MW magnitudes, a major obstacle to consistent assessment of seismic hazard in India has been removed. The earthquake catalog was declustered to remove foreshocks and aftershocks. Of the 203448 events in the raw catalog, 75.3% were found to be dependent events; the remaining 50317 events were identified as main shocks, of which 27146 were of MW ≥ 4. A completeness analysis of the catalog was carried out to estimate the completeness periods of different magnitude ranges. The earthquake catalog, containing the details of the earthquake events up to 2010, has been uploaded to the website.
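Window-based declustering of the kind described above is often done with Gardner–Knopoff style space-time windows. The sketch below is illustrative only: the window coefficients follow the commonly quoted Gardner–Knopoff (1974) scaling, and the flat-earth event format is a simplifying assumption, not the thesis's actual implementation.

```python
import math

def gk_windows(mw):
    """Illustrative Gardner-Knopoff style space-time windows.
    Returns (distance_km, time_days) within which smaller events are
    treated as dependent on a mainshock of magnitude mw (assumed coefficients)."""
    dist_km = 10 ** (0.1238 * mw + 0.983)
    if mw >= 6.5:
        time_days = 10 ** (0.032 * mw + 2.7389)
    else:
        time_days = 10 ** (0.5409 * mw - 0.547)
    return dist_km, time_days

def decluster(events):
    """events: list of (time_days, x_km, y_km, mw), planar coordinates for
    simplicity. An event is flagged as dependent if it falls inside the
    space-time window of a larger event; mainshocks are returned."""
    # process in descending magnitude so the largest events claim their windows first
    order = sorted(events, key=lambda e: -e[3])
    removed = set()
    mains = []
    for i, (t, x, y, m) in enumerate(order):
        if i in removed:
            continue
        mains.append((t, x, y, m))
        d_w, t_w = gk_windows(m)
        for j in range(i + 1, len(order)):
            tj, xj, yj, mj = order[j]
            if abs(tj - t) <= t_w and math.hypot(xj - x, yj - y) <= d_w:
                removed.add(j)
    return mains
```

Sorting by descending magnitude encodes the usual convention that the largest event of a cluster is the mainshock and everything inside its window is a fore- or aftershock.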
A quantitative study of the spatial distribution of the seismicity rate across India and its vicinity has been performed. The lower b values obtained in the shield regions imply that the energy released there comes mostly from large magnitude events. The b value of Northeast India and the Andaman-Nicobar region is around unity, which implies that comparable energy is released by smaller and larger events. The effect of aftershocks on the seismicity parameters was also studied. Maximum likelihood estimates of the b value from the raw and declustered earthquake catalogs differ significantly, since foreshocks and aftershocks contribute a larger proportion of low magnitude events. The inclusion of dependent events in the catalog thus affects the relative abundance of low and high magnitude earthquakes: greater inclusion of dependent events leads to higher b values and a higher activity rate. The seismicity parameters obtained from the declustered catalog are therefore preferred, as the declustered events tend to follow a Poisson distribution. Mmax does not change significantly, since it depends on the largest observed magnitude rather than on the inclusion of dependent events (foreshocks and aftershocks). The spatial variation of the seismicity parameters can be used as a basis for identifying regions of similar characteristics and for delineating regional seismic source zones.
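The maximum likelihood b-value estimates referred to above are commonly computed with Aki's (1965) estimator. The sketch below assumes magnitudes are already cut at a completeness magnitude Mc and ignores magnitude-binning corrections; the synthetic catalog is purely illustrative.

```python
import math
import random

def b_value_mle(mags, mc):
    """Aki (1965) maximum likelihood estimator:
    b = log10(e) / (mean(M) - Mc), for magnitudes M >= Mc."""
    m = [x for x in mags if x >= mc]
    return math.log10(math.e) / (sum(m) / len(m) - mc)

# Synthetic Gutenberg-Richter catalog with true b = 1.0: above Mc, the
# magnitude excess is exponential with rate b * ln(10).
random.seed(42)
b_true, mc = 1.0, 4.0
mags = [mc + random.expovariate(b_true * math.log(10)) for _ in range(20000)]
print(round(b_value_mle(mags, mc), 2))  # close to 1.0
```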
Further, regions of similar seismicity characteristics were identified based on fault alignment, earthquake event distribution and the spatial variation of the seismicity parameters. 104 regional seismic source zones were delineated; these are an essential input to seismic hazard analysis. Separate subsets of the catalog were created for each of these zones, and a seismicity analysis was performed for each zone after estimating its cutoff magnitude. The frequency-magnitude distribution plots of all the source zones can be found at http://civil.iisc.ernet.in/~sitharam . There is considerable variation in the seismicity parameters and in the magnitude of completeness across the study area. The b values for the various regions vary from a low of 0.5 to a high of 1.5, and the a values for the different zones vary from a low of 2 to a high of 10. The analysis of the seismicity parameters shows that there is considerable variation in the earthquake recurrence rate and Mmax across India. The coordinates of these source zones and the estimated seismicity parameters a, b and Mmax can be used directly as input to probabilistic seismic hazard analysis. The seismic hazard of the Indian landmass has then been evaluated through a state-of-the-art Probabilistic Seismic Hazard Analysis (PSHA) using the classical Cornell–McGuire approach with different source models and attenuation relations. The most recent knowledge of seismic activity in the region has been used to evaluate the hazard, incorporating the uncertainty associated with the different modeling parameters as well as spatial and temporal uncertainties. The PSHA has been performed with the currently available data and their best possible scientific interpretation, using a logic tree to explicitly account for epistemic uncertainty by considering alternative models (source models, maximum magnitudes in the hazard computations, and ground-motion attenuation relationships).
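The Cornell–McGuire computation named above amounts to integrating GMPE exceedance probabilities over the magnitude distribution of each source. The following sketch uses a single toy point source and an invented ground-motion relation purely to show the structure of the calculation; none of the coefficients come from the thesis.

```python
import math

def p_exceed(ln_a_target, mu, sigma):
    """P[ln PGA > target] for a lognormal GMPE with median exp(mu)."""
    z = (ln_a_target - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

def toy_gmpe(m, r_km):
    """Hypothetical attenuation relation: ln PGA(g) = c0 + c1*M - c2*ln(R + 10)."""
    return -3.5 + 0.9 * m - 1.2 * math.log(r_km + 10.0)

def annual_exceedance(a_g, nu, b, m_min, m_max, r_km, sigma=0.6, nm=200):
    """lambda(a): annual rate of PGA > a_g for one point source at distance r_km,
    with a truncated Gutenberg-Richter magnitude pdf and activity rate nu."""
    beta = b * math.log(10)
    norm = 1.0 - math.exp(-beta * (m_max - m_min))
    lam, dm = 0.0, (m_max - m_min) / nm
    for i in range(nm):
        m = m_min + (i + 0.5) * dm
        f_m = beta * math.exp(-beta * (m - m_min)) / norm  # truncated GR pdf
        lam += nu * f_m * p_exceed(math.log(a_g), toy_gmpe(m, r_km), sigma) * dm
    return lam

lam = annual_exceedance(a_g=0.1, nu=0.5, b=1.0, m_min=4.0, m_max=8.0, r_km=30.0)
print(lam)  # annual rate of exceeding 0.1 g; a hazard curve plots this against a_g
```

Evaluating this over a grid of acceleration levels and summing over all 104 source zones (with logic-tree weights over alternative models) is the shape of the full analysis.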
The hazard maps have been produced for horizontal ground motion at bedrock level (shear wave velocity ≥ 3.6 km/s) and compared with earlier studies such as Bhatia et al. (1999) for India and adjoining areas, Seeber et al. (1999) for Maharashtra state, Jaiswal and Sinha (2007) for Peninsular India, Sitharam and Vipin (2011) for South India, and Menon et al. (2010) for Tamil Nadu. It was observed that the seismic hazard is moderate in the Peninsular shield (except the Kutch region of Gujarat), whereas the hazard in North and Northeast India and the Andaman-Nicobar region is very high. The ground motion predicted in the present study not only gives hazard values for the design of structures but will also help in deciding the locations of important structures such as nuclear power plants.
The evaluation of surface level PGA values is of great importance in engineering design. The surface level PGA values were evaluated for the entire study area for four NEHRP site classes using appropriate amplification factors, based on the PGA values obtained from both DSHA and PSHA. If the site class at any location in the study area is known, the ground level PGA value can be obtained from the respective map. In the absence of VS30 values, the site classes can be identified from the local geological conditions. This provides a simplified methodology for evaluating surface level PGA values. The thesis also presents a VS30 characterization of the entire country based on the topographic gradient using existing correlations, and a surface level PGA contour map developed from it. Liquefaction is the conversion of formerly stable cohesionless soils into a fluid mass due to an increase in pore pressure, and is prominent in areas with groundwater near the surface and sandy soil. Soil liquefaction is observed during earthquakes because the sudden dynamic earthquake load increases the pore pressure. Evaluating liquefaction potential involves evaluating both the earthquake loading and the soil's resistance to liquefaction. In the present work, the spatial variation of the SPT value required to prevent liquefaction has been estimated for the whole of India using a probabilistic methodology.
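Mapping a measured VS30 to the NEHRP site classes mentioned above is a simple threshold lookup. The class boundaries below follow the commonly used NEHRP values; the thesis's exact class definitions should be checked against the source.

```python
def nehrp_site_class(vs30_mps):
    """NEHRP site class from VS30 (average shear-wave velocity over the top 30 m)."""
    if vs30_mps > 1500:
        return "A"   # hard rock
    if vs30_mps > 760:
        return "B"   # rock
    if vs30_mps > 360:
        return "C"   # very dense soil / soft rock
    if vs30_mps > 180:
        return "D"   # stiff soil
    return "E"       # soft clay soil

print(nehrp_site_class(550))  # a typical dense-soil site falls in class C
```

With a VS30 map derived from the topographic gradient, each grid point can be classified this way and the matching amplification factor applied to the bedrock PGA.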
To summarize, the major contributions of this thesis are the development of region-specific magnitude correlations suitable for the Indian subcontinent and an updated homogeneous earthquake catalog for India, uniform in the moment magnitude scale. The delineation and characterization of regional seismic source zones for a country as vast as India is a unique contribution, requiring careful observation and engineering judgement. Considering the complex seismotectonic setup of the country, the present work employed several methodologies (DSHA and PSHA) in analyzing the seismic hazard, using a logic tree to explicitly account for epistemic uncertainties by considering alternative models (for the source model, Mmax estimation and ground motion prediction equations) to estimate the PGA value at bedrock level. Further, a VS30 characterization of India was carried out based on the topographic gradient, as a first level approach, which facilitated the development of a surface level PGA map for the entire country using appropriate amplification factors. These factors make the present work unique and comprehensive, touching on various aspects of seismic hazard. It is hoped that the methodology and outcomes presented in this thesis will be beneficial to practicing engineers and researchers working in seismology and geotechnical engineering in particular, and to society as a whole.

2018-02-22T18:30:00Z
Monte Carlo Simulation Based Response Estimation and Model Updating in Nonlinear Random Vibrations
http://hdl.handle.net/2005/3162
Title: Monte Carlo Simulation Based Response Estimation and Model Updating in Nonlinear Random Vibrations
Authors: Radhika, Bayya
Abstract: The study of randomly excited nonlinear dynamical systems forms the focus of this thesis. We discuss two classes of problems: first, the characterization of the nonlinear random response of a system before it comes into existence and, second, the assimilation of measured responses into the mathematical model of the system after it comes into existence. The first class constitutes forward problems, while the latter belongs to the class of inverse problems. An outstanding feature of these problems is that they are almost never amenable to exact solutions. In the present study we tackle these two classes of problems using Monte Carlo simulation tools in conjunction with Markov process theory, Bayesian model updating strategies, and particle filtering based dynamic state estimation methods.
It is well recognized in literature that any successful application of Monte Carlo simulation methods to practical problems requires the simulation methods to be reinforced with effective means of controlling sampling variance. This can be achieved by incorporating any problem specific qualitative and (or) quantitative information that one might have about system behavior in formulating estimators for response quantities of interest. In the present thesis we outline two such approaches for variance reduction. The first of these approaches employs a substructuring scheme, which partitions the system states into two sets such that the probability distribution of the states in one of the sets conditioned on the other set become amenable for exact analytical solution. In the second approach, results from data based asymptotic extreme value analysis are employed to tackle problems of time variant reliability analysis and updating of this reliability. We exemplify in this thesis the proposed approaches for response estimation and model updating by considering wide ranging problems of interest in structural engineering, namely, nonlinear response and reliability analyses under stationary and (or) nonstationary random excitations, response sensitivity model updating, force identification, residual displacement analysis in instrumented inelastic structures under transient excitations, problems of dynamic state estimation in systems with local nonlinearities, and time variant reliability analysis and reliability model updating. We have organized the thesis into eight chapters and three appendices. A resume of contents of these chapters and appendices follows.
In the first chapter we aim to provide an overview of mathematical tools which form the basis for investigations reported in the thesis. The starting point of the study is taken to be a set of coupled stochastic differential equations, which are obtained after discretizing spatial variables, typically, based on application of finite element methods. Accordingly, we provide a summary of the following topics: (a) Markov vector approach for characterizing time evolution of transition probability density functions, which includes the forward and backward Kolmogorov equations, (b) the equations governing the time evolution of response moments and first passage times, (c) numerical discretization of governing stochastic differential equation using Ito-Taylor’s expansion, (d) the partial differential equation governing the time evolution of transition probability density functions conditioned on measurements for the study of existing instrumented structures,
(e) the time evolution of response moments conditioned on measurements based on governing equations in (d), and (f) functional recursions for evolution of multidimensional posterior probability density function and posterior filtering density function, when the time variable is also discretized. The objective of the description here is to provide an outline of the theoretical formulations within which the problems of response estimation and model updating are formulated in the subsequent chapters of the present thesis. We briefly state the class of problems, which are amenable for exact solutions. We also list in this chapter major text books, research monographs, and review papers relevant to the topics of nonlinear random vibration analysis and dynamic state estimation.
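Item (c) above, the Ito-Taylor discretization of the governing stochastic differential equation, can be illustrated with its simplest member, the Euler-Maruyama scheme, applied to a Duffing oscillator under white noise. The oscillator parameters here are arbitrary illustrative choices; the thesis itself employs the higher-order 1.5 strong Taylor scheme.

```python
import math
import random

def euler_maruyama_duffing(T=50.0, dt=1e-3, c=0.2, k=1.0, alpha=1.0, s=0.5, seed=1):
    """Euler-Maruyama for  x'' + c x' + k x + alpha x^3 = s w(t),
    written as the Ito system  dx1 = x2 dt,  dx2 = f(x1, x2) dt + s dW."""
    rng = random.Random(seed)
    x1, x2 = 0.0, 0.0
    sq = math.sqrt(dt)        # standard deviation of a Brownian increment
    xs = []
    for _ in range(int(T / dt)):
        dW = rng.gauss(0.0, sq)
        f = -c * x2 - k * x1 - alpha * x1 ** 3
        x1, x2 = x1 + x2 * dt, x2 + f * dt + s * dW
        xs.append(x1)
    return xs

xs = euler_maruyama_duffing()
# sample variance of the (approximately stationary) second half of the response
n = len(xs) // 2
tail = xs[n:]
mean = sum(tail) / len(tail)
print(sum((x - mean) ** 2 for x in tail) / len(tail))
```

Averaging such statistics over many independent sample paths is the plain Monte Carlo estimator whose variance the later chapters work to reduce.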
In Chapter 2 we provide a review of literature on solutions of problems of response analysis and model updating in nonlinear dynamical systems. The main focus of the review is on Monte Carlo simulation based methods for tackling these problems. The review accordingly covers numerical methods for approximate solutions of Kolmogorov equations and associated moment equations, variance reduction in simulation based analysis of Markovian systems, dynamic state estimation methods based on Kalman filter and its variants, particle filtering, and variance reduction based on Rao-Blackwellization.
In this review we chiefly cover papers that have contributed to the growth of the methodology. We also cover briefly, the efforts made in applying the ideas to structural engineering problems. Based on this review, we identify the problems of variance reduction using substructuring schemes and data based extreme value analysis and, their incorporation into response estimation and model updating strategies, as problems requiring further research attention. We also identify a range of problems where these tools could be applied.
We consider the development of a sequential Monte Carlo scheme, which incorporates a substructuring strategy, for the analysis of nonlinear dynamical systems under random excitations in Chapter 3. The proposed substructuring ensures that a part of the system states conditioned on the remaining states becomes Gaussian distributed and is amenable for an exact analytical solution. The use of Monte Carlo simulations is subsequently limited for the analysis of the remaining system states. This clearly results in reduction in sampling variance since a part of the problem is tackled analytically in an exact manner. The successful performance of the proposed approach is illustrated by considering response analysis of a single degree of freedom nonlinear oscillator under random excitations. Arguments based on variance decomposition result and Rao-Blackwell theorems are presented to demonstrate that the proposed variance reduction indeed is effective.
In Chapter 4, we modify the sequential Monte Carlo simulation strategy outlined in the preceding chapter to incorporate questions of dynamic state estimation when data on measured responses become available. Here too, the system states are partitioned into two groups such that the states in one group become Gaussian distributed when conditioned on the states in the other group. The conditioned Gaussian states are subsequently analyzed exactly using the Kalman filter and, this is interfaced with the analysis of the remaining states using sequential importance sampling based filtering strategy. The development of this combined Kalman and sequential importance sampling filtering method constitutes one of the novel elements of this study. The proposed strategy is validated by considering the problem of dynamic state estimation in linear single and multi-degree of freedom systems for which exact analytical solutions exist.
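For the linear validation cases mentioned above, the exact solution is the Kalman filter. A scalar sketch of its predict-update recursion follows; the state model and noise values are arbitrary illustrative choices, not the systems studied in the thesis.

```python
import random

def kalman_1d(ys, a=0.9, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Scalar Kalman filter for  x_k = a x_{k-1} + v_k,  y_k = x_k + w_k,
    with v ~ N(0, q) and w ~ N(0, r). Returns the posterior means."""
    m, p = m0, p0
    means = []
    for y in ys:
        m, p = a * m, a * a * p + q        # predict
        K = p / (p + r)                    # Kalman gain
        m = m + K * (y - m)                # update with the innovation
        p = (1.0 - K) * p
        means.append(m)
    return means

# synthetic truth and noisy measurements
rng = random.Random(0)
x, xs, ys = 0.0, [], []
for _ in range(500):
    x = 0.9 * x + rng.gauss(0, 0.1 ** 0.5)
    xs.append(x)
    ys.append(x + rng.gauss(0, 0.5 ** 0.5))

est = kalman_1d(ys)
rmse_filter = (sum((e - t) ** 2 for e, t in zip(est, xs)) / len(xs)) ** 0.5
rmse_raw = (sum((y - t) ** 2 for y, t in zip(ys, xs)) / len(xs)) ** 0.5
print(rmse_filter < rmse_raw)  # the filter improves on the raw measurements
```

In the combined scheme of Chapter 4, a recursion of exactly this form handles the conditionally Gaussian states, while sequential importance sampling handles the rest.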
In Chapter 5, we consider the application of the tools developed in Chapter 4 for a class of wide ranging problems in nonlinear random vibrations of existing systems. The nonlinear systems considered include single and multi-degree of freedom systems, systems with memoryless and hereditary nonlinearities, and stationary and nonstationary random excitations. The specific applications considered include nonlinear dynamic state estimation in systems with local nonlinearities, estimation of residual displacement in instrumented inelastic dynamical system under transient random excitations, response sensitivity model updating, and identification of transient seismic base motions based on measured responses in inelastic systems. Comparisons of solutions from the proposed substructuring scheme with corresponding results from direct application of particle filtering are made and a satisfactory mutual agreement is demonstrated.
We consider next questions on time variant reliability analysis and corresponding model updating in Chapters 6 and 7, respectively. The research effort in these studies is focused on exploring the application of data based asymptotic extreme value analysis to the problems at hand. Accordingly, we investigate reliability of nonlinear vibrating systems under stochastic excitations in Chapter 6 using a two-stage Monte Carlo simulation strategy. For systems with white noise excitation, the governing equations of motion are interpreted as a set of Ito stochastic differential equations. It is assumed that the probability distribution of the maximum over a specified time duration in the steady state response belongs to the basin of attraction of one of the classical asymptotic extreme value distributions. The first stage of the solution strategy consists of selecting the form of the extreme value distribution based on hypothesis testing, and the next stage involves estimating the parameters of the relevant extreme value distribution. Both stages are implemented using data from limited Monte Carlo simulations of the system response. The proposed procedure is illustrated with examples of linear/nonlinear systems with single/multiple degrees of freedom driven by random excitations. The predictions from the proposed method are compared with the results from large scale Monte Carlo simulations, and also with the classical analytical results, when available, from the theory of out-crossing statistics. Applications of the proposed method to vibration data obtained under laboratory conditions are also discussed.
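The two-stage strategy above, choosing an extreme value family and then estimating its parameters from limited simulation data, can be illustrated with a moment-based Gumbel fit to block maxima. Using method-of-moments (rather than the thesis's hypothesis-testing and estimation procedure) and a Gaussian surrogate response are simplifying assumptions.

```python
import math
import random

def fit_gumbel(maxima):
    """Method-of-moments Gumbel fit: scale = s*sqrt(6)/pi, loc = mean - gamma*scale."""
    n = len(maxima)
    mean = sum(maxima) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in maxima) / (n - 1))
    scale = s * math.sqrt(6) / math.pi
    loc = mean - 0.5772156649 * scale  # Euler-Mascheroni constant
    return loc, scale

def p_no_exceed(level, loc, scale):
    """Gumbel CDF: probability that the block maximum stays below `level`."""
    return math.exp(-math.exp(-(level - loc) / scale))

# block maxima of a surrogate response: maximum of 1000 standard normal
# samples per block, over 200 simulated blocks
rng = random.Random(7)
maxima = [max(rng.gauss(0, 1) for _ in range(1000)) for _ in range(200)]
loc, scale = fit_gumbel(maxima)
print(p_no_exceed(4.5, loc, scale))  # estimated reliability against threshold 4.5
```

The fitted distribution then extrapolates reliability estimates to thresholds rarely or never crossed in the limited simulation budget.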
In Chapter 7 we consider the problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations. Here we assume that samples of dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes’ theorem. The uncertainties present in measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models and allowance is made for ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data based extreme value analysis. The possibility of using conditional simulation strategies, when applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on limited amount of pertinent Monte Carlo simulations.
A summary of the contributions made and a few suggestions for future work are presented in Chapter 8.
The thesis also contains three appendices. Appendix A provides details of the order 1.5 strong Taylor scheme that is employed extensively at several places in the thesis. The formulary pertaining to the bootstrap and sequential importance sampling particle filters is provided in Appendix B. Some of the results on characterizing conditional probability density functions that have been used in the development of the combined Kalman and sequential importance sampling filter in Chapter 4 are elaborated in Appendix C.

2018-02-21T18:30:00Z
Numerical Investigation of Masonry Infilled RC Frames Subjected to Seismic Loading
http://hdl.handle.net/2005/3156
Title: Numerical Investigation of Masonry Infilled RC Frames Subjected to Seismic Loading
Authors: Manju, M A
Abstract: Reinforced concrete frames infilled with brick or concrete block masonry are the most common type of structure in multi-storeyed construction, especially in developing countries. Usually the infill walls are treated as non-structural elements even though they significantly alter the lateral stiffness and strength of the frame. Approximately 80% of the structural cost of earthquake damage is attributable to damage of infill walls and the consequent damage to doors, windows and other installations. Despite their broad application and economic significance, infill walls are often excluded from the analysis because of the design complexity and the lack of a suitable theory. In seismic areas, however, ignoring the infill-frame interaction is not safe, because the change in stiffness, and the consequent change in the seismic demand of the composite structural system, is not negligible. The relevant experimental findings show a considerable reduction in the response of infilled frames under reverse cyclic loading. This behaviour is caused by the rapid degradation of stiffness and strength, and the low energy dissipation capacity, resulting from the brittle and sudden damage of unreinforced masonry infill walls. Though various national and international codes of practice have incorporated some of the research outcomes as design guidelines, there is need and scope for further refinement.
In the initial part of this work, numerical modelling and linear elastic analysis of masonry infilled RC frames have been carried out. A multi-storey, multi-bay frame infilled with masonry panels is considered for the study, using both macro and micro modelling strategies. Seismic loading is considered, and an equivalent static analysis as suggested in IS 1893 (2002) is performed. The results show that the stiffness of the composite structure increases due to the confinement effect of the infill panels on the bounding frame. A parametric study is conducted to investigate the influence of the size and location of openings, the presence or absence of infill panels in a particular storey, and elevation irregularity in terms of floor height. The results show that the interaction of the infill panels changes the seismic response of the composite structure significantly. The presence of openings further changes the seismic behaviour: larger openings increase the natural period and introduce new failure mechanisms. The absence of infill in a particular storey (an elevation irregularity) makes that storey drift more than the adjacent storeys. Since structural irregularities considerably influence the seismic behaviour of a building, care should be taken during the construction and renovation of such buildings in order to retain the advantage of the increased strength and stiffness provided by the infill walls.
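The equivalent static analysis mentioned above reduces, for base shear, to Vb = Ah × W with Ah = (Z/2)(I/R)(Sa/g). The sketch below follows my reading of the IS 1893 (Part 1):2002 coefficients (zone factor Z, medium-soil spectrum shape) and should be verified against the code text; the building weight and period are hypothetical.

```python
def sa_by_g_medium_soil(T):
    """Design spectral acceleration coefficient Sa/g for medium soil,
    per the IS 1893 (Part 1):2002 response spectrum shape (assumed here)."""
    if T < 0.1:
        return 1.0 + 15.0 * T
    if T <= 0.55:
        return 2.5
    return 1.36 / T

def design_base_shear(W_kN, T, Z=0.24, I=1.0, R=5.0):
    """Equivalent static design base shear Vb = Ah * W,
    with Ah = (Z/2) * (I/R) * (Sa/g). Z = 0.24 corresponds to seismic zone IV."""
    Ah = (Z / 2.0) * (I / R) * sa_by_g_medium_soil(T)
    return Ah * W_kN

# hypothetical building: seismic weight 10000 kN, fundamental period 0.4 s
print(design_base_shear(W_kN=10000.0, T=0.4))  # design base shear in kN
```

Because the infill shortens the period and typically holds Sa/g on the constant-acceleration plateau, the stiffer infilled system attracts a larger share of seismic force, which is the interaction effect the parametric study quantifies.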
A nonlinear dynamic analysis of masonry infilled RC frames is presented next. Material nonlinearity is considered in the finite element modelling of both masonry and concrete, and a concrete damage plasticity model is employed to capture the degradation of stiffness under reverse cyclic loading. A parametric study varying the same parameters as in the linear analysis is conducted. It is seen that the fundamental period calculation of infilled frames by the conventional empirical formulae needs to be revisited for a better understanding of the real seismic behaviour of infilled frames. The enhanced lateral stiffness due to the presence of the infill panels attracts larger forces and causes damage to the composite system during seismic loading. The elevation irregularities studied include the absence of infill panels in a particular storey; such a soft storey shows a tendency for the adjacent columns to fail in shear because of the large drift relative to the other storeys, and the interstorey drift ratios of soft storeys are found to exceed the limiting values. However, this model could not capture the separation at the interfaces and the related failure mechanisms.
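The conventional empirical period formulae questioned above are, as I read IS 1893 (Part 1):2002, Ta = 0.075 h^0.75 for bare RC moment frames and Ta = 0.09 h / sqrt(d) for frames with brick infill; both expressions, and the example dimensions, are assumptions worth checking against the code.

```python
def period_bare_rc_frame(h_m):
    """Empirical fundamental period of a bare RC moment frame (h in metres)."""
    return 0.075 * h_m ** 0.75

def period_infilled_frame(h_m, d_m):
    """Empirical period for a frame with brick infill; d is the base
    dimension (m) along the direction of shaking."""
    return 0.09 * h_m / d_m ** 0.5

h, d = 30.0, 20.0  # hypothetical 10-storey building
print(period_bare_rc_frame(h), period_infilled_frame(h, d))
# the infill formula yields a shorter period, i.e. a stiffer structure
```

The gap between these code values and the periods recovered from the nonlinear finite element models is the discrepancy the chapter points to.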
To improve the nonlinear model, a contact surface at the frame-infill interface is considered for a qualitative analysis. A one-bay, one-storey infilled frame is selected, with the material characteristics kept the same as in the nonlinear model. The contact surface at the interface was assigned a hard contact property with pressure-overclosure relations and suitable friction values. This model could simulate the formation of the compressive diagonal strut and its switching to the opposite diagonal under reverse cyclic loading, and it showed indications of the corner crushing and diagonal cracking failure modes. The frame with a central opening showed stress accumulation near the corners of the opening.
Next, the micro modelling strategy for masonry suggested by Lourenco is studied. The interface element can be used at the masonry panel-concrete frame interface as well as at the expanded block-to-block interfaces within the masonry. A cap plasticity model (a modified Drucker-Prager model for geological materials) can be used to describe the behaviour of the masonry (interface cracking, slipping and shearing) under earthquake loading, with the blocks defined as elastic material with a potential crack at the centre. However, further experimental investigation is needed to calibrate this model.
It is necessary to exploit the beneficial effects of infills and to mitigate their ill-effects. Infill panels are, in any case, inevitable for functional reasons such as the division of space and the building envelope. Given their contributions to lateral stiffness, strength and energy dissipation capacity, the deliberate use of infill panels is proposed as a wise solution for reducing the seismic vulnerability of multi-storey buildings.

2018-02-20T18:30:00Z
Uncertainty Based Damage Identification and Prediction of Long-Time Deformation in Concrete Structures
http://hdl.handle.net/2005/3143
Title: Uncertainty Based Damage Identification and Prediction of Long-Time Deformation in Concrete Structures
Authors: Biswal, Suryakanta
Abstract: Uncertainties are present in the inverse analysis of damage identification with respect to the given measurements: mainly modelling uncertainties and measurement uncertainties. Modelling uncertainties arise from constructing a representative model of the real structure through finite element modelling and from representing damage in the real structure through changes in the material parameters of the finite element model (assuming a smeared crack approach). Measurement uncertainties are always present in the measurements, regardless of the accuracy with which they are made or the precision of the instruments used. The modelling errors in the finite element model are assumed to be encompassed in the updated uncertain parameters of the model, given the uncertainties in the measurements and the prior uncertainties of the parameters. The uncertainties in the direct measurement data are propagated to the estimated output data. Empirical models from codal provisions and standard recommendations are normally used for the prediction of long-time deformations in concrete structures. Uncertainties are also present in the creep and shrinkage models, in the parameters of these models, in the shrinkage and creep mechanisms, in the environmental conditions, and in the in-situ measurements. All these uncertainties need to be considered in damage identification and in the prediction of long-time deformations in concrete structures. In the context of modelling uncertainty, uncertainties can be categorized as aleatory or epistemic. Aleatory uncertainty deals with the irresolvable indeterminacy about how the uncertain variable will evolve
over time, whereas epistemic uncertainty deals with lack of knowledge. In the field of damage detection and prediction of long time deformations, aleatory uncertainty is modeled through probabilistic analysis, whereas epistemic uncertainty can be modeled through (1) Interval analysis (2) Ellipsoidal modeling (3) Fuzzy analysis (4) Dempster-Shafer evidence theory or (5) Imprecise probability. Many a times it is di cult to determine whether a particular uncertainty is to be considered as an aleatory or as an epistemic uncertainty, and the model builder makes the distinction. The model builder makes the choice based on the general state of scientific knowledge, on the practical need for limiting the model sophistication to a significant engineering importance, and on the errors associated with the measurements.
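As a concrete illustration of the interval-analysis option listed above, epistemic bounds can be propagated through a simple structural relation by endpoint analysis. The following sketch is illustrative only; the force and stiffness values are hypothetical and not taken from the thesis.

```python
# Minimal sketch of interval arithmetic for epistemic uncertainty.
# All numerical values below are hypothetical placeholders.

def interval_add(a, b):
    """Sum of two intervals [a_lo, a_hi] + [b_lo, b_hi]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Product of two intervals: min/max over the endpoint products."""
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

# Example: a load and a stiffness each known only within bounds;
# the deflection bound d = F / k follows from endpoint analysis.
F = (9.5, 10.5)   # force, kN (hypothetical bounds)
k = (1.8, 2.2)    # stiffness, kN/mm (hypothetical bounds)
quotients = [F[0]/k[0], F[0]/k[1], F[1]/k[0], F[1]/k[1]]
d = (min(quotients), max(quotients))
print(d)  # guaranteed bounds on the deflection, mm
```

Endpoint analysis is exact here because the deflection is monotone in both inputs; for non-monotone responses, interval methods must guard against overestimation of the bounds.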
Measurement uncertainty can be stated as the dispersion of real data resulting from systematic error (instrumental error, environmental error, observational error, human error, drift in measurement, measurement of the wrong quantity) and random error (all errors apart from systematic errors). Most instrumental errors quoted by manufacturers are given as plus-or-minus ranges and are therefore better represented through interval bounds. The vagueness involved in representing human error, observational error, and drift in measurement can likewise be represented through interval bounds. Deliberate measurement of the wrong quantity through cheaper and more convenient measurement units can lead to bad-quality data. Data quality can be handled naturally through interval analysis, with good-quality data having narrow interval bounds and bad-quality data having wide interval bounds. The environmental error, the electronic noise arising from transmitting the data, and the random errors can be represented through probability distribution functions. Thus a major part of the measurement uncertainty is better represented through interval bounds, while the remainder is better represented through probability distributions. The uncertainties in the direct measurement data are propagated to the estimated output data (in damage identification techniques, the damage parameters; in long-time deformation, the uncertain parameters of the deformation models, which are then used for the prediction of long-time deformations). Uncertainty based damage identification techniques and the prediction of long-time deformations in concrete structures require further study when the measurement uncertainties are expressed through interval bounds only, or through both intervals and probabilities using imprecise techniques.
The thesis is divided into six chapters. Chapter 1 provides a review of existing literature on uncertainty based techniques for damage identification and prediction of long-time deformations in concrete structures. A brief review of uncertainty based methods for engineering applications is made, with special emphasis on the need for interval analysis and imprecise probability in modelling uncertainties in damage identification techniques. The review identifies that the available techniques for damage identification, where the uncertainties in the measurements and in the structural and material parameters are expressed in terms of interval bounds, lack efficiency when the size of the damaged parameter vector is large. Estimating the uncertainties in the damage parameters when the uncertainties in the measurements are expressed through imprecise probability is also identified as a problem to be considered in this thesis. Also noted is the need for estimating the short-time period, which in turn helps in accurate prediction of long-time deformations in concrete structures, along with a cost-effective and easy-to-use system for measuring the existing prestress forces at various time instances within the short-time period. The review identifies that most modellers and analysts have been inclined to select a single simulation model for the long-time deformations resulting from creep, shrinkage and relaxation, rather than taking all the possibilities into consideration; such selection rests on the hardly realistic assumption that the correct model can be identified with certainty, and the lack of confidence associated with model selection introduces uncertainty into the given model set. Developing a single best model out of all the available deformation models, when uncertainties are present in the models, in the measurements and in the parameters of each model, is also identified as a problem that will be considered in this thesis.
In Chapter 2, an algorithm is proposed, adapting the existing modified Metropolis-Hastings algorithm, for estimating the posterior probability of the damage indices as well as the posterior probability of the bounds of the interval parameters, when the measurements are given in terms of interval bounds. A damage index is defined for each element of the finite element model, considering the parameters of each element to be intervals. Methods are developed for evaluating response bounds in the finite element software ABAQUS when the parameters of the finite element model are intervals. Illustrative examples include reinforced concrete beams with three damage scenarios, namely (i) loss of stiffness, (ii) loss of mass, and (iii) loss of bond between concrete and reinforcement steel, that have been tested in our laboratory. Comparison of the predictions from the proposed method with those obtained from Bayesian analysis and an interval optimization technique shows improved accuracy and computational efficiency, in addition to better representation of measurement uncertainties through interval bounds.
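The core idea of sampling a posterior when the measurement is an interval can be sketched with a generic Metropolis-Hastings loop: the likelihood is flat wherever the model response falls inside the measured bounds and decays outside them. This is a toy stand-in with hypothetical numbers, not the modified algorithm developed in the thesis.

```python
# Generic Metropolis-Hastings sketch: updating a single stiffness-like
# parameter from an interval-bounded frequency measurement. The model,
# prior range, and measurement interval are hypothetical placeholders.
import math
import random

random.seed(1)
f_meas = (9.8, 10.2)  # measured frequency interval, Hz (hypothetical)

def model_freq(k):
    """Toy forward model: frequency grows with the square root of stiffness."""
    return math.sqrt(k)

def log_likelihood(k):
    """Flat inside the measurement interval, Gaussian decay outside it."""
    f = model_freq(k)
    if f_meas[0] <= f <= f_meas[1]:
        return 0.0
    d = min(abs(f - f_meas[0]), abs(f - f_meas[1]))
    return -0.5 * (d / 0.05) ** 2

def log_prior(k):
    """Uniform prior on a plausible stiffness range."""
    return 0.0 if 50.0 < k < 150.0 else -math.inf

k = 100.0
samples = []
for _ in range(20000):
    k_new = k + random.gauss(0.0, 2.0)          # random-walk proposal
    log_a = (log_likelihood(k_new) + log_prior(k_new)
             - log_likelihood(k) - log_prior(k))
    if math.log(random.random()) < log_a:        # accept/reject step
        k = k_new
    samples.append(k)

post = samples[5000:]                            # discard burn-in
post_mean = sum(post) / len(post)
print(post_mean)  # concentrates where sqrt(k) lies in the interval
```

The posterior mass concentrates on stiffness values whose predicted frequency falls inside the measured interval, roughly k between 96 and 104 in this toy setup.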
Imprecise probability based methods are developed in Chapter 3 for damage identification using finite element model updating in concrete structures, when the uncertainties in the measurements and parameters are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurements only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Illustrative examples include reinforced concrete beams and prestressed concrete beams tested in our laboratory.
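The essence of an imprecise (robust-Bayes) update, imprecision carried through the prior into bounds on the posterior, can be shown in a few lines with a conjugate model: the same data are updated under a whole set of priors and the envelope of posterior quantities is reported. The Beta-Binomial setup below is purely illustrative and is not the finite element updating of the thesis.

```python
# Sketch of the robust-Bayes idea behind imprecise probability: run one
# Bayesian update under a *set* of priors and report posterior bounds.
# The inspection counts and prior set are hypothetical placeholders.

# Suppose a damage indicator is observed in 3 of 10 inspections.
n, k = 10, 3

# Imprecise Beta prior: the pseudo-counts (a, b) range over a set,
# expressing imprecision about the prior rather than a single choice.
priors = [(a, b) for a in (0.5, 1.0, 2.0) for b in (0.5, 1.0, 2.0)]

# Conjugate update: posterior mean of a Beta(a, b) prior after k
# successes in n trials is (a + k) / (a + b + n).
post_means = [(a + k) / (a + b + n) for a, b in priors]
print(min(post_means), max(post_means))  # bounds on P(damage)
```

Instead of one posterior probability, the analyst reports an interval of posterior means, here [0.28, 0.40], whose width reflects how much the conclusion depends on the imprecisely known prior.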
In Chapter 4, a steel frame is designed to measure the existing prestressing force in concrete beams and slabs when embedded inside the concrete members. The steel frame works on the principles of a vibrating wire strain gauge and is referred to as a vibrating beam strain gauge (VBSG). The existing strain in the VBSG is evaluated using both frequency data on the stretched member and the static strain corresponding to a fixed static load, measured using electrical strain gauges. The crack reopening load method is used to compute the existing prestressing force in the concrete members, which is then compared with the existing prestressing force obtained from the VBSG at that section. Digital image correlation based surface deformation, and the change in neutral axis monitored by placing electrical strain gauges across the cross section, are used to compute the crack reopening load accurately.
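The vibrating wire principle that the VBSG builds on relates the fundamental frequency of a taut wire to its tensile strain: f = (1/2L)·sqrt(Eε/ρ), so ε = 4ρL²f²/E. The sketch below applies this standard relation with typical steel properties; the dimensions and calibration are illustrative assumptions, not the thesis's gauge.

```python
# Frequency-to-strain relation for a vibrating wire strain gauge.
# A taut wire of free length L vibrates at f = (1/2L)*sqrt(E*eps/rho),
# which inverts to eps = 4*rho*L**2*f**2 / E. The property values are
# typical for steel and purely illustrative.

RHO = 7850.0   # density of steel, kg/m^3
E = 2.0e11     # Young's modulus of steel, Pa
L = 0.15       # free length of the wire, m (hypothetical)

def strain_from_frequency(f_hz):
    """Tensile strain implied by the wire's fundamental frequency (Hz)."""
    return 4.0 * RHO * L**2 * f_hz**2 / E

f = 800.0  # measured fundamental frequency, Hz (hypothetical)
eps = strain_from_frequency(f)
print(eps * 1e6, "microstrain")  # about 2261 microstrain for these values
```

Because strain varies with the square of frequency, small frequency shifts resolve strain changes well, which is why the frequency reading can be cross-checked against the static strain from electrical gauges.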
Long-time deformations in concrete structures are estimated in Chapter 5 using short-time measurements of deformation responses, when uncertainties are present in the measurements, in the deformation models and in the parameters of the deformation models. The short-time period is defined as the least time for which measurements, if made available, suffice for estimating the parameters of the deformation models used to predict the long-time deformations. The short-time period is evaluated using stochastic simulations in which all the parameters of the deformation models are defined as random variables. The existing deformation models are empirical in nature, each developed from an arbitrary selection among the available experimental data sets, and each model carries some information about the deformation patterns in concrete structures. Uncertainty based model averaging is therefore performed to obtain a single best model for predicting the long-time deformations in concrete structures. Three types of uncertainty models are considered, namely probability models, interval models and imprecise probability models. Illustrative examples consider experiments from the Northwestern University database available in the literature, and prestressed concrete beams and slabs cast in our laboratory, for the prediction of long-time prestress losses.
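In the probabilistic variant of model averaging, competing deformation models can be weighted by how well each fits the short-time measurements, and the long-time prediction is the weighted combination. The sketch below is a minimal illustration with two made-up model forms and hypothetical data; it is not the averaging scheme or the data of the thesis.

```python
# Sketch of probabilistic model averaging: weight competing creep-model
# predictions by their Gaussian likelihood on short-time data, then
# combine long-time predictions. Models and data are hypothetical.
import math

t_short = [7, 14, 28, 60]            # measurement ages, days
obs = [120.0, 150.0, 175.0, 200.0]   # measured deflections, microns

def model_a(t):
    """Hypothetical logarithmic deformation model."""
    return 55.0 * math.log(t) + 15.0

def model_b(t):
    """Hypothetical power-law deformation model."""
    return 40.0 * t ** 0.4

def log_evidence(model, sigma=10.0):
    """Gaussian log-likelihood of the short-time data under one model."""
    return sum(-0.5 * ((model(t) - o) / sigma) ** 2
               for t, o in zip(t_short, obs))

la, lb = log_evidence(model_a), log_evidence(model_b)
wa = 1.0 / (1.0 + math.exp(lb - la))  # normalized weight of model A
wb = 1.0 - wa

# Averaged long-time prediction at 30 years (~10950 days):
t_long = 10950
pred = wa * model_a(t_long) + wb * model_b(t_long)
print(wa, wb, pred)
```

The averaged prediction always lies between the individual model predictions, and its dependence on the weights is one way the model-set uncertainty noted in Chapter 1 enters the long-time estimate.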
A summary of the contributions made in this thesis, together with a few suggestions for future research, is presented in Chapter 6. Finally, the references that were studied are listed.

2018-02-19T18:30:00Z