etd@IISc > Division of Electrical Sciences > Computer Science and Automation (csa)
Title: Learning Algorithms Using Chance-Constrained Programs
Authors: Jagarlapudi, Saketha Nath
Advisors: Bhattacharyya, Chiranjib
Keywords: Machine Learning; Ordinal Regression (OR); Chance-Constrained Programming (CCP); Classification - Algorithms; Ordinal Regression - Algorithms; Machine Learning - Algorithms; Second Order Cone Programs (SOCPs); Maximum Margin Classification
Submitted Date: Jul-2008
Series/Report no.: G22339
Abstract: This thesis explores Chance-Constrained Programming (CCP) in the context of learning. It is shown that chance-constrained approaches lead to improved algorithms for three important learning problems: classification with specified error rates, large-dataset classification, and Ordinal Regression (OR). Using moments of the training data, the CCPs are posed as Second Order Cone Programs (SOCPs). Novel iterative algorithms for solving the resulting SOCPs are also derived. Borrowing ideas from robust optimization theory, the proposed formulations are made robust to moment estimation errors.
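The moment-based reduction of a chance constraint to a cone constraint can be sketched in generic notation (a standard argument via the one-sided multivariate Chebyshev inequality; the symbols below are illustrative, not taken verbatim from the thesis):

```latex
% For a random vector X with mean \mu and covariance \Sigma, the
% one-sided multivariate Chebyshev inequality yields the implication
\Pr\bigl(w^\top X + b \ge 0\bigr) \ge \eta
\;\Longleftarrow\;
w^\top \mu + b \;\ge\; \kappa \sqrt{w^\top \Sigma\, w},
\qquad \kappa = \sqrt{\tfrac{\eta}{1-\eta}}.
```

The right-hand condition is a second order cone constraint in $(w, b)$, which is how moment-based chance constraints reduce to SOCPs.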
A maximum margin classifier with specified false-positive and false-negative rates is derived. The key idea is to employ a chance constraint for each class, which ensures that the actual misclassification rates do not exceed the specified values. The formulation is applied to the case of biased classification.
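A quick numerical illustration of why such per-class chance constraints control error rates: the deterministic surrogate `w·mu + b >= kappa*sqrt(w·Sigma·w)` with `kappa = sqrt(eta/(1-eta))` guarantees, for any distribution with those moments, that `w·X + b >= 0` holds with probability at least `eta`. The sketch below (generic names, not code from the thesis) checks this empirically on Gaussian data:

```python
import numpy as np

# Illustrative check (not from the thesis): enforce the moment-based
# surrogate with equality and measure how often the underlying chance
# constraint Pr(w.X + b >= 0) >= eta is satisfied on sampled data.
rng = np.random.default_rng(0)
d, n, eta = 3, 200_000, 0.9
kappa = np.sqrt(eta / (1.0 - eta))       # Chebyshev safety factor

mu = np.array([2.0, 1.0, 0.5])           # assumed class mean
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d                      # a valid covariance matrix

w = np.array([1.0, 0.5, -0.2])
# choose b so the second order cone surrogate holds with equality:
#   w.mu + b = kappa * sqrt(w.Sigma.w)
b = kappa * np.sqrt(w @ Sigma @ w) - w @ mu

X = rng.multivariate_normal(mu, Sigma, size=n)
rate = np.mean(X @ w + b >= 0.0)
print(f"empirical satisfaction rate: {rate:.4f} (target >= {eta})")
```

Because the Chebyshev bound is distribution-free, it is conservative: on Gaussian data the observed rate comfortably exceeds the target `eta`.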
The problems of large-dataset classification and ordinal regression are addressed by deriving formulations that employ chance constraints for clusters of training data rather than a constraint for each data point. Since the number of clusters can be substantially smaller than the number of data points, the resulting formulations involve far fewer variables and inequalities, and hence scale well to large datasets.
The scalable classification and OR formulations are extended to feature spaces, and the kernelized duals turn out to be instances of SOCPs with a single cone constraint. Exploiting this special structure, fast iterative solvers that outperform generic SOCP solvers are proposed. Compared to state-of-the-art learners, the proposed algorithms achieve speed-ups as high as 10,000 times when the specialized SOCP solvers are employed.
The proposed formulations involve second order moments of the data and hence are susceptible to moment estimation errors. A generic way of making the formulations robust to such estimation errors is illustrated. Two novel confidence sets for the moments are derived, and it is shown that when either confidence set is employed, the robust formulations again yield SOCPs.
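Schematically, a robust counterpart of this kind requires the cone constraint to hold for every moment pair in a confidence set (generic notation; the thesis's actual confidence sets differ in detail):

```latex
% Robust version of the moment-based cone constraint: it must hold
% for all plausible moments (\mu, \Sigma) in a confidence set \mathcal{C}
w^\top \mu + b \;\ge\; \kappa \sqrt{w^\top \Sigma\, w}
\qquad \forall\, (\mu, \Sigma) \in \mathcal{C}.
```

For suitably structured confidence sets, the worst case over $\mathcal{C}$ can be evaluated in closed form, so the robust formulation remains a second order cone program.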
Appears in Collections: Computer Science and Automation (csa)
Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.