SGER: Inductive Principles for Learning from Data

Project: Research project

Project Details

Description

Recently there has been a good deal of shared interest among researchers in machine learning (whether algorithmic, neural network, or genetic-algorithm based), in models of biological learning, and in statistics, all of whom make inductive estimates of data dependencies. The different disciplines, however, still have their own terminology and approaches. It is important to begin building a common terminology and framework for these disciplines by investigating inductive principles, which provide general prescriptions for what to do with training data in order to learn a model. There are only a handful of known inductive principles (Regularization, Structural Risk Minimization, Bayesian Inference, Minimum Description Length), but there are many learning methods, that is, constructive implementations based on these principles. This research project will develop an understanding of the differences among inductive principles, how they vary in power, and their use in model selection and in the development of learning algorithms, in order to provide more solid foundations for the fields that use them and a better appreciation of the commonalities among those fields.
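
As an illustration of the distinction between an inductive principle and a constructive learning method (this sketch is not part of the award text), the following Python example shows ridge regression as one implementation of the Regularization principle: minimize the empirical risk plus a penalty on model complexity, and use validation error to select the penalty strength. The data, penalty values, and function names are hypothetical and chosen only to keep the example self-contained.

    # Minimal sketch (illustrative, not from the award): ridge regression as a
    # constructive implementation of the Regularization inductive principle.
    import numpy as np

    def ridge_fit(X, y, lam=1.0):
        """Minimize ||X w - y||^2 + lam * ||w||^2 (regularized empirical risk)."""
        n_features = X.shape[1]
        # Closed-form solution: w = (X^T X + lam I)^{-1} X^T y
        A = X.T @ X + lam * np.eye(n_features)
        return np.linalg.solve(A, X.T @ y)

    # Hypothetical toy data; model selection over lam via validation error.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    w_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
    y = X @ w_true + 0.1 * rng.normal(size=40)
    X_tr, y_tr, X_va, y_va = X[:30], y[:30], X[30:], y[30:]
    for lam in (0.01, 0.1, 1.0, 10.0):
        w = ridge_fit(X_tr, y_tr, lam)
        err = np.mean((X_va @ w - y_va) ** 2)
        print(f"lambda={lam:5.2f}  validation MSE={err:.4f}")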

Status: Finished
Effective start/end date: 2/15/97 – 1/31/98

Funding

  • National Science Foundation: $50,000.00
