algorithmic learning theory

Two Weeks at Waterloo


slides and lecture notes

Monday, August 11, 2014: slides, diagonalization picture
Tuesday, August 12, 2014: slides, side notes
Thursday, August 14, 2014: slides, side notes
Friday, August 15, 2014: slides, side notes

Monday, August 18, 2014: lecture notes
Tuesday, August 19, 2014: lecture notes

projects

consistent: A learner who conjectures a grammar for a language that is not consistent with what she already knows (from the text) to be in the language would be a foolish learner indeed.
As an extreme example, imagine a learner who has seen the text 1, 2, 4, 6, ... . It would be foolish for her to offer as a guess, "The set of all even numbers," since 1 appears in the text but is not even. Learners that do not do such silly things are called consistent learners. In this project, you'll investigate how requiring that a learner be consistent affects the classes of languages that can be learned.
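
One standard way to make this precise (a sketch; the notation is my assumption and may differ slightly from the lecture notes: T[n] is the first n items of text T, content(T[n]) the set of those items, and W_e the language generated by grammar e): a learner M is consistent when

    \mathrm{content}(T[n]) \subseteq W_{M(T[n])} \quad \text{for every text } T \text{ and every } n,

i.e., each conjecture generates all of the data it was based on.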

confident: A learner identifies a language if she converges to a correct conjecture in the limit, but we make no requirements about what happens on texts for languages that are not in the class to be learned. Imagine a very bold child who is sure he can always figure out the language being presented: No matter what text he's given, eventually he'll make a conjecture about the grammar and stick to it. We call such learners confident (perhaps we should call them overconfident), and in this project you'll investigate the consequences of this confidence.
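
In the same (assumed) notation, confidence says that the conjectures converge on every text, not only on texts for languages in the class: a learner M is confident when

    \forall T \;\exists e \;\exists n_0 \;\forall n \ge n_0 : \; M(T[n]) = e.

Nothing here requires the final conjecture e to be correct when T presents a language outside the class.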

enumerator: Imagine a learner that, instead of trying to learn a language (i.e., a set), is trying to learn a function. This is something that scientists often do when they are trying to find a model to describe the behavior of some system. Moreover, scientists often have some idea of what kind of model they're looking for (an ordinary differential equation, a linear system, etc.) and are, in a sense, just trying to choose the right one from a "list" of all possible models. Learners choosing from a list of possible answers like this are called enumerators, and in this project you'll investigate the ability of these learners to learn sets and functions.
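
For concreteness, here is a minimal sketch (in Python) of identification by enumeration for learning a function from a list of candidates; the function names and the toy hypothesis list are inventions for illustration, not something from the course materials. After each data point, the learner conjectures the least-indexed hypothesis that still agrees with everything it has seen.

    def enumerator_learner(hypotheses, data_points):
        """Yield a conjecture (an index into hypotheses) after each (x, y) pair."""
        seen = []
        for x, y in data_points:
            seen.append((x, y))
            # Conjecture the first hypothesis consistent with all data so far.
            for i, h in enumerate(hypotheses):
                if all(h(a) == b for a, b in seen):
                    yield i
                    break
            else:
                yield None  # no hypothesis on the list fits the data

    # Candidates: the constant functions 0, 1, ..., 9; the target is f(x) = 2.
    candidates = [lambda x, c=c: c for c in range(10)]
    text = [(0, 2), (1, 2), (2, 2), (3, 2)]
    print(list(enumerator_learner(candidates, text)))  # prints [2, 2, 2, 2]

Provided the target really is on the list and the data eventually cover every argument, each wrong hypothesis with a smaller index is eventually contradicted and a correct one is never abandoned, so the conjectures converge.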

memory-limited: Do you remember the first sentence you ever heard? Do you remember all the sentences you've heard today? Of course not. In our general identification paradigm, there are no memory constraints built into our model: Our learner is allowed to review every member of the text she's seen before offering a new conjecture. What happens if we disallow learners with perfect memories? In this project, you'll explore how memory-limited learning affects the classes of languages that can be learned.
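
One common way to pin down the restriction is the following sketch, often called iterative learning (other memory-limited variants also allow a bounded buffer of past data): the new conjecture may depend only on the previous conjecture and the most recent datum, i.e., there is a function F with

    M(T[n+1]) = F(M(T[n]),\, T(n+1)) \quad \text{for all texts } T \text{ and all } n.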
