News
- Relevant information, changes, and updates will be posted on this page. Please check back regularly.
Welcome to the Universal AI Reading Group at RSISE@ANU
- Note: The group is currently merged with the 11:30 (Wednesdays) RL reading group
- Who: Everyone is welcome.
- When: Every Wednesday, 10:00-11:00.
(If you want to attend but the time does not suit you, please let me know.)
- Where: RSISE building, Common LHS Room A203, Australian National University.
- Assumed Background: calculus, computability, and probability theory (see below)
- Operation mode: We work through the book [Hut05] and read & discuss related work. Reading should be done in advance; the sessions concentrate on the most important parts. 1-2 chapters per month. No email reminders.
A key property of intelligence is to learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning. The second key property of intelligence is to exploit the learned predictive model for making intelligent decisions or actions. Together, in computer science this is called reinforcement learning, in engineering it is called adaptive control, and in statistics and other fields it is called sequential decision theory. The reading group will cover the philosophical, statistical, and computational aspects of inductive inference, Solomonoff's unifying universal solution, and the theory of universal learning agents that incorporate most aspects of rational intelligence.
The reading group will focus on the key ingredients to the theories of Universal Induction and Universal AI, which are important subjects in their own right: Occam's razor; Turing machines; Kolmogorov complexity; probability theory; Solomonoff induction; Bayesian sequence prediction; minimum description length principle; intelligent agents; sequential decision theory; adaptive control theory; reinforcement learning; Levin search and extensions; and others.
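To give a concrete feel for one of these ingredients, Bayesian sequence prediction, here is a minimal hypothetical sketch in Python. A Bayes mixture over a small finite class of Bernoulli models stands in for Solomonoff's incomputable mixture over all enumerable semimeasures; the model class, the uniform prior, and all names (`BernoulliModel`, `bayes_mixture_predict`) are illustrative assumptions, not taken from [Hut05].

```python
# Toy Bayesian sequence prediction over a finite model class.
# Solomonoff induction uses all enumerable semimeasures weighted by
# 2^(-description length); here we use three Bernoulli hypotheses instead.

class BernoulliModel:
    """Hypothesis: bits are i.i.d. with fixed probability p of being 1."""
    def __init__(self, p):
        self.p = p

    def prob_next(self, history, bit):
        # i.i.d. model: the history is ignored.
        return self.p if bit == 1 else 1.0 - self.p

def bayes_mixture_predict(models, priors, history):
    """Posterior-weighted probability that the next bit is 1."""
    # Weight of each model = prior * likelihood of the observed history.
    weights = []
    for m, w in zip(models, priors):
        for i, b in enumerate(history):
            w *= m.prob_next(history[:i], b)
        weights.append(w)
    total = sum(weights)
    return sum(w * m.prob_next(history, 1)
               for m, w in zip(models, weights)) / total

models = [BernoulliModel(0.1), BernoulliModel(0.5), BernoulliModel(0.9)]
priors = [1/3, 1/3, 1/3]  # uniform prior over the three hypotheses

history = [1, 1, 1, 1, 1, 0, 1, 1]
p1 = bayes_mixture_predict(models, priors, history)  # mostly-1 data pushes p1 well above 0.5
```

After seeing a history dominated by 1s, the posterior concentrates on the p=0.9 hypothesis and the mixture predicts the next bit is 1 with high probability; with an empty history the prediction is just the prior average 0.5.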
I hope you will enjoy these hours and acquire a deep understanding of super-intelligent agents and the surrounding topics.
Feel free to invite others.
Actual and Potential Participants
- John Lloyd (Professor, Agents, RSISE@ANU)
- Marcus Hutter (A/Prof, Universal AI, RSISE@ANU)
- Kee-Siong Ng (Researcher, Agents, CRL/NICTA)
- Hassan Mahmud (ARC PostDoc of John Lloyd, Agents, RSISE@ANU)
- Peter Sunehag (ARC PostDoc, Universal AI, RSISE@ANU)
- Phuong Minh Nguyen (PhD student, Generic Reinforcement Learning Agents, RSISE@ANU)
- Joel Veness (PhD student, Universal AI and Games, UNSW)
- Ian Wood (PhD student, Universal Induction)
- Matthew Robarts (PhD student of Peter, Reinforcement Learning, NICTA)
- Tor Lattimore (Honors student, MSI@ANU, Kolmogorov Complexity)
- Mayank Daswani (BCS Honors @ANU, ua.ude.una|4492334u#ua.ude.una|4492334u)
- Samuel Rathmanner (BCS Honors @ANU with Kee-Siong on AIXI for Poker, ua.ude.una|4248934u#ua.ude.una|4248934u)
- Alireza Motevalian (PhD student)
What is Universal AI?
The dream of creating artificial devices that reach or outperform human intelligence is an old one; however, despite considerable effort over the last 50 years, a computationally efficient theory of true intelligence has not yet been found. Nowadays most research is more modest, focusing on narrower, specific problems associated with only some aspects of intelligence, such as playing chess or natural-language translation, either as a goal in itself or as a bottom-up approach. The dual, top-down approach is to first find a formal (mathematical, not necessarily computational) solution of the general AI problem, and then to consider computationally feasible approximations. Note that the AI problem remains non-trivial even when computational aspects are ignored.
A key property of intelligence is to learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning. The second key property of intelligence is to exploit the learned predictive model for making intelligent decisions or actions. Together, in computer science this is called reinforcement learning, in engineering it is called adaptive control, and in statistics and other fields it is called sequential decision theory.
The idea of the reading group is to become acquainted with the philosophical, statistical, and computational perspectives on inductive inference, with Solomonoff's unifying universal solution, and with the unified view provided by the intelligent agent framework. Putting everything together yields an elegant, mathematical, parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment, possessing essentially all aspects of rational intelligence.
We will see that it represents a conceptual solution to the AI problem, thus reducing it to a pure computational problem. Despite the grand vision above, most of the time in the reading group is necessarily devoted to discussing the key ingredients of this theory, which are important subjects in their own right: Occam's razor; Turing machines; Kolmogorov complexity; probability theory; Solomonoff induction; Bayesian sequence prediction; minimum description length principle; intelligent agents; sequential decision theory; adaptive control theory; reinforcement learning; Levin search and extensions.
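The decision-making half of this picture, sequential decision theory on top of a Bayes mixture, can also be sketched in miniature. The following hypothetical Python toy replaces AIXI's universal mixture over all computable environments with a class of just two deterministic reward functions; all names (`best_first_action`, `reward_env_a`, `reward_env_b`) and the two-environment setup are illustrative assumptions.

```python
from itertools import product

# Toy finite-horizon expectimax: the Bayes-optimal agent evaluates every
# action sequence against its belief (a mixture over environments) and
# takes the first action of the plan with highest expected total reward.

def reward_env_a(actions):
    """Environment A: one unit of reward per action 0."""
    return sum(1.0 for a in actions if a == 0)

def reward_env_b(actions):
    """Environment B: one unit of reward per action 1."""
    return sum(1.0 for a in actions if a == 1)

envs = [reward_env_a, reward_env_b]

def best_first_action(belief, horizon, actions=(0, 1)):
    """First action of the plan maximising mixture-expected total reward."""
    best_plan, best_val = None, float("-inf")
    for plan in product(actions, repeat=horizon):
        # Expected total reward of this plan under the current belief.
        val = sum(w * env(plan) for w, env in zip(belief, envs))
        if val > best_val:
            best_plan, best_val = plan, val
    return best_plan[0]
```

An agent that believes it is probably in environment A (belief `[0.9, 0.1]`) picks action 0; one that leans toward B picks action 1. The brute-force enumeration of all plans is exactly what makes the full theory a conceptual rather than a practical solution, and what approximations such as [VNHS09] address.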
Reading List
Mar'17 2010
- The group continues jointly with the RL group by looking at background to [VNHS09]
Mar'3 2010 & Mar'10 2010
- Monte Carlo AIXI [VNHS09]. This will be done together with the RL reading group at 11:30.
Dec'09-Feb'2010
- Chapter 5 of [Hut05].
Nov'09
- Chapter 4 of [Hut05].
Oct-Nov'09
- Chapter 3 of [Hut05].
9, 16 & 23 Sep'09
- Chapter 2 of [Hut05].
In Queue:
- [Hut05] M. Hutter, Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability, EATCS Book, Springer, Berlin (2005)
- [VNHS09] J. Veness, K. S. Ng, M. Hutter, and D. Silver, A Monte Carlo AIXI Approximation, Technical Report (2009), arxiv.org/abs/0909.0801
- [Hut07] M. Hutter, On Universal Prediction and Bayesian Confirmation, Theoretical Computer Science, 384:1 (2007) 33-48
Background Reading
- [Hut07] M. Hutter, Algorithmic Information Theory: A Brief Non-Technical Guide to the Field, Scholarpedia, 2:3 (2007) 2519
- [LV08] M. Li and P. M. B. Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, 3rd edition (2008)
- [Leg08] S. Legg, Machine Super Intelligence, IDSIA, PhD Thesis (2008)
Suggested Problems
- 2.1, 2.10, 2.11
- 3.9?, 3.11?
- 4.1, 4.2?
- 5.2, 5.16
Contact:
Peter Sunehag <moc.liamg|gahenus.retep#moc.liamg|gahenus.retep> or
Marcus Hutter <ua.ude.una|rettuh.sucram#ua.ude.una|rettuh.sucram>