Australian Reinforcement Learning Group

Welcome to the Australian Reinforcement Learning Group

The Reinforcement Learning Group develops and studies the foundations of generic intelligent agents that are endowed with minimal a priori knowledge about the world and learn from experience, pushing this frontier in particular for partially observable domains (partially observable reinforcement learning, PORL).
For many years, the reinforcement-learning community primarily focused on sequential decision making in fully observable but unknown domains while the planning-under-uncertainty community focused on known but partially observable domains.
Since most problems are both partially observable and (at least partially) unknown, recent years have seen a surge of interest in combining the related, but often different, algorithmic machineries developed in the two communities.

See, for instance:
PORL09: Partially Observable Reinforcement Learning,
Symposium at NIPS'09, December 10, Vancouver.
See http://www.hutter1.net/ai/porlsymp.htm and
http://grla.wikidot.com/nips for more details.

Members and Friends

The following list contains students and researchers in Australia working on generic reinforcement learning agents,
most of them located at or affiliated with the Research School of Information Sciences and Engineering (RSISE) at the Australian National University (ANU).

  • Phuong Minh Nguyen (PhD student, Generic Reinforcement Learning Agents, RSISE@ANU)
  • Joel Veness (PhD student, Universal AI and Games, UNSW & UoA)
  • Ian Wood (PhD student, Universal Induction)
  • Matthew Robards (PhD student, Reinforcement Learning, NICTA)
  • Tor Lattimore (PhD Student, Reinforcement Learning, RSISE@ANU)
  • Mayank Daswani (PhD Student, Feature Reinforcement Learning, RSISE@ANU)
  • Samuel Rathmanner (BCS Honours student, Universal Induction and AIXI, RSISE@ANU)
  • Zahra Zamani (PhD Student, RSISE@ANU)

Contact: Marcus Hutter <marcus.hutter@anu.edu.au>

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License