Reinforcement Learning in Non-Stationary Environments

Porto, Portugal
October 7, 2005

In conjunction with the
16th European Conference on Machine Learning (ECML)
and the 9th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD)

NEWS:

  • Some photos available
  • All accepted papers available online

Introduction

Reinforcement Learning (RL) has attracted considerable attention in recent years. This learning paradigm is now established as a practical tool for modeling autonomous learning agents. Most currently used RL methods are based on Markov Decision Processes (MDPs), and RL in turn provides an efficient tool for solving MDPs.

One main assumption behind MDPs is that the environment obeys the Markov property, i.e. state transitions depend only on the current state of the environment and the action selected by the learning agent. In many problem domains this property is not fully satisfied. For example, in many real problem instances the learning agent is not capable of sensing the true state of the environment, and thus the Markov property may no longer hold.
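For concreteness, the Markov property can be stated as follows; the notation (s_t for the state and a_t for the action at time t) is ours, added here for illustration:

\[
  P(s_{t+1} = s' \mid s_t, a_t) = P(s_{t+1} = s' \mid s_0, a_0, \ldots, s_t, a_t)
\]

That is, the distribution over the next state depends only on the current state and action, not on the full history of the interaction.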

The problem becomes very apparent in systems where multiple RL agents are active in the same environment. In such systems, state transitions generally depend on the action selections of all agents. If an agent cannot fully observe the behavior of the others, the Markov property is no longer satisfied, and the environment as experienced by a single agent is non-stationary.
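The following minimal sketch illustrates this effect; the game, variable names, and parameters are illustrative assumptions, not taken from any particular workshop paper. Two independent Q-learners repeatedly play matching pennies, so the reward distribution each agent observes drifts as its opponent's policy changes:

    # Minimal sketch: two independent Q-learners in repeated matching pennies.
    # From agent 1's point of view, the "environment" includes agent 2's policy,
    # which changes as agent 2 learns -- hence non-stationarity.
    import random

    ACTIONS = [0, 1]            # heads, tails
    ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate (illustrative)

    def payoff(a1, a2):
        """Matching pennies: agent 1 wins on a match, agent 2 on a mismatch."""
        return (1.0, -1.0) if a1 == a2 else (-1.0, 1.0)

    def select(q):
        """Epsilon-greedy action selection over a stateless Q-table."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q[a])

    q1 = {a: 0.0 for a in ACTIONS}
    q2 = {a: 0.0 for a in ACTIONS}

    for step in range(10000):
        a1, a2 = select(q1), select(q2)
        r1, r2 = payoff(a1, a2)
        # Each agent updates as if it faced a stationary bandit, but the
        # reward distribution it sees drifts with the other agent's learning.
        q1[a1] += ALPHA * (r1 - q1[a1])
        q2[a2] += ALPHA * (r2 - q2[a2])

    print(q1, q2)  # the Q-values typically keep oscillating rather than converging

Run long enough, the two Q-tables tend to keep chasing each other rather than settling, which is exactly the kind of non-stationarity that a single-agent MDP analysis does not account for.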

Aim

Researchers from a variety of backgrounds (multiagent learning, distributed learning, parallel learning, swarm intelligence, learning automata, etc.) have made interesting contributions to RL in non-stationary environments. The aim of the workshop is to bring together researchers from these different backgrounds in order to discuss the commonalities and the differences between their approaches, and how forces can be joined.

Appropriate topics for papers include, but are not limited to, the following:

Related tutorial

The related tutorial "Learning Automata as a Basis for Multiagent Reinforcement Learning" will be held on October 3, 2005.

Important dates

Paper submission deadline: July 29, 2005
Notification of acceptance: August 24, 2005
Final copy due: September 7, 2005
Workshop: October 7, 2005

Submissions

Please send your submissions by email in PDF format to:
ville.kononen@tkk.fi

Workshop chairs

Professor Ann Nowé
Computational Modeling Lab
Vrije Universiteit Brussel
Faculty of Sciences (WE)
Department of Computer Science
Pleinlaan 2
B-1050 Brussels
BELGIUM
Email: asnowe@info.vub.ac.be

Professor Timo Honkela
Neural Networks Research Centre
Helsinki University of Technology
P.O. Box 5400
FI-02015 HUT
FINLAND
Email: timo.honkela@tkk.fi

Ville Könönen
Neural Networks Research Centre
Helsinki University of Technology
P.O. Box 5400
FI-02015 HUT
FINLAND
Email: ville.kononen@tkk.fi

Katja Verbeeck
Computational Modeling Lab
Vrije Universiteit Brussel
Faculty of Sciences (WE)
Department of Computer Science
Pleinlaan 2
B-1050 Brussels
BELGIUM
Email: kaverbee@vub.ac.be

Program committee

Michael Bowling, University of Alberta
Michael Littman, Rutgers University
Ann Nowé, Vrije Universiteit Brussel
Timo Honkela, Helsinki University of Technology
Ron Sun, Rensselaer Polytechnic Institute
Ville Könönen, Helsinki University of Technology
Donald C. Wunsch II, University of Missouri-Rolla
Kary Främling, Helsinki University of Technology
Katja Verbeeck, Vrije Universiteit Brussel
Tom Lenaerts, Université Libre de Bruxelles
Olivier Sigaud, 'AnimatLab', Laboratoire d'Informatique de Paris 6 (Lip6)
