Release of the MDP toolbox V3 (MATLAB)

Dr. Iadine Chadès is pleased to announce the release of the Markov Decision Process (MDP) toolbox V3 (MATLAB).

“If you are interested in solving optimization problems using stochastic dynamic programming, have a look at this toolbox. Thanks to M-J Cros, the toolbox is now available via File Exchange (MATLAB website) and includes a reinforcement learning method (Q-learning). Please feel free to give us feedback (iadine.chades@csiro.au).”

http://www.mathworks.com/matlabcentral/fileexchange/25786-markov-decision-processes-mdp-toolbox
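Since the headline addition in V3 is Q-learning, here is a minimal tabular Q-learning sketch in MATLAB on a made-up two-state, two-action problem. The names P, R and discount only mirror the usual (transition, reward, discount) description of an MDP; this is a generic illustration of the method, not the toolbox's own code or API.

```matlab
% Hypothetical two-state, two-action MDP, for illustration only.
S = 2; A = 2; discount = 0.95;
P = zeros(S, S, A);
P(:, :, 1) = [0.8 0.2; 0.1 0.9];    % transition matrix for action 1
P(:, :, 2) = [0.5 0.5; 0.4 0.6];    % transition matrix for action 2
R = [1 0; 0 2];                      % R(s, a): immediate reward

alpha = 0.1;                         % learning rate
epsilon = 0.1;                       % exploration probability
Q = zeros(S, A);                     % action-value estimates
s = 1;
for t = 1:20000
    if rand < epsilon
        a = randi(A);                             % explore
    else
        [~, a] = max(Q(s, :));                    % exploit current estimate
    end
    sp = find(rand < cumsum(P(s, :, a)), 1);      % sample next state from P
    % Q-learning update towards the sampled Bellman target
    Q(s, a) = Q(s, a) + alpha * (R(s, a) + discount * max(Q(sp, :)) - Q(s, a));
    s = sp;
end
```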

“The MDP toolbox provides functions for solving discrete-time Markov Decision Processes: backwards induction, value iteration, policy iteration, and linear programming algorithms, with some variants. The functions were developed in MATLAB (note that one of the functions requires the MathWorks Optimization Toolbox) by Iadine Chadès, Marie-Josée Cros, Frédérick Garcia, and Régis Sabbadin of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France). Version 3.0 (September 2009) adds several functions related to reinforcement learning and improves the handling of sparse matrices. For more details, see the README file.”
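As a rough picture of what the dynamic programming routines compute, here is a plain value-iteration sketch on the same kind of hypothetical two-state problem. Again, this is a generic, self-contained illustration under made-up inputs, not a call into the toolbox's functions.

```matlab
% Same hypothetical two-state, two-action MDP as above, solved by value iteration.
S = 2; A = 2; discount = 0.95;
P = zeros(S, S, A);
P(:, :, 1) = [0.8 0.2; 0.1 0.9];
P(:, :, 2) = [0.5 0.5; 0.4 0.6];
R = [1 0; 0 2];

V = zeros(S, 1);
tol = 1e-8;
while true
    Qsa = zeros(S, A);
    for a = 1:A
        Qsa(:, a) = R(:, a) + discount * P(:, :, a) * V;   % Bellman backup per action
    end
    [Vnew, policy] = max(Qsa, [], 2);                      % greedy value and policy
    if max(abs(Vnew - V)) < tol
        break
    end
    V = Vnew;
end
disp(policy')    % optimal action in each state
```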

Toolbox page: http://www.inra.fr/mia/T/MDPtoolbox
