Reinforcement Learning Toolkit

The aim of this web page is to provide source code, documentation, and updates for the Reinforcement Learning (RL) toolkit. The toolkit is a collection of utilities and demos developed by the RLAI group that may be useful to anyone trying to learn, teach, or use reinforcement learning. The tools are suitable for a range of users, from newcomers who have never used RL before to very experienced users.

The RL toolkit is in the public domain and can be used by anyone for any purpose.

This software still works but is only lightly maintained. For updates, send email to rich@richsutton.com.

Converted to Python 3.5 in October 2016. -rss

Toolkit Contents
Downloads
Requirements
Installation Instructions
Usage Instructions
    Using the Gridworld Demo
    Importing the Whole Toolkit at Once
    Importing Specific Tools
    Running the Demos
Previous Toolkit Versions and Change Information

Toolkit Contents:


RLToolkit downloads - Version 1.0:  

Last update: November 8, 2011 (see Changes and Previous Versions)
Resurrected a version of the toolkit obtained from Anna. -Rich

Requirements:

Installation:

For either the regular or non-GUI toolkit tar file, do one of the following:

Usage

Using the gridworld demo:

After you have downloaded and decompressed the RLtoolkit.zip file, you should have a folder called RLtoolkit. Put this folder where it will be found by the version of Python running on your machine. On my Mac, it goes in Macintosh HD > Library > Python > 2.7 > site-packages. I figured this out from the older instructions above and below, and you might do the same if you have a different computer.
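
If you are not sure where your Python looks for packages, the snippet below is one generic way to find out. This is just a sketch, not part of the toolkit, and it relies on site.getsitepackages, which is available in Python 3.2 and later:

    # Print the directories Python searches for installed packages; put the
    # RLtoolkit folder in one of these (or anywhere else on sys.path).
    import site
    import sys

    print(sys.version)
    for path in site.getsitepackages():
        print(path)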

Of course, you will need Python installed on your machine, including pythonw for the graphics.

Then run the demo with the following steps (a minimal session sketch follows the list):

1. Start Python by, for example, typing "pythonw" at a command line, such as in the Terminal program on a Mac.

2. In Python, at the prompt, load the RLtoolkit by typing "from RLtoolkit import *"

3. Start the Gridworld Demo by typing "demo.demos("gwg", "run")". A gridworld window should pop up. (You can also run the other demos described below similarly.)

4. Click anywhere on the gridworld window to switch to Python (but preferably not on the grid itself, because clicking there may create a barrier). You should now see the gridworld menus, such as "Gridworld" and "Agent".

5. Select a gridworld from the Gridworld menu and an agent from the Agent menu. Display and set your parameters from the Agent menu. There is a small display bug: a newly created gridworld window does not show its contents. You can make them appear by going to the Simulation menu and selecting "Redisplay".

6. Now you are ready to go. Use the buttons at the bottom of the window to control the simulation and the display.
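
For reference, here is a minimal sketch of steps 1 through 3 as a single Python session (the demo name "gwg" is the gridworld demo used above):

    # Run inside an interactive pythonw session (steps 1-3 above).
    from RLtoolkit import *        # step 2: load the toolkit

    demo.demos("gwg", "run")       # step 3: open the gridworld demo window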

If you change any of the code, for example in gwguimain.py, save the file, close all your gridworld windows, and reload the changed module before restarting the demo. Note that the built-in reload() is gone in Python 3; use importlib.reload instead (see the sketch below), and then type demo.demos("gwg", "run") again to restart with the changed code.
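
A minimal sketch of that reload step under Python 3, assuming the module path quoted above (demo.gridworld.gwguimain); adjust it to wherever gwguimain actually lives in your copy of the toolkit:

    import importlib

    # Assumes the toolkit was already loaded with "from RLtoolkit import *".
    # Re-import the edited module, then restart the gridworld demo.
    importlib.reload(demo.gridworld.gwguimain)
    demo.demos("gwg", "run")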

Importing the Whole Toolkit at Once
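
As in the gridworld walkthrough above, the whole toolkit can be pulled into the current namespace with a single star import:

    # Make every name the toolkit exports (for example, demo) available directly.
    from RLtoolkit import *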

Importing Specific Tools

Running the Demos


Version 1.0b6 only provides __init__.py.bin files, not the __init__.py (or *.pyc) files, in the directories and therefore cannot be used under Windows. I had to switch to 1.0b5, which at least has *.pyc files. I would appreciate it if someone could include the __init__.py files. Thanks!

For the missing __init__.py files, all you need to do is create empty files with that name in each directory; see the sketch below.
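
One way to do that from Python (a sketch, assuming the toolkit was unpacked into a folder named RLtoolkit in the current directory; adjust the path as needed):

    from pathlib import Path

    # Create an empty __init__.py in the RLtoolkit folder and in every
    # subdirectory that does not already have one.
    root = Path("RLtoolkit")
    for folder in [root, *(p for p in root.rglob("*") if p.is_dir())]:
        init_file = folder / "__init__.py"
        if not init_file.exists():
            init_file.touch()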
