Q-Learning Setup

Tutorial 2 of 4

Introduction

Goal of the tutorial: This tutorial guides you through setting up a Q-Learning environment so you can see how an agent learns from its actions to maximize future rewards in reinforcement learning.

What you will learn: By the end of the tutorial, you will understand how Q-Learning works, how to set up a Q-Learning environment, and how to implement it in code.

Prerequisites: This tutorial assumes you have a basic understanding of Python programming. Knowledge of basic machine learning concepts would be beneficial but is not mandatory.

Step-by-Step Guide

Q-Learning is a model-free, value-based algorithm in reinforcement learning. It learns the quality (the expected future reward) of each action in each state, telling the agent which action to take under which circumstances.
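
Concretely, Q-Learning repeatedly applies one update rule: Q(s, a) <- Q(s, a) + lr * (r + y * max over a' of Q(s', a') - Q(s, a)), where lr is the learning rate and y is the discount factor. As a minimal sketch, that update can be written as a standalone Python function; the names q_update, q_table, state, action, reward, and next_state are illustrative, not part of any library.

# One temporal-difference update of the Q-table, applied in place
import numpy as np

def q_update(q_table, state, action, reward, next_state, lr=0.8, y=0.95):
    best_next = np.max(q_table[next_state, :])   # value of the best next action
    td_target = reward + y * best_next           # what Q(state, action) should move toward
    q_table[state, action] += lr * (td_target - q_table[state, action])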

  1. Environment: The first step is to identify your environment. This could be a game, a map, or any problem you want your agent to learn to solve over time.

  2. States: Next, identify the 'states' in your environment. A 'state' is the current situation the agent finds itself in.

  3. Actions: Then, identify the possible 'actions' that your agent can take in each state.

  4. Rewards: Define a 'reward' system. Rewards are feedback that helps the agent judge whether an action was beneficial.

  5. Q-table: Set up a Q-table. This table stores a quality value for every state-action pair and guides the agent toward the best action in each state (see the sketch after this list).
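
To make these pieces concrete, here is a hand-rolled sketch for a tiny one-dimensional corridor; the corridor, its three states, and its reward values are invented for illustration and are not a gym environment.

import numpy as np

# A toy corridor: states 0, 1, 2; the agent starts at state 0, the goal is state 2
n_states = 3
actions = ['left', 'right']     # action 0 moves left, action 1 moves right

# Reward system: +1 for reaching the goal state, 0 everywhere else
def reward(state):
    return 1 if state == 2 else 0

# The Q-table: one row per state, one column per action, initialized to zeros
Q = np.zeros([n_states, len(actions)])
print(Q.shape)                  # (3, 2)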

Code Examples

Let's create a simple Q-Learning setup in Python using the gym library. The code below assumes gym >= 0.26, where FrozenLake-v1 replaces the retired FrozenLake-v0 and reset() and step() return the newer tuple formats.

# Importing required libraries
import numpy as np
import gym

# Setting up the environment (FrozenLake-v1 replaces the retired FrozenLake-v0)
env = gym.make('FrozenLake-v1')

# Initializing the Q-table: one row per state, one column per action
Q = np.zeros([env.observation_space.n, env.action_space.n])

# Setting the learning parameters
lr = 0.8             # learning rate
y = 0.95             # discount factor
num_episodes = 2000

# List to contain the total reward of each episode
rList = []
for i in range(num_episodes):
    # Reset environment and get the first observation
    # (gym >= 0.26 returns (observation, info))
    s, _ = env.reset()
    rAll = 0
    done = False
    j = 0
    # The Q-table learning algorithm
    while j < 99:
        j += 1
        # Choose an action greedily (with decaying random noise) from the Q-table
        a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (i + 1)))
        # Take the action; gym >= 0.26 returns (obs, reward, terminated, truncated, info)
        s1, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        # Update the Q-table with the temporal-difference update rule
        Q[s, a] = Q[s, a] + lr * (r + y * np.max(Q[s1, :]) - Q[s, a])
        rAll += r
        s = s1
        if done:
            break
    rList.append(rAll)

This script sets up the FrozenLake environment from gym, initializes the Q-table, and then runs 2000 episodes of the game, updating the Q-table after every step based on the reward received.
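
To check what the agent actually learned, you can roll out the greedy policy with the exploration noise switched off. This evaluation loop is a suggested addition that reuses the env and Q variables from the script above; it is not part of the original setup.

# Evaluate the learned policy: always pick the greedy action, no exploration
s, _ = env.reset()
done = False
total_reward = 0
while not done:
    a = np.argmax(Q[s, :])                            # greedy action from the Q-table
    s, r, terminated, truncated, _ = env.step(a)
    done = terminated or truncated
    total_reward += r
print("Reward from one greedy episode:", total_reward)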

Summary

In this tutorial, we covered how to set up a Q-Learning environment. We discussed the concept of states, actions, rewards, and the Q-table. We then implemented a simple Q-Learning algorithm using Python and the gym library.

Practice Exercises

  1. Exercise 1: Implement Q-Learning on a different environment from gym (see the sketch after this list).
  2. Exercise 2: Modify the reward system and observe how the learning changes.
  3. Exercise 3: Try different values of the learning parameters and see how they affect learning.
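
For Exercise 1, swapping the environment is often a one-line change, as long as the new environment also has discrete observation and action spaces. Taxi-v3 is one such environment in gym, shown here as an example rather than the only choice.

# Taxi-v3 also has discrete observation and action spaces,
# so the same Q-table setup works unchanged
env = gym.make('Taxi-v3')
Q = np.zeros([env.observation_space.n, env.action_space.n])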

Remember, the key to mastering Q-Learning is practice and experimentation. So, keep tweaking the parameters, try different environments, and most importantly, have fun while doing it.