Q Learning in Python: What is it, Definitions [Coding Examples]

Last updated: 26th Mar, 2020
Reinforcement learning is a branch of machine learning in which a learning agent learns to behave optimally in its environment through constant interaction with it. The agent goes through various situations, which are also known as states. As you would’ve guessed, reinforcement learning has many applications in the real world. If this interests you, you can learn more about data science algorithms.

Reinforcement learning also has many algorithms, and among the most popular of them is Q-learning. In this article, we’ll discuss what this algorithm is and how it works.

So, without further ado, let’s get started. 

What is Q Learning?

Q-learning is a reinforcement learning algorithm that focuses on finding the best course of action for a given situation. It’s off-policy because it can learn from actions taken outside the current policy (for example, random exploratory actions), so it doesn’t strictly need a policy to learn from. Its goal is to learn a policy that maximises the total reward. It’s a simple form of reinforcement learning that uses action values (or Q-values) to improve the learning agent’s behaviour.

Q-learning is one of the most popular algorithms in reinforcement learning, as it’s easy to understand and implement. The ‘Q’ in Q-learning stands for quality. As we mentioned earlier, Q-learning focuses on finding the best action for a particular situation, and the quality reflects how useful a specific action is and what reward it can help you reach.

Important Definitions

Before we begin discussing how it works, we should first take a look at some essential concepts of Q-learning. Let’s get started.

Q-Values

Q-values are also known as action values. They are represented by Q(S, A), and they give you an estimate of how good it is to take action A in state S. The model computes this estimate iteratively using the Temporal Difference Update rule, which we’ll discuss later in this section.

Episodes and Rewards

An agent begins from a start state and goes through several transitions, moving from its current state to the next according to its actions and the environment. Whenever the agent takes an action, it receives some reward. When no further transitions are possible, the episode is complete.
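To make this concrete, here is a minimal sketch of a single episode loop using gym (the environment name is just an illustrative choice, and the snippet assumes the older gym API used later in this article, where reset() returns a state and step() returns four values):

import gym

env = gym.make('FrozenLake-v0')          # illustrative choice of environment
state = env.reset()                      # the start state
total_reward = 0

while True:
    action = env.action_space.sample()   # a random action, just for illustration
    next_state, reward, done, _ = env.step(action)
    total_reward += reward               # the agent collects a reward at every step
    state = next_state
    if done:                             # no more transitions: the episode is complete
        break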

TD-Update (Temporal Difference)

Here’s the TD-Update or Temporal Difference rule:

Q(S,A) ← Q(S,A) + α(R + γQ(S',A') − Q(S,A))

Here, S represents the agent’s current state, whereas S' represents the next state. A represents the current action and A' the best next action according to the current Q-value estimates. R is the reward received for the present action, γ (gamma) is the discounting factor, and α (alpha) is the step length (learning rate).
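To see what a single update looks like, here is a small worked sketch (the states, reward, and hyperparameter values are made up for illustration):

import numpy as np

Q = {'s0': np.array([0.0, 2.0, 1.0]),    # hypothetical Q-values for state s0
     's1': np.array([4.0, 0.5, 0.0])}    # hypothetical Q-values for state s1

alpha = 0.5     # step length (learning rate)
gamma = 0.9     # discounting factor

state, action = 's0', 1                  # the agent took action 1 in state s0
reward, next_state = 1.0, 's1'           # received reward 1.0 and landed in s1

# Q(S,A) <- Q(S,A) + alpha * (R + gamma * Q(S',A') - Q(S,A)), with A' the best next action
best_next = np.max(Q[next_state])        # Q(S',A') = 4.0
Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

print(Q['s0'])   # action 1's value moves from 2.0 towards the target 4.6, landing at 3.3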

Also read: Prerequisite for Data Science. How does it change over time?

Example of Q-Learning in Python

The best way to understand Q-learning in Python is to work through an example. In this example, we use OpenAI’s gym toolkit and train our model in one of its environments. Note that the code below follows the older gym API, in which env.reset() returns a state and env.step() returns four values; newer releases of gym (and its successor, Gymnasium) changed these interfaces. First off, you’ll have to install the package, which you can do with the following command:

pip install gym

Now, we’ll import the libraries we’ll need for this example:

import gym
import itertools
import matplotlib
import matplotlib.style
import numpy as np
import pandas as pd
import sys

from collections import defaultdict
from windy_gridworld import WindyGridworldEnv
import plotting

matplotlib.style.use('ggplot')

Without these libraries, you wouldn’t be able to perform the operations below. Note that windy_gridworld and plotting are not pip packages; they are helper modules (the Windy Gridworld environment and some plotting utilities) that should sit alongside your script. After we’ve imported the libraries, we will create the environment:

env = WindyGridworldEnv() 

Now we’ll create the ε-greedy policy:

def createEpsilonGreedyPolicy(Q, epsilon, num_actions):
    """
    Creates an epsilon-greedy policy based
    on a given Q-function and epsilon.

    Returns a function that takes the state
    as an input and returns the probabilities
    for each action in the form of a numpy array
    of the length of the action space (the set of possible actions).
    """
    def policyFunction(state):

        # Spread the exploration probability epsilon evenly over all actions
        Action_probabilities = np.ones(num_actions,
                dtype = float) * epsilon / num_actions

        # Give the remaining (1 - epsilon) probability mass to the best action
        best_action = np.argmax(Q[state])
        Action_probabilities[best_action] += (1.0 - epsilon)
        return Action_probabilities

    return policyFunction
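As a quick sanity check, you can call the returned policy function on a state and inspect the probabilities it produces (the Q-values below are made up for illustration):

# Hypothetical Q-table with one known state and 4 actions
Q = defaultdict(lambda: np.zeros(4))
Q['some_state'] = np.array([0.0, 1.0, 5.0, 2.0])

policy = createEpsilonGreedyPolicy(Q, epsilon = 0.1, num_actions = 4)
print(policy('some_state'))
# -> [0.025 0.025 0.925 0.025]: the greedy action (index 2) gets most of the probability mass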

 

Here’s the code for building a Q-learning model:

def qLearning(env, num_episodes, discount_factor = 1.0,
              alpha = 0.6, epsilon = 0.1):
    """
    Q-Learning algorithm: Off-policy TD control.
    Finds the optimal greedy policy while following
    an epsilon-greedy policy.
    """

    # Action value function
    # A nested dictionary that maps
    # state -> (action -> action-value).
    Q = defaultdict(lambda: np.zeros(env.action_space.n))

    # Keeps track of useful statistics
    stats = plotting.EpisodeStats(
        episode_lengths = np.zeros(num_episodes),
        episode_rewards = np.zeros(num_episodes))

    # Create an epsilon-greedy policy function
    # appropriate for the environment's action space
    policy = createEpsilonGreedyPolicy(Q, epsilon, env.action_space.n)

    # For every episode
    for ith_episode in range(num_episodes):

        # Reset the environment and get the start state
        state = env.reset()

        for t in itertools.count():

            # get probabilities of all actions from current state
            action_probabilities = policy(state)

            # choose action according to
            # the probability distribution
            action = np.random.choice(np.arange(
                      len(action_probabilities)),
                      p = action_probabilities)

            # take action and get reward, transit to next state
            next_state, reward, done, _ = env.step(action)

            # Update statistics
            stats.episode_rewards[ith_episode] += reward
            stats.episode_lengths[ith_episode] = t

            # TD Update
            best_next_action = np.argmax(Q[next_state])
            td_target = reward + discount_factor * Q[next_state][best_next_action]
            td_delta = td_target - Q[state][action]
            Q[state][action] += alpha * td_delta

            # done is True if the episode terminated
            if done:
                break

            state = next_state

    return Q, stats


Let’s train the model now:

 

Q, stats = qLearning(env, 1000)
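Once training finishes, Q holds the learned action values for every state the agent visited, so you can, for instance, read off the greedy action for a given state. A small sketch (not part of the original example):

def greedy_action(Q, state):
    # The action with the highest learned Q-value in this state
    return np.argmax(Q[state])

print(len(Q))    # number of distinct states visited during training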

 

After we’ve created and trained the model, we can plot its essential statistics:

 

plotting.plot_episode_stats(stats)

 

Use this code to run the model and plot the graph. What kind of results do you see? Share your results with us, and if you face any confusion or doubts, let us know. 

Also read: Machine Learning Algorithms for Data Science


Final Thoughts

When you plot the graph, you’ll see that the reward per episode increases progressively over time. After a certain number of episodes, the plot also levels out near the maximum reward per episode. What does this indicate?

It means your model has learned to maximise the total reward it can earn in an episode by behaving optimally. You must also have seen why Q-learning in Python finds applications in so many industries and areas.

Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What are the drawbacks of reinforcement learning?

1. Excessive reinforcement learning might result in an excess of states, lowering the quality of the outcomes.
2. Reinforcement learning is not recommended for solving simple problems.
3. Reinforcement learning necessitates a large amount of data and computation.
4. Reinforcement learning comes with its own unique and very complicated obstacles, such as a challenging training and design setup and the difficulty of balancing exploration and exploitation.

2. Is Q-learning model-based?

No, Q-learning doesn't depend on a model. Q-learning is a model-free reinforcement learning technique for determining the value of a certain action in a given state. Being model-free means it can be used in a variety of contexts and can quickly adapt to new and unknown conditions. It can handle problems involving stochastic transitions and rewards without any adaptations, and it does not require a model of the environment. Q-learning is also a value-based algorithm: value-based algorithms update the value function using an equation (in particular, the Bellman equation).

3. How are Q-learning and SARSA different from each other?

SARSA learns a near-optimal policy while exploring, whereas Q-learning learns the optimal policy directly. SARSA is on-policy: it learns action values with respect to the policy it is actually following. Q-learning is off-policy: it learns action values with respect to the greedy policy. Both converge to the real value function under similar conditions, but at different speeds. Q-learning takes a little longer to converge, but it can continue to learn while the policy is changed. When coupled with linear approximation, Q-learning is not guaranteed to converge. Near convergence, SARSA takes the penalties from exploratory steps into account, while Q-learning does not. If there's a chance of a large negative reward along the optimal path, Q-learning will tend to trigger it while exploring, whereas SARSA will avoid the risky optimal path and only learn to use it once the exploration parameters are decreased.
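The difference is easiest to see in the update targets themselves. A rough sketch, with illustrative variable and function names:

import numpy as np

def q_learning_target(Q, next_state, reward, gamma):
    # Q-learning bootstraps from the best next action (the greedy policy),
    # regardless of which action the behaviour policy will actually take
    return reward + gamma * np.max(Q[next_state])

def sarsa_target(Q, next_state, next_action, reward, gamma):
    # SARSA bootstraps from the action the current policy actually chose,
    # so exploratory (risky) actions feed back into the learned values
    return reward + gamma * Q[next_state][next_action]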
