Q Learning in Python: What is it, Definitions [Coding Examples]

Last updated: 26th Mar, 2020
Read Time: 7 Mins

Reinforcement learning is a paradigm in which a learning agent learns to behave optimally in its environment through constant interaction with it. The agent passes through various situations, which are also known as states. As you would’ve guessed, reinforcement learning has many applications in the real world.

It also has many algorithms, and among the most popular ones is Q learning. In this article, we’ll discuss what this algorithm is and how it works.

So, without further ado, let’s get started. 

What is Q Learning?

Q learning is a reinforcement learning algorithm that focuses on finding the best course of action for a particular situation. It’s off-policy because the Q learning function can learn from actions taken outside the current policy (for example, random exploratory actions), so it doesn’t strictly require a policy to learn from. What it learns is a policy that maximises the total reward. It’s a simple form of reinforcement learning that uses action values (or Q-values) to improve the learning agent’s behaviour.

Q learning is one of the most popular algorithms in reinforcement learning, as it’s easy to understand and implement. The ‘Q’ in Q learning stands for quality. As we mentioned earlier, Q learning focuses on finding the best action for a particular situation, and the quality indicates how useful a specific action is and what reward it can help the agent reach.

Important Definitions

Before we begin discussing how it works, we should first take a look at some essential concepts of Q learning. Let’s get started.

Q-Values

Q-values are also known as action-values. They are represented by Q(S, A), and they give you an estimate of how good action A is to take at state S. The model computes this estimate iteratively by using the Temporal Difference Update rule we’ll discuss later in this section.
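For intuition, you can picture a Q-table as a simple mapping from states to per-action values; the sketch below is purely illustrative (the state names and numbers are made up):

import numpy as np

# Hypothetical Q-table for a tiny environment with two states and three actions.
# Q[state][action] estimates how good it is to take that action in that state.
Q = {
    'S0': np.array([0.1, 0.5, -0.2]),   # values of actions 0, 1, 2 in state S0
    'S1': np.array([0.0, 0.3,  0.9]),   # values of actions 0, 1, 2 in state S1
}

best_action_in_S0 = np.argmax(Q['S0'])   # -> 1, the highest-valued action in S0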


Episodes and Rewards

An agent begins in a start state and goes through several transitions, moving from its current state to the next according to its actions and its environment. Whenever the agent takes an action, it receives some reward. When no further transitions are possible, the episode is complete.
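As a rough sketch of this loop (assuming the Gym-style WindyGridworldEnv helper used later in this article, and a purely random choice of actions):

from windy_gridworld import WindyGridworldEnv

env = WindyGridworldEnv()

state = env.reset()          # the start state
total_reward = 0
done = False
while not done:
    action = env.action_space.sample()              # pick an action (here: at random)
    next_state, reward, done, _ = env.step(action)  # one transition plus its reward
    total_reward += reward
    state = next_state
# once no further transitions are possible, done is True and the episode is complete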

TD-Update (Temporal Difference)

Here’s the TD-Update or Temporal Difference rule:

Q(S, A) ← Q(S, A) + α [R + γ·Q(S′, A′) − Q(S, A)]

Here, S represents the agent’s current state, whereas S′ represents the next state. A represents the current action, A′ represents the next best action according to the Q-value estimate, R is the reward received for the current action, γ (gamma) stands for the discounting factor, and α (alpha) is the step length (learning rate).
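As a quick worked example (the numbers are made up), suppose α = 0.5, γ = 0.9, the current estimate Q(S, A) = 2.0, the reward R = 1.0, and the best next action has Q(S′, A′) = 3.0:

alpha, gamma = 0.5, 0.9                 # step length and discounting factor
q_sa, reward, q_next = 2.0, 1.0, 3.0    # Q(S, A), R, Q(S', A')

td_target = reward + gamma * q_next     # 1.0 + 0.9 * 3.0 = 3.7
td_delta = td_target - q_sa             # 3.7 - 2.0 = 1.7
q_sa = q_sa + alpha * td_delta          # 2.0 + 0.5 * 1.7 = 2.85, the updated Q(S, A)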


Example of Q Learning Python

The best way to understand Q learning in Python is to see an example. In this example, we’ll use OpenAI’s Gym environment and train our model with it. First off, you’ll have to install the environment, which you can do with the following command:

pip install gym

Now, we’ll import the libraries we’ll need for this example:

import gym
import itertools
import matplotlib
import matplotlib.style
import numpy as np
import pandas as pd
import sys

from collections import defaultdict
from windy_gridworld import WindyGridworldEnv
import plotting

matplotlib.style.use('ggplot')

Without the necessary libraries, you wouldn’t be able to perform these operations successfully. Note that windy_gridworld and plotting are helper modules rather than pip packages, so make sure they are available in your working directory. After we’ve imported the libraries, we will create the environment:

env = WindyGridworldEnv() 
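If you want to sanity-check the environment before going further, Gym-style environments expose their action and observation spaces (the exact sizes below depend on the helper’s implementation of the windy gridworld):

print(env.action_space.n)        # number of available actions
print(env.observation_space.n)   # number of grid states
print(env.reset())               # the start state the agent is placed in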

Now we’ll create the ε-greedy policy:

def createEpsilonGreedyPolicy(Q, epsilon, num_actions):
    """
    Creates an epsilon-greedy policy based
    on a given Q-function and epsilon.

    Returns a function that takes the state
    as an input and returns the probabilities
    for each action in the form of a numpy array
    of the length of the action space (set of possible responses).
    """
    def policyFunction(state):

        # Spread the exploration probability epsilon evenly over all actions
        Action_probabilities = np.ones(num_actions,
                dtype = float) * epsilon / num_actions

        # Give the currently best-valued action the remaining probability mass
        best_action = np.argmax(Q[state])
        Action_probabilities[best_action] += (1.0 - epsilon)
        return Action_probabilities

    return policyFunction
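To see what this policy function produces, here’s a small usage example with a made-up Q-table (the state name and values are arbitrary):

# Illustrative only: a fake Q-table with 4 actions per state.
dummy_Q = defaultdict(lambda: np.zeros(4))
dummy_Q['some_state'] = np.array([0.0, 1.0, 0.2, 0.1])

policy = createEpsilonGreedyPolicy(dummy_Q, epsilon = 0.1, num_actions = 4)
probs = policy('some_state')
# probs is roughly [0.025, 0.925, 0.025, 0.025]: the best action gets most of
# the probability mass, while the rest share epsilon uniformly.
action = np.random.choice(np.arange(4), p = probs)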

 

Here’s the code for building a q-learning model:

def qLearning(env, num_episodes, discount_factor = 1.0, alpha = 0.6, epsilon = 0.1):
    """
    Q-Learning algorithm: Off-policy TD control.
    Finds the optimal greedy policy while improving
    following an epsilon-greedy policy.
    """

    # Action value function
    # A nested dictionary that maps
    # state -> (action -> action-value).
    Q = defaultdict(lambda: np.zeros(env.action_space.n))

    # Keeps track of useful statistics
    stats = plotting.EpisodeStats(
        episode_lengths = np.zeros(num_episodes),
        episode_rewards = np.zeros(num_episodes))

    # Create an epsilon greedy policy function
    # appropriately for environment action space
    policy = createEpsilonGreedyPolicy(Q, epsilon, env.action_space.n)

    # For every episode
    for ith_episode in range(num_episodes):

        # Reset the environment and get the start state
        state = env.reset()

        for t in itertools.count():

            # get probabilities of all actions from current state
            action_probabilities = policy(state)

            # choose action according to
            # the probability distribution
            action = np.random.choice(np.arange(
                      len(action_probabilities)),
                       p = action_probabilities)

            # take action and get reward, transit to next state
            next_state, reward, done, _ = env.step(action)

            # Update statistics
            stats.episode_rewards[ith_episode] += reward
            stats.episode_lengths[ith_episode] = t

            # TD Update
            best_next_action = np.argmax(Q[next_state])
            td_target = reward + discount_factor * Q[next_state][best_next_action]
            td_delta = td_target - Q[state][action]
            Q[state][action] += alpha * td_delta

            # done is True if episode terminated
            if done:
                break

            state = next_state

    return Q, stats


Let’s train the model now:

Q, stats = qLearning(env, 1000)

After we’ve created and trained the model, we can plot its key statistics:

plotting.plot_episode_stats(stats)

Use this code to run the model and plot the graph. What kind of results do you see? Share your results with us, and if you face any confusion or doubts, let us know. 
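If the plotting helper module isn’t available in your setup, a minimal fallback is to plot the recorded episode rewards directly with matplotlib, using the stats object returned by qLearning above:

import matplotlib.pyplot as plt

plt.plot(stats.episode_rewards)   # total reward collected in each episode
plt.xlabel('Episode')
plt.ylabel('Total reward')
plt.title('Reward per episode')
plt.show()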


Final Thoughts

When you plot the graph, you’ll see that the reward per episode increases progressively over time. After a certain number of episodes, the plot also shows the reward levelling out near its upper limit per episode. What does this indicate?

It means your model has learned to maximise the total reward it can earn in an episode by behaving optimally. You must’ve also seen why Q learning in Python finds applications in so many industries and areas.

Rohit Sharma

Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What are the drawbacks of reinforcement learning?

1. Excessive reinforcement learning might result in an excess of states, lowering the quality of the outcomes.
2. Reinforcement learning is not recommended for simple problems.
3. Reinforcement learning requires a large amount of data and computation.
4. Reinforcement learning poses its own unique and very complicated obstacles, such as a challenging training design setup and the difficulty of balancing exploration and exploitation.

2. Is Q-learning model-based?

No, Q-learning isn’t model-based. It is a model-free reinforcement learning technique for determining the value of a particular action in a given state. Because it doesn’t require a model of the environment, it can be used in a variety of contexts and can quickly adapt to new and unknown conditions, and it can handle problems involving stochastic transitions and rewards without any adaptations. Q-learning is also a value-based algorithm: value-based algorithms update the value function using an equation (in particular, the Bellman equation).

3. How are Q-learning and SARSA different from each other?

SARSA learns a near-optimal policy while exploring, whereas Q-learning learns the optimal policy directly. SARSA is on-policy: it learns action values relative to the policy it is actually following. Q-learning is off-policy: it learns action values relative to the greedy policy. Both converge to the true value function under similar conditions, but at different speeds. Q-learning takes a little longer to converge, but it can continue to learn while the policy is changed; however, it is not guaranteed to converge when combined with linear function approximation. Near convergence, SARSA takes the penalties from exploratory steps into account, while Q-learning does not. If there’s a chance of a significant negative reward along the optimal path, Q-learning will tend to trigger it while exploring, whereas SARSA will avoid the risky optimal path and only learn to use it once the exploration parameters are reduced.
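A minimal sketch of how the two update targets differ (the variable names and values here are illustrative, following the notation used earlier in this article):

import numpy as np
from collections import defaultdict

alpha, gamma = 0.5, 0.9
Q = defaultdict(lambda: np.zeros(2))
state, action, reward, next_state, next_action = 's', 0, 1.0, 's2', 1

# Q-learning (off-policy): bootstrap from the greedy (best) next action.
q_learning_target = reward + gamma * np.max(Q[next_state])

# SARSA (on-policy): bootstrap from the action actually taken next
# under the behaviour policy (e.g. epsilon-greedy).
sarsa_target = reward + gamma * Q[next_state][next_action]

# Both then move Q[state][action] towards their respective target:
Q[state][action] += alpha * (q_learning_target - Q[state][action])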
