Reinforcement Learning in ML: How Does it Work, Learning Models & Types

What is Reinforcement Learning?

Reinforcement learning refers to the process of making suitable decisions through suitable machine learning models. It is a feedback-based machine learning technique, whereby an agent learns to behave in an environment by performing actions and observing the results, including its mistakes.

Reinforcement learning applies the method of learning via interaction and feedback. A few of the terminologies used in reinforcement learning are:

  • Agent: It is the learner or the decision-maker performing actions to receive a reward.
  • Environment: It is the scenario where an agent learns and performs future tasks.
  • Action: actions that are performed by the agent.
  • State: current situation
  • Policy: Decision-making function of an agent whereby the agent decides the future action based on the current state.
  • Reward: Returns provided by the environment to an agent for performing each action.
  • Value: Compared to the reward, it is the expected long-term return, with a discount.
  • Value function: Denotes the value of a state, i.e. the total expected return from that state.
  • Function approximator: A function induced from training examples, used to generalize beyond them.
  • Model of the environment: A model that mimics the real environment for predicting its behavior.
  • Model-based methods: Methods for solving reinforcement learning problems that use a model of the environment.
  • Q value or action value: Similar to value, but it takes an additional parameter, the current action.
  • Markov decision process: A probabilistic model of the sequential decision problem.
  • Dynamic programming: A class of methods for solving sequential decision problems.

Reinforcement learning is mostly concerned with how software agents should take actions in an environment. Combining it with learning based on neural networks allows agents to attain complex objectives.

How Does Reinforcement Learning Work?

The example below, training a cat at home, showcases how reinforcement learning works.

  • Cats don’t understand any form of human language, so a different strategy has to be followed to communicate with the cat.
  • A situation is created where the cat acts in various ways. The cat is rewarded with fish if it is the desired way. Therefore the cat behaves in the same way whenever it faces that situation expecting more food as a reward.
  • The scenario defines the process of learning from positive experiences.
  • Lastly, the cat also learns what not to do through negative experiences.

This leads to the following explanation:

  • The cat acts as the agent as it is exposed to an environment. In the example mentioned above, the house is the environment. The states might be anything like the cat sitting or walking.
  • The agent performs an action by transiting from one state to the other like moving from a sitting to a walking position.
  • The action is the reaction of the agent. The policy includes the method of selecting an action in a particular state while expecting a better outcome in the future state.
  • The transition of states might provide a reward or penalty.

A few points to note in reinforcement learning:

  • An initial state of input should be provided from which the model will start.
  • Many possible outputs are generated through varied solutions to a particular problem.
  • Training of the RL model is based on the input. After an output is generated, the model decides whether to reward or penalize it, and the model thus keeps on getting trained.
  • The model continuously keeps on learning.
  • The best solution for a problem is decided on the maximum reward it receives.
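The points above can be sketched as a minimal agent-environment loop. Everything here is a hypothetical stand-in: a tiny two-state environment, a random "policy", and a reward that accumulates over steps.

```python
import random

random.seed(0)  # reproducibility of this sketch

# Hypothetical environment: two states, two actions, one rewarded behaviour.
STATES = ["sitting", "walking"]
ACTIONS = ["stay", "move"]

def step(state, action):
    """Hypothetical environment dynamics: returns (next_state, reward)."""
    if action == "move":
        return "walking", 1.0   # the desired behaviour earns a reward
    return "sitting", 0.0

state = "sitting"               # the initial input state the model starts from
total_reward = 0.0
for _ in range(5):              # the model keeps on getting trained
    action = random.choice(ACTIONS)       # many possible outputs: here, random
    state, reward = step(state, action)   # feedback from the environment
    total_reward += reward                # the best solution maximizes this
```

A real agent would replace `random.choice` with a policy that improves as rewards accumulate; the loop structure stays the same.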

Reinforcement Learning Algorithm

There are three approaches for implementing a reinforcement learning method.

1. Value based

The value based method involves maximizing the value function V(s), the expected long-term return of the current state under a policy. SARSA and Q-learning are two of the value based algorithms. Value based approaches are quite stable, but in their basic tabular form they are not able to model a continuous environment. Both algorithms are simple to implement, but they cannot estimate the values of unseen states.
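The two value based algorithms named above differ only in their bootstrap target, which a small tabular sketch makes concrete. The states, actions, learning rate, and Q-table entries here are all illustrative.

```python
alpha, gamma = 0.1, 0.9   # learning rate and discount factor (illustrative)

def q_learning_update(Q, s, a, r, s_next):
    # Off-policy: bootstraps from the greedy (max-value) action in s_next.
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next):
    # On-policy: bootstraps from the action actually taken in s_next.
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])

# Hypothetical two-state Q-table.
Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 1.0}}
q_learning_update(Q, "s0", "right", 0.5, "s1")
# Q["s0"]["right"] moves toward 0.5 + 0.9 * 1.0 = 1.4, by a step of alpha
```

Because the tables are plain dictionaries indexed by state, a state never visited has no entry to look up, which is exactly the "unseen state" limitation mentioned above.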

2. Policy based

This type of method involves developing a policy that helps to return a maximum reward through the performance of every action.

There are two types of policy based methods:

  • Deterministic: This means that under any given state the policy always produces the same action.
  • Stochastic: Every action has a probability of being selected, defined by the equation

π(a|s) = P[A_t = a | S_t = s]
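A stochastic policy of this kind can be sketched as a probability distribution over actions, here built with a softmax over hypothetical per-action preference numbers:

```python
import math
import random

random.seed(0)  # reproducibility of this sketch

def stochastic_policy(preferences):
    """Softmax: turn per-action preferences into action probabilities."""
    exp = [math.exp(p) for p in preferences.values()]
    z = sum(exp)
    return {a: e / z for a, e in zip(preferences, exp)}

prefs = {"left": 1.0, "right": 2.0}     # illustrative preference values
pi = stochastic_policy(prefs)           # pi[a] = P[A_t = a | S_t = s]
action = random.choices(list(pi), weights=list(pi.values()))[0]
```

Sampling from `pi` rather than always taking the argmax is what distinguishes a stochastic policy from a deterministic one.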

Policy based algorithms include the Monte Carlo policy gradient (REINFORCE) and the deterministic policy gradient (DPG). Policy based approaches can generate instabilities, as they suffer from high variance.

An “actor-critic” algorithm is developed through a combination of both the value based and policy based approaches. Parameterization of both the value function (critic) and the policy (actor) enables stable convergence through effective use of the training data.

3. Model based

A virtual model is created for each environment, and the agent learns based on that model. Model building includes the steps of sampling states, taking actions, and observing the rewards. At each state in an environment, the model predicts the future state and the expected reward. With such a model available, the agent can plan its actions. The agent gains the ability to learn when the process of planning is interwoven with policy estimation.
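One-step planning with such a model can be sketched as follows; the transition table, state values, and names are all hypothetical stand-ins for a learned model.

```python
# Hypothetical learned model: (state, action) -> (predicted next state, reward).
model = {
    ("s0", "a0"): ("s1", 0.0),
    ("s0", "a1"): ("s2", 1.0),
}
# Illustrative value estimates for each state.
V = {"s0": 0.0, "s1": 0.5, "s2": 0.2}
gamma = 0.9   # discount factor

def plan(state, actions):
    """Pick the action whose predicted next state and reward look best."""
    def backup(a):
        s_next, r = model[(state, a)]
        return r + gamma * V[s_next]
    return max(actions, key=backup)

best = plan("s0", ["a0", "a1"])
# "a1" wins: 1.0 + 0.9 * 0.2 = 1.18 beats 0.0 + 0.9 * 0.5 = 0.45
```

Interleaving this kind of lookahead with updates to `V` is the "planning interwoven with policy estimation" the paragraph above describes.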

Reinforcement learning aims to achieve a goal through the exploration of an agent in an unknown environment. A central hypothesis of RL states that goals can be described as the maximization of rewards. The agent must be able to derive the maximum reward through the perturbation of states in the form of actions. RL algorithms can be broadly classified into model based and model free.

Learning Models in Reinforcement Learning

1. Markov decision process

The set of parameters used in a Markov decision process are:

  • Set of actions: A
  • Set of states: S
  • Reward: R
  • Policy: π
  • Value: V

The Markov decision process is the mathematical approach for mapping a solution in reinforcement learning.
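The parameters listed above can be gathered into a plain data structure; the concrete states, actions, and reward numbers below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class MDP:
    """The parameter set of a Markov decision process."""
    states: set        # S
    actions: set       # A
    rewards: dict      # R: (state, action) -> reward
    gamma: float = 0.9 # discount applied when computing the value V

# A toy instance; a policy pi would map each state to an action (or to
# action probabilities), and V would be computed from rewards and gamma.
mdp = MDP(
    states={"s0", "s1"},
    actions={"a0", "a1"},
    rewards={("s0", "a0"): 1.0, ("s0", "a1"): 0.0},
)
```

Most RL algorithms can be read as operating on exactly this tuple: they observe S, choose from A, receive R, and improve π and V.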

2. Q learning

This process supplies information to the agent about which action to proceed with. It’s a form of model free approach. The Q values keep updating, each denoting the value of doing an action “a” in state “s”.
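A minimal tabular Q-learning sketch, on a hypothetical two-state chain where moving from state 0 with action 1 earns the only reward; the step function, epsilon-greedy exploration rate, and other numbers are assumptions for illustration.

```python
import random

random.seed(0)  # reproducibility of this sketch

n_states, n_actions = 2, 2
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Hypothetical environment: only (state 0, action 1) is rewarded."""
    if s == 0 and a == 1:
        return 1, 1.0   # reach state 1, earn a reward
    return 0, 0.0       # everything else resets to state 0

s = 0
for _ in range(500):
    # Epsilon-greedy: explore occasionally, otherwise act greedily on Q.
    if random.random() < eps:
        a = random.randrange(n_actions)
    else:
        a = max(range(n_actions), key=lambda a: Q[s][a])
    s_next, r = step(s, a)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max Q(s',·) - Q(s,a))
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    s = s_next
```

After training, Q[0][1] exceeds Q[0][0], so the greedy policy has learned to take the rewarded action in state 0.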

Difference between Reinforcement learning and Supervised learning

Supervised learning is a process of machine learning whereby a supervisor is required to feed knowledge into a learning algorithm. The main function of the supervisor includes the collection of the training data such as images, audio clips, etc.

Whereas in RL the training dataset mostly includes the set of situation, and actions. Reinforcement learning in machine learning doesn’t require any form of supervision. Also, the combination of reinforcement learning and deep learning produces the subfield deep reinforcement learning.

The key differences between RL and Supervised Learning are tabulated below.

| Reinforcement Learning | Supervised Learning |
| --- | --- |
| Decisions are made sequentially. The output depends on the state of the current input, and the next input depends on the output of the previous input, and so on. | The decision is made on the initial input, i.e. the input fed at the start of the process. |
| Decisions are dependent, so labeling is done for sequences of dependent decisions. | Decisions are independent of each other, so every decision is labeled individually. |
| Interaction with the environment occurs in RL. | No interaction with the environment; the process works on an existing dataset. |
| The decision-making process is similar to that of a human brain. | The decision-making process is similar to a human brain making decisions under the supervision of a guide. |
| No labeled dataset. | Labeled dataset. |
| Previous training is not required for the learning agent. | Previous training is provided for output prediction. |
| RL is best suited to AI applications where human-like interaction is prevalent. | Supervised learning is mostly used in applications and interactive software systems. |
| Example: a chess game | Example: object recognition |


Types of Reinforcement

There are two types of reinforcement learning:

1. Positive

Positive reinforcement learning is defined as an event, generated by a specific behavior, that increases the strength and frequency of that behavior. It impacts the agent positively: performance is maximized, and the changes are sustained for a longer period of time. However, over-optimization can overload the states and degrade the results, so positive reinforcement should not be applied in excess.

Advantages of positive reinforcement are:

  • Performance maximization.
  • Changes sustained for a longer period.

2. Negative

Negative reinforcement is defined as the strengthening of a behavior that occurs under a negative condition, when that condition is stopped or avoided. The minimum standard of performance is defined through negative reinforcement.

Advantages of negative reinforcement learning are:

  • Increases the frequency of the desired behavior.
  • Ensures a minimum standard of performance is met.

Disadvantage of negative reinforcement learning

  • It provides only enough to meet the minimum standard of behavior.

Challenges in Reinforcement Learning

Reinforcement learning, although it doesn’t require supervision of the model, is not a type of unsupervised learning; it is a distinct branch of machine learning.

A few challenges associated with reinforcement learning are:

  • Preparation of the simulation environment, which depends on the task to be performed. Creating a realistic simulator is challenging: the model has to capture every minute but important detail of the environment.
  • The involvement of feature and reward design is highly important.
  • The speed of learning may be affected by the parameters.
  • Transferring the model out of the simulation into the real environment.
  • Controlling the agent through neural networks is another challenge as the only communication with the neural networks is through the system of rewards and penalties.  Sometimes this may result in catastrophic forgetting i.e. deletion of old knowledge while gaining new knowledge.
  • Getting stuck in a local optimum rather than the global one is a challenge for reinforcement learning.
  • Under conditions of a real environment, partial observation might be present.
  • The application of reinforcement learning should be regulated. An excess amount of RL leads to the overloading of the states. This might lead to a diminishing of the results.
  • The real environments are non-stationary.

Applications of Reinforcement Learning

  • In the area of Robotics for industrial automation.
  • RL can be used in strategic planning of businesses.
  • RL can be used in data processing techniques involving machine learning algorithms.
  • It can be used for custom preparation of training materials for students as per their requirements.
  • RL can be applied in the control of aircraft and the motion of robots.

In large environments, reinforcement learning can be applied in the following situations:

  • If an analytic solution is not available for a known model of the environment.
  • If only a simulation model of the environment is provided.
  • When the only way to collect data is to interact with the environment.

What is the use of Reinforcement Learning?

  • Reinforcement Learning helps in identifying the situation that requires an action.
  • The application of RL helps in knowing which action is yielding the highest reward.
  • The usefulness of RL lies in providing the agent with a reward function.
  • Lastly, the RL helps in identifying the method leading to larger rewards.

Conclusion

RL cannot be applied to every situation, as its usage has certain limitations.

  • When enough labeled data is available, a supervised learning approach is preferable to an RL method.
  • The computation of RL is quite time-consuming, especially in cases where a large environment is considered.

If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s Executive PG Programme in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.

