Imagine waiting in line at a ticket counter: the first person in line gets served first. This is exactly how First Come First Serve (FCFS) scheduling works in computer systems.
Understanding First Come First Serve Scheduling is key to grasping core concepts of process management.
In this tutorial, you will learn its characteristics, advantages, disadvantages, and practical examples to master this basic concept. Let’s start by understanding the algorithm deeply.
Want to elevate your software development skills? Enroll in upGrad’s Online Software Engineering Courses today. Build a strong foundation and accelerate your career by mastering JavaScript, Node.js, APIs, React, and beyond.
First Come First Serve (FCFS) is a simple CPU scheduling algorithm used in operating systems. In this approach, processes are executed in the order they arrive in the ready queue. The first process that enters the queue is the first one to be executed by the CPU. FCFS is a non-preemptive scheduling algorithm, meaning that once a process starts executing, it continues to run until it completes or voluntarily releases the CPU.
Let's understand how FCFS scheduling works with an example:
Advance your career by mastering in-demand skills in Cloud, DevOps, AI, and Full Stack Development. Gain hands-on experience with real-world projects and learn directly from industry experts to excel in the job market.
Suppose we have three processes A, B, and C, each with its own burst time.
Assuming they arrive in the order A -> B -> C, the order of execution under FCFS is exactly that: A, then B, then C.
In this scenario, Process A, being the first to arrive, gets executed first, followed by B and then C.
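To make this concrete, here is a worked timeline with illustrative numbers (the burst times are assumptions for this sketch; they match the values used in the Java example later in this tutorial, though here we assume all three processes arrive at time 0 for simplicity): let A, B, and C have burst times of 8, 4, and 6 units. Under FCFS, A runs from 0 to 8, B from 8 to 12, and C from 12 to 18. The waiting times are therefore 0, 8, and 12 units, giving an average waiting time of (0 + 8 + 12) / 3 ≈ 6.67 units, and the turnaround times are 8, 12, and 18 units, for an average of (8 + 12 + 18) / 3 ≈ 12.67 units.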
Now, here's another example with different processes.
This time we have three processes X, Y, and Z, again each with its own burst time.
Assuming they arrive in the order X -> Y -> Z, the order of execution under FCFS is X, then Y, then Z.
In this example, Process X starts execution first as it arrived first. After Process X completes, Process Y follows, and finally, Process Z runs.
Also Read: Types of Memory in Computers and Their Uses with Examples
FCFS is a non-preemptive scheduling algorithm, which means that once a process starts executing, it continues to run until it finishes its burst time. Preemption is not supported in FCFS, so the currently executing process cannot be interrupted by a higher-priority process.
In a preemptive scheduling approach, the operating system can interrupt a running process and allocate the CPU to another process if a higher-priority process becomes available. This means that a running process may be temporarily paused to allow another process with higher priority to execute.
Suppose we have two processes, A and B, where B has a higher priority than A.
In preemptive scheduling, if Process B becomes ready while Process A is running, the operating system might decide to preempt (interrupt) Process A to allow Process B, with a higher priority, to execute. After a while, the operating system could resume executing Process A.
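As a concrete sketch with assumed numbers (chosen purely for illustration): suppose A starts at time 0 with a burst time of 10 units, and the higher-priority B arrives at time 3 with a burst time of 4 units. A preemptive priority scheduler would run A from 0 to 3, preempt it to run B from 3 to 7, and then resume A from 7 to 14.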
In FCFS, once a process starts execution, it continues until it completes or releases the CPU voluntarily. Other processes waiting in the ready queue have to wait for the currently executing process to finish before they can start execution.
In a non-preemptive scheduling approach, once a process starts executing, it runs until it completes its burst time or voluntarily releases the CPU. No process can be interrupted by the operating system until it finishes or enters a blocking state.
Let's use the same two processes, A and B, with the same burst times and priorities.
In a non-preemptive scheduling scenario, if Process B becomes ready while Process A is running, the operating system will let Process A continue until it completes its execution. Only after Process A finishes or enters a blocking state will Process B get a chance to execute.
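With the same illustrative numbers as before (A with a burst of 10 units starting at time 0, and the higher-priority B arriving at time 3 with a burst of 4 units), a non-preemptive scheduler such as FCFS would run A from 0 to 10 and only then run B from 10 to 14, even though B has the higher priority.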
Also Read: Structure of Operating System
The convoy effect is a phenomenon that can occur in FCFS scheduling. It refers to a situation where a long process occupies the CPU, forcing shorter processes to wait behind it. This can lead to poor CPU utilization and increased response times for other processes.
For example, consider a scenario where a long CPU-bound process occupies the CPU under FCFS. Shorter, I/O-bound processes queue up behind it, resulting in longer waiting times for those processes and idle I/O devices.
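A quick illustration with assumed numbers: suppose one long process with a burst time of 20 units arrives just ahead of three short processes with burst times of 1 unit each. Under FCFS the waiting times are 0, 20, 21, and 22 units, for an average of 15.75 units. If the three short processes had run first, the waiting times would be 0, 1, 2, and 3 units, for an average of just 1.5 units.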
The characteristics of FCFS CPU Process Scheduling are as follows:
- It is non-preemptive: a process keeps the CPU until it completes its burst or blocks.
- Processes are selected strictly in order of arrival, using a FIFO ready queue.
- It is simple to understand and implement, with very low scheduling overhead.
- Every process is guaranteed to run eventually, so there is no starvation.
- It ignores burst length and priority, so average waiting time can be high and the convoy effect can occur.
Also Read: Thrashing in Operating Systems: A Deep Dive
Here are the benefits of FCFS CPU Process Scheduling:
- Simplicity: the algorithm is easy to understand and implement.
- Fairness by arrival order: every process is served in the order it arrives, so no process starves.
- Low overhead: there are no priority calculations or time slices, and context switches happen only when a process finishes or blocks.
Here are some of the downsides of FCFS CPU Process Scheduling:
- High average waiting time, especially when long processes arrive before short ones (the convoy effect).
- No support for priorities: an urgent process must wait behind whatever arrived earlier.
- Poor responsiveness, which makes it unsuitable as the primary scheduler for interactive or time-sharing systems.
Also Read: System Calls in OS: Different Types Explained
Dijkstra proposed a solution to the problem of wasted wake-up signals by introducing Semaphores in operating systems. Semaphores are variables used to store wake-up calls from producers to consumers, preventing direct signaling and potential wastage. The producer stores wake-up calls in the semaphore variable, which any consumer can read and update atomically in kernel mode when needed.
The implementation of Semaphores in user mode is not feasible due to the possibility of race conditions when multiple processes try to access the variable simultaneously. Thus, Semaphores always require support from the operating system for proper functioning.
There are two categories of Semaphores based on their functionality: binary semaphores, which take only the values 0 and 1 and behave like a mutual-exclusion lock, and counting semaphores, which can take any non-negative integer value and are used to manage a pool of identical resources. A minimal Java sketch of the counting variant follows below.
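The sketch below is an illustration only (not the kernel-level mechanism described above), using java.util.concurrent.Semaphore: the producer stores wake-up calls with release(), and the consumer retrieves them with acquire(), blocking when none are available. The class and variable names are made up for this example.
Code:
import java.util.concurrent.Semaphore;

public class WakeUpDemo {
    // Counting semaphore that stores pending wake-up calls (starts empty).
    private static final Semaphore wakeUps = new Semaphore(0);

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                System.out.println("Producer: storing wake-up call " + i);
                wakeUps.release(); // record one wake-up call
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                try {
                    wakeUps.acquire(); // block until a wake-up call is available
                    System.out.println("Consumer: consumed wake-up call " + i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}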
AT: Arrival Time - The time at which a process arrives in the ready queue or becomes ready for execution.
BT: Burst Time - The amount of time a process requires to complete its execution, also known as the CPU time.
CPU: Central Processing Unit - The main processing component of a computer where calculations and data processing take place.
CT: Completion Time - The time at which a process finishes its execution.
FIFO: First In First Out - A queueing principle where the first item to enter the queue is the first one to be removed.
OS: Operating System - Software that manages computer hardware, software resources, and provides services for computer programs.
TAT: Turn Around Time - The total time taken for a process to complete its execution, from arrival to completion.
WT: Waiting Time - The total time a process spends waiting in the ready queue before it starts execution.
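Putting these abbreviations together, the standard relations used throughout FCFS examples are: Turnaround Time = Completion Time − Arrival Time (TAT = CT − AT), and Waiting Time = Turnaround Time − Burst Time (WT = TAT − BT). For instance, a process that arrives at time 1, completes at time 12, and needs 4 units of CPU has TAT = 12 − 1 = 11 and WT = 11 − 4 = 7.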
Also Read: Difference Between Process and Program
Here's an example of implementing the First Come First Serve CPU scheduling algorithm in Java:
Code:
import java.util.*;

class Process {
    String name;
    int arrivalTime;
    int burstTime;

    public Process(String name, int arrivalTime, int burstTime) {
        this.name = name;
        this.arrivalTime = arrivalTime;
        this.burstTime = burstTime;
    }
}

public class FCFSExample {
    public static void main(String[] args) {
        // Processes are listed in their order of arrival (FCFS serves them in this order).
        List<Process> processes = new ArrayList<>();
        processes.add(new Process("Process A", 0, 8));
        processes.add(new Process("Process B", 1, 4));
        processes.add(new Process("Process C", 2, 6));

        int currentTime = 0;
        for (Process process : processes) {
            // If the CPU is idle, jump ahead to the process's arrival time.
            if (process.arrivalTime > currentTime) {
                currentTime = process.arrivalTime;
            }
            System.out.println("Executing " + process.name + " from time " + currentTime + " to " + (currentTime + process.burstTime));
            currentTime += process.burstTime;
        }
    }
}
In the above code, we define a Process class with attributes for the process name, arrival time, and burst time. We then create a list of processes and simulate their execution using the FCFS scheduling algorithm. The processes are executed in the order they arrive in the list, and their execution times are printed.
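Running this program with the three processes above produces the following output:
Executing Process A from time 0 to 8
Executing Process B from time 8 to 12
Executing Process C from time 12 to 18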
The First Come First Serve (FCFS) scheduling algorithm is simple, fair, and processes tasks in the order they arrive, preventing starvation. This tutorial covered how the algorithm works, its characteristics, and its strengths and limitations, along with worked examples and a Java implementation. For a deeper understanding and practical mastery, explore upGrad's computer science courses to strengthen your skills and career prospects.
The First Come First Serve Scheduling algorithm, often abbreviated as FCFS, is the simplest CPU scheduling algorithm. In this non-preemptive algorithm, the process that requests the CPU first gets the CPU first. It operates on a first-in, first-out (FIFO) basis, where processes are executed in the exact order they arrive in the ready queue. This straightforward approach makes First Come First Serve easy to implement and understand.
You should avoid First Come First Serve Scheduling in scenarios where some processes are time-sensitive or have higher priority. This algorithm is particularly inefficient in systems with a mix of long and short-running processes, a phenomenon known as the convoy effect. If a long process arrives first, all subsequent short processes will have to wait for a significant amount of time, leading to a high average waiting time. In such cases, more advanced scheduling algorithms like Shortest Job First (SJF) or Round Robin are more suitable.
The average waiting time in First Come First Serve depends heavily on the arrival order of processes. If a few very long processes arrive before a large number of short processes, the average waiting time can be very high due to the convoy effect. Conversely, if shorter processes arrive first, the average waiting time may be lower. The predictability of waiting time is low, as it is entirely dependent on the specific sequence of arrivals.
Yes, the First Come First Serve Scheduling algorithm can be part of hybrid scheduling approaches. For example, in a multi-level feedback queue, FCFS might be used in one of the lower-priority queues to process background or batch jobs, while higher-priority queues use a different algorithm like Round Robin for interactive processes. This allows a system to benefit from the simplicity of First Come First Serve for certain workloads while providing better responsiveness for others.
The main disadvantage of the First Come First Serve Scheduling algorithm is its high average waiting time and a lack of consideration for process priority or execution time. This can lead to the convoy effect, where a single long-running process can hold up all subsequent processes, regardless of their length. This inefficiency makes First Come First Serve a poor choice for modern operating systems that require responsiveness and fairness, as shorter, high-priority tasks may be unnecessarily delayed.
The First Come First Serve Scheduling algorithm is a non-preemptive algorithm. This means that once a process is assigned to the CPU, it continues to execute until it completes its CPU burst or voluntarily gives up the CPU (e.g., for I/O). The CPU cannot be taken away from the currently running process, even if a higher-priority or shorter process arrives in the ready queue. This rigid nature is a key characteristic of First Come First Serve.
The First Come First Serve Scheduling algorithm uses a standard First-In, First-Out (FIFO) queue. When processes enter the ready state, they are placed at the back of the queue. The CPU dispatcher then selects the process at the front of the queue to execute. This simple queuing mechanism is at the very core of how First Come First Serve functions, ensuring that processes are always served in their order of arrival.
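As a minimal sketch (illustrative only, with hypothetical process names), such a FIFO ready queue can be modeled in Java with an ArrayDeque: processes are added at the tail as they arrive, and the dispatcher always takes the one at the head.
Code:
import java.util.ArrayDeque;
import java.util.Queue;

public class ReadyQueueSketch {
    public static void main(String[] args) {
        Queue<String> readyQueue = new ArrayDeque<>();

        // Processes enter the ready queue in arrival order.
        readyQueue.offer("P1");
        readyQueue.offer("P2");
        readyQueue.offer("P3");

        // The dispatcher always picks the process at the front of the queue.
        while (!readyQueue.isEmpty()) {
            String next = readyQueue.poll();
            System.out.println("Dispatching " + next);
        }
    }
}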
The 'convoy effect' is a major issue with First Come First Serve Scheduling. It occurs when a CPU-bound (long-running) process holds the CPU for a long time, forcing all other processes, especially I/O-bound (short) processes, to wait. This creates a "convoy" of waiting processes behind the long-running one, severely underutilizing other system resources and leading to very high average waiting times.
The throughput of the First Come First Serve Scheduling algorithm is generally lower compared to other algorithms like SJF, especially in environments with a mix of long and short processes. Throughput is the number of processes completed per unit of time. Because FCFS does not prioritize short jobs, it can take a long time to complete a given set of processes if a long one arrives early, thus reducing the overall throughput of the system.
A time quantum does not apply to the First Come First Serve Scheduling algorithm. A time quantum is a feature of preemptive algorithms, most notably Round Robin, where a process is allowed to run for a specific, limited time slice before being moved to the back of the ready queue. Since FCFS is a non-preemptive algorithm, a process runs to completion without any time limit, making the concept of a time quantum irrelevant.
While First Come First Serve Scheduling is generally not used as the primary scheduling algorithm in modern general-purpose operating systems, it is still used in specific scenarios. It is often a component of more complex, hybrid scheduling systems. For example, it might be used for batch processing or for managing processes within a specific priority queue, particularly for non-time-critical tasks, where its simplicity and predictable in-order behavior are sufficient.
The First Come First Serve Scheduling algorithm does not have a starvation problem. Starvation occurs when a process is repeatedly delayed and never gets to execute. Because FCFS services every process in the order of its arrival, every process is guaranteed to be eventually executed. No process can be indefinitely blocked, which is a key advantage of the algorithm's simple and fair queuing mechanism.
The turnaround time for a process in First Come First Serve is calculated as the time from its arrival until its completion. It is the sum of the process's waiting time and its CPU burst time. The average turnaround time for a set of processes is the average of all their individual turnaround times. Like average waiting time, it is heavily influenced by the order of arrival and can be high due to the convoy effect.
A simple real-world example of First Come First Serve Scheduling is a ticket queue. The person who gets in line first is the first person to be served. Other examples include a printer queue, where print jobs are processed in the order they were submitted, or a customer service call center, where calls are answered in the order they are received. These simple queues are a direct analogy to how First Come First Serve works in a computing context.
First Come First Serve generally performs worse than Shortest Job First (SJF) in terms of average waiting time and turnaround time. SJF prioritizes processes with the smallest CPU burst time, which significantly reduces the average waiting time for the entire set of processes. While FCFS is simpler to implement because it doesn't require knowing the future burst time of a process, SJF is almost always more efficient in terms of performance metrics.
The First Come First Serve Scheduling algorithm has very low overhead. Because it is so simple, it requires minimal CPU cycles for context switching and for making scheduling decisions. It simply takes the process from the front of the queue and lets it run. There is no complex calculation, priority check, or time slicing involved, making First Come First Serve an efficient choice from a resource management perspective, even if it is inefficient from a process waiting time perspective.
In a non-preemptive algorithm like First Come First Serve, the response time for a process is the same as its waiting time. Response time is the time from when a process arrives until the first time it gets the CPU. Because the First Come First Serve Scheduling algorithm does not preempt processes, a process does not start and stop its execution. It simply waits for its turn and then runs to completion, so the first time it gets the CPU is also the time it stops waiting.
Applications that are suitable for First Come First Serve Scheduling are typically batch processing jobs where no user interaction is required. Examples include tasks like generating reports, running nightly backups, or performing large data transfers. In these scenarios, the primary goal is to complete all tasks, and a user isn't waiting for a quick response, so the simplicity and fairness of First Come First Serve are acceptable, and the potential for long waiting times is not a major issue.
The First Come First Serve Scheduling algorithm does not handle process priorities. It treats every process equally, regardless of its importance or priority level. This is a significant limitation of the algorithm. If a high-priority, mission-critical process arrives but a low-priority, long-running process is already executing, the important process will have to wait, which can be detrimental to system performance and responsiveness.
You can learn more about the First Come First Serve Scheduling algorithm and other CPU scheduling techniques through online learning platforms. upGrad offers comprehensive programs in computer science and software engineering that cover operating systems and key concepts like First Come First Serve. These courses provide hands-on experience and a deeper understanding of how these algorithms work in real-world operating systems.