
Aggregation in MongoDB: Pipeline & Syntax

Last updated:
25th Jun, 2023

Introduction

MongoDB is a high-volume data storage medium. It acts as a non-relational database with document queries: the basic unit in MongoDB is a document, a set of key-value pairs, and documents are grouped into collections. It has been a very beneficial medium since its release in 2009.

Aggregation in MongoDB is a framework that allows us to perform various computational tasks on documents in one or more MongoDB collections. It is an effective way of generating reports or a handful of data metrics for interpretation from different documents. The framework is so named because it aggregates multiple documents to form united and combined results.

Aggregation in MongoDB primarily relies on the pipeline framework. The pipeline's basic underlying concept is that input is taken from a MongoDB collection, and the documents are passed through a series of stages to finally produce a unified output. This idea is very similar to the pipeline concept in Linux shells such as Bash.

What Are Aggregation Operations?

Aggregation in MongoDB processes numerous documents and produces computed results. Aggregation operations can be used to:

  • Combine values from various documents. 
  • Apply procedures to groups of data to produce a single result. 
  • Analyze how the data changes over time. 

To perform an aggregation in MongoDB, you can use either of the following: 

  • Aggregation pipelines, which are considered the recommended approach for performing aggregations. 
  • Single-purpose aggregation methods, which are simple and easy to follow but fall short of an aggregation pipeline's capabilities (the sketch below contrasts the two). 
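
For a quick feel of the difference, here is a minimal sketch. The orders collection and its status and amount fields are hypothetical, chosen only for illustration:

// Single-purpose method: counts matching documents, nothing more.
db.orders.countDocuments({ status: "shipped" })

// Aggregation pipeline: filters, then computes a total and an average
// over the same documents in a single pass.
db.orders.aggregate([
  { $match: { status: "shipped" } },
  { $group: { _id: null, total: { $sum: "$amount" }, avg: { $avg: "$amount" } } }
])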

Key Features of MongoDB

There are many reasons for which this database system is widely used. Some special features are mentioned below:

  • MongoDB, being a NoSQL database, is highly flexible to use. It is document-oriented.
  • Any field inside a document, including key-value pairs, can be indexed. This stands out as a very special feature of MongoDB.
  • MongoDB splits a large dataset into smaller instances through a concept called sharding. In this way, it can run over many servers while keeping the instances in balance.
  • Queries in MongoDB can return specific fields of a document.

Read: MongoDB Project Ideas & Topics

Why is Aggregation in MongoDB useful?

There are times when millions of documents, often with embedded sub-documents, need to be processed. Scanning them one by one can overflow the server stack and cause the process to terminate. This constraint prompted an enhancement of the scanning process: associating the documents and processing them together.

The aggregation operation was therefore designed to compute over the documents in different stages and return the cumulative effect as the result. This staged technique of result generation resolved the problem of handling a huge number of documents. Hence, the aggregation framework is essential.

This framework can perform many query operations on different documents simultaneously, and it closely resembles queries in relational databases (see the comparison sketch below).
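
To make the resemblance concrete, here is a rough sketch pairing a familiar SQL query with its pipeline equivalent. The customers collection and its zip field are illustrative; they also appear in the examples later in this article:

// SQL for comparison: SELECT zip, COUNT(*) AS count FROM customers GROUP BY zip;
db.customers.aggregate([
  { $group: { _id: "$zip", count: { $sum: 1 } } }
]);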


Check out: Most Common MongoDB Commands

What is the Aggregation Pipeline?

A pipeline is a sequence of stages, each designed to perform a separate task, that together achieve one unified goal. In MongoDB aggregation, this framework drives the computation and manipulates the documents. Many documents from a MongoDB collection are given as input, and a particular task is performed at each stage.

All the results are then collectively united, and cumulative metrics are calculated, which are shown as output. The output is quite similar to query output from relational databases: a stream of documents that can be processed further, for instance in report generation or website building.

So, each stage acts as a processing unit here. For every internal stage, the output of the previous stage acts as the input. Filters can also be added at the initial stage. The stages are parameterized: knobs are provided to tune their behaviour, and changing these parameters affects the results of that stage. In this way, a single generic stage can be adapted to the specific task one is interested in performing.

There can be situations when one may want to include the same type of stage multiple times in a particular pipeline. For example, a filter can be present at the start so that the entire collection does not pass through, while after some processing, another filter may be needed for a different criterion, as in the sketch below.
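
A minimal sketch of that two-filter pattern, assuming hypothetical customerId and orderAmount fields on the customers collection used later in this article:

db.customers.aggregate([
  // First filter: narrow the input early so later stages see fewer documents.
  { $match: { "zip": 700068 } },
  // Compute a per-customer total (field names are illustrative).
  { $group: { _id: "$customerId", totalSpent: { $sum: "$orderAmount" } } },
  // Second filter, on a field that only exists after the $group stage.
  { $match: { totalSpent: { $gte: 1000 } } }
]);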

Syntax

Aggregation queries are built in a specific format. The syntax and format of the code are shown below, with placeholders in angle brackets.

db.Collection_Name.aggregate([
  { $match: { <field>: <value> } },
  { $group: { _id: <expression>, <field>: { <accumulator>: <expression> } } },
  { $sort: { <field>: <1 or -1> } }
]);

Pipeline Commands

  • Structural Commands: Structural commands help organize the documents and make them suitable for data manipulation operations. Three structural commands are used very often.
  1. Matching: This is the filtering stage. It cuts out the documents that are not of interest, much as the WHERE clause does in SQL.
db.customers.aggregate([
  { $match: { "zip": 700068 } }
]);

The above piece of code returns, from the MongoDB collection, the documents of all the customers who live in the 700068 zip code.

2. Grouping: After filtering the documents, specific grouping is often needed. This makes it possible to form subsets of the whole collection. Documents can also be clustered by common properties, which helps perform the same operations on all of them together.

db.customers.aggregate([
  { $match: { "zip": 700068 } },
  {
    $group: {
      _id: null,
      count: { $sum: 1 }
    }
  }
]);

$group enables the clustering of the documents so transformation operations can be performed on each group. The _id field specifies the key to group by; setting it to null, as above, places all the matched documents in a single group.

3. Sort: This helps to sort the documents in ascending or descending order based on any specific query field.

db.customers.aggregate([
  { $match: { "zip": 700068 } },
  {
    $group: {
      _id: "$zip",
      count: { $sum: 1 }
    }
  },
  {
    $sort: { _id: -1 }
  }
]);

This sorts the grouped results by zip code in descending order. Note that after a $group stage only the fields produced by $group remain, so the group key here is "$zip" to keep the zip code available (as _id) for sorting.

  • Operational Commands: There are many operational commands in MongoDB aggregation that help perform the data tasks. Some of the most important commands are described below, and a sketch combining them follows the list:
  1. Summation ($sum): Returns the addition of all values from the documents.
  2. Maximum ($max): Outputs the maximum value of a particular variable from all documents.
  3. Minimum ($min): Returns the minimum value of a variable.
  4. Average ($avg): Calculates the mean of the values from each document.
  5. Push ($push): Appends a value to an array.
  6. First ($first): Returns the first value of a field from the documents in each group.
  7. Last ($last): Returns the last value of a field from the documents in each group.
  8. Adding to Set ($addToSet): Appends a value to an array of a document without duplicates.
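
A minimal sketch combining several of these accumulators in a single $group stage, again assuming a hypothetical orderAmount field on the customers documents:

db.customers.aggregate([
  {
    $group: {
      _id: "$zip",                            // one result document per zip code
      total: { $sum: "$orderAmount" },        // sum of all order amounts
      highest: { $max: "$orderAmount" },      // largest single order
      lowest: { $min: "$orderAmount" },       // smallest single order
      average: { $avg: "$orderAmount" },      // mean order amount
      all: { $push: "$orderAmount" },         // every amount, duplicates kept
      unique: { $addToSet: "$orderAmount" }   // every amount, duplicates removed
    }
  }
]);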


Aggregation in MongoDB: Stage Operators

Each stage begins with the stage operators, which are: 

  • $project: It is used to pick a subset of a collection’s fields. 
  • $sort: It reorders the documents, sorting them by one or more fields. 
  • $limit: It is used to pass the first n documents, restricting the total number that can be passed. 
  • $out: The $out parameter is used to write the results to a new collection.
  • $match: It is used to filter the documents, which can cut down on the number of documents provided as input in the following stage.
  • $group: This keyword is used to group documents according to a value. 
  • $skip: It skips the first n documents and passes the remaining ones through. 
  • $unwind: It deconstructs an array field in the documents, returning one document for each element of the array. 

Expressions: An expression refers to a field in the input documents by prefixing the field name with $. For example, in { $group: { _id: "$id", total: { $sum: "$fare" } } }, "$id" and "$fare" are expressions.
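
Here is a sketch stringing several of these stage operators together. It assumes the universities collection shown later in this article, where each document carries a students array of { year, number } sub-documents:

db.universities.aggregate([
  { $match: { country: "Spain" } },                // filter early to reduce input
  { $unwind: "$students" },                        // one output document per array element
  { $project: { _id: 0, name: 1, students: 1 } },  // keep only the needed fields
  { $sort: { "students.number": -1 } },            // largest enrolment first
  { $limit: 5 }                                    // pass through only the top five
]);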

Aggregation in MongoDB: Stage Limits

Aggregation runs in memory, and each stage may use at most 100 MB of RAM. If a stage exceeds this limit, the database raises an error. When the limit cannot be avoided, you can let stages spill to disk instead. 

The drawback is extra waiting, as working on disk takes longer than working in memory. To select the page-to-disk behaviour, you only need to set the allowDiskUse option to true: 

db.collectionName.aggregate(pipeline, { allowDiskUse : true })

Keep in mind that shared services may not always offer this option; the Atlas M0, M2, and M5 clusters, for instance, disable it. Also, each document produced by the aggregation query, whether returned through a cursor or written to another collection via $out, is capped at 16 MB. 

Results therefore cannot exceed the largest permitted size for a MongoDB document. If you anticipate approaching this limit, consume the aggregation query's result as a cursor rather than as a single document, as sketched below. 
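
In modern shells, aggregate() already hands back a cursor, so results can be iterated in batches rather than materialized as one oversized document. A sketch, assuming a previously built pipeline array:

const cursor = db.universities.aggregate(pipeline, { allowDiskUse: true });
cursor.forEach(doc => printjson(doc));   // stream results one document at a time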


Also Read: Future Scope of MongoDB

Example of Aggregate Grouping in MongoDB

MongoDB $match

In the $match stage, programmers select only the documents they are interested in from a grouping in MongoDB, eliminating the ones that do not fit the criteria. 

In the scenario that follows, we only intend to proceed with the documents where the field country has the value Spain and the field city has the value Salamanca. All the commands end with .pretty() to produce readable output. 

db.universities.aggregate([
  { $match : { country : 'Spain', city : 'Salamanca' } }
]).pretty()
The output is…
{
  "_id" : ObjectId("5b7d9d9efbc9884f689cdba9"),
  "country" : "Spain",
  "city" : "Salamanca",
  "name" : "USAL",
  "location" : {
    "type" : "Point",
    "coordinates" : [ -5.6722512, 40.9607792 ]
  },
  "students" : [
    { "year" : 2014, "number" : 24774 },
    { "year" : 2015, "number" : 23166 },
    { "year" : 2016, "number" : 21913 },
    { "year" : 2017, "number" : 21715 }
  ]
}
{
  "_id" : ObjectId("5b7d9d9efbc9884f689cdbaa"),
  "country" : "Spain",
  "city" : "Salamanca",
  "name" : "UPSA",
  "location" : {
    "type" : "Point",
    "coordinates" : [ -5.6691191, 40.9631732 ]
  },
  "students" : [
    { "year" : 2014, "number" : 4788 },
    { "year" : 2015, "number" : 4821 },
    { "year" : 2016, "number" : 6550 },
    { "year" : 2017, "number" : 6125 }
  ]
}
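
Building on this output, a natural next step for grouping would be to unwind the students array and total the enrolment per year across both universities. A hedged sketch using the fields visible above:

db.universities.aggregate([
  { $match: { country: 'Spain', city: 'Salamanca' } },
  { $unwind: '$students' },                         // one document per year entry
  { $group: {
      _id: '$students.year',                        // group by academic year
      totalStudents: { $sum: '$students.number' }   // USAL and UPSA combined
  } },
  { $sort: { _id: 1 } }                             // chronological order
]).pretty();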

Wrapping Up

In this era of Big Data, non-relational databases are very useful for handling large sample sets. Nowadays, the fields of data science and development are well accustomed to MongoDB, and the framework is usable from popular languages like Java, JavaScript, Python, and many others. Knowledge of MongoDB and a sound hand with its aggregation framework can make for the career of your dreams.


If you are interested to know more about Big Data, check out our Advanced Certificate Programme in Big Data from IIIT Bangalore.

The course will help you gain knowledge of data structures and algorithms, Java programming, database foundations, HTML, CSS, JavaScript, Angular, and object-oriented analysis and design.

More than 250 hours of online teaching, one-on-one sessions with industry experts, and much more are available in this course. In addition, the course is curated by subject-matter experts from upGrad, and you will be provided with placement opportunities at top IT companies, product-based companies, and start-ups.

Learn Software Development Courses online from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs or Masters Programs to fast-track your career.

Rohit Sharma
Blog Author
Rohit Sharma is the Program Director for the upGrad-IIIT Bangalore PG Diploma in Data Analytics Program.
Frequently Asked Questions (FAQs)

1. What are some examples of Big Data?

There are many examples of the sources of Big Data. These examples help put the scale of Big Data into perspective, i.e. how big Big Data actually is. Experts say that by the end of 2025, there will be almost 175 zettabytes of data in the world. If we wanted to download 175 zettabytes at the usual speed of a home internet connection, we would need nearly 1.8 billion years to do it! On average, each of us uses between 2 and 5 GB of internet data a month, which sums to an enormous amount if you think about it. Likewise, the fact that Amazon records nearly 283,000 USD worth of transactions every hour indicates how much data it can generate in a day.

2. Which companies use Big Data?

The infinite potential that lies within Big Data has prompted companies to delve into it and use it for business benefit. Using Big Data, organizations can accurately understand the specific needs of their customers, grasp market dynamics, foresee market trends, and make informed decisions quickly, which boosts their profits. Big Data is employed by top global organizations across different industries: from e-commerce giant Amazon and IT giant Google to Apple, Facebook, Spotify, American Express, Starbucks, and McDonald's, it has proven to be a game-changer for organizations worldwide.

3. Which is better for Big Data: Hadoop or MongoDB?

Both MongoDB and Hadoop are hugely popular options for handling Big Data and are employed by top organizations across the world. Although both are open-source, schema-free, and support NoSQL workloads, their data-handling models are entirely different. MongoDB is developed in C++ and offers excellent memory-handling capabilities, while Hadoop, developed in Java, is better at space optimization. However, Hadoop was built specifically for Big Data, while MongoDB was not made for the same purpose, so Hadoop offers better support for batch processing.
