Author DP

Reetesh Chandra

5+ articles published

Words Crafter / Idea Explorer / Insightful Critic

Domain:

upGrad

Current role in the industry:

Head of Program Design & Delivery for International (B2B) programs at upGrad

Educational Qualification:

5-year Integrated M.Sc in Statistics & Informatics from Indian Institute of Technology, Kharagpur

Expertise:

Talent Management

Skilling

Learning Management

Educational Technology

Certifications:

Passed CFA Level 1 examination (CFA Institute)

Ivy's Base SAS 9.1 Overview Workshop (Ivy Professional School)

NSE's Certification in Financial Markets: Derivatives Market

NSE's Certification in Financial Markets: Mutual Funds

About

Reetesh is the Project Manager of the Data Sciences Program at upGrad, where he manages the end-to-end student experience of the program.

Published

Most Popular

The What’s What of Keras and TensorFlow
24094 views

If you’ve been following the tech scene closely (or even remotely, for that matter), you must have heard the term “Deep Learning”. It’s a widely talked-about term – and rightly so. Deep Learning has revolutionized artificial intelligence by helping us build machines and systems that were only dreamt of in the past.

In essence, Deep Learning is a subset of Machine Learning that uses deep artificial neural networks (at this point, if you’re confused by what Neural Networks are, do check out our article on the same) to tackle the problems of Machine Learning. A Deep Neural Network is just a Neural Network with many layers stacked on top of each other – the greater the number of layers, the deeper the network.

The growing need for Deep Learning, and, consequently, for training Deep Neural Networks, gave rise to a number of libraries and frameworks dedicated to Deep Learning. In this blog post, we are going to talk about two such Deep Learning frameworks. By the end of it, you’ll have a much clearer understanding of what Keras is, what TensorFlow is, how the two differ, and whether they are similar in any aspect. But before that, we should briefly discuss the two, so that you know what you’re in for.

TensorFlow is the most widely used library for developing Deep Learning models.
The community around TensorFlow is extremely vast and supportive, especially because it’s an open-source platform. The number of commits and forks on TensorFlow’s GitHub repository is enough to convey the widespread popularity of the framework. However, it is not that easy to work with. Keras, on the other hand, is a high-level API that is built on top of TensorFlow. It is extremely user-friendly and comparatively easier to use than TensorFlow.

Reading the above might raise a few questions: If Keras is built on top of TensorFlow, what’s the difference between the two? And if Keras is more user-friendly, why should I ever use TensorFlow for building deep learning models? Through this article, let’s walk through the intricacies of both frameworks and answer these questions.

What is TensorFlow?

TensorFlow is Google’s gift to developers involved in Machine Learning. It makes the power of Deep Learning accessible to those in pursuit of it. Google has a beginner as well as an advanced tutorial, which introduce you to both ML and TensorFlow concurrently while solving a multi-feature problem: character recognition. Further, if you want to dive into even more technical aspects, we suggest you check out our courses on the same!

TensorFlow is an open-source library that’s available on GitHub. It is one of the better-known libraries when it comes to dealing with Deep Neural Networks. The primary reason behind TensorFlow’s popularity is the sheer ease of building and deploying applications with it.
The sample projects provided in the GitHub repository are not only powerful but also written in a beginner-friendly way.

So, what is TensorFlow used for? TensorFlow excels at numerical computing, which is critical for deep learning. It provides APIs in most major languages and environments needed for deep learning projects: Python, C, C++, Rust, Haskell, Go, Java, Android, iOS, macOS, Windows, Linux, and Raspberry Pi.

Moreover, TensorFlow was created with processing-power limitations in mind. This means we can run the library on all kinds of computers, irrespective of their processing power. It can even run on a smartphone (yes, even that overpriced thing you’re holding with a bitten apple on it).

TensorFlow is currently in v1.3 and runs on almost all major platforms used today, from mobiles to desktops, to embedded devices, to specialized workstations, to distributed clusters of servers in the cloud or on-premise. This pervasiveness, openness, and large community have pushed TensorFlow into the enterprise for solving real-world problems such as analyzing images, generating data, natural language processing, intelligent chatbots, robotics, and more. Interestingly, TensorFlow is being used by a wide array of coders to implement language translation and even early detection of skin cancer, among other use cases. It is truly changing the way developers interact with machine learning technology.

TensorFlow Applications

When it comes to Deep Learning, TensorFlow has gained much more momentum than its competitors – Caffe, Theano, Torch, and other well-known frameworks. TensorFlow is extensively used in voice recognition, text-based applications like Google Translate, image recognition, and video detection. Interestingly enough, NASA is developing a predictive model of Near Earth Objects (NEOs) with TensorFlow and Deep Learning.
According to the people at NASA, TensorFlow can help design a multilayer model that will be able to recognize and classify the potential of NEOs. TensorFlow is used by some of the biggest data companies in the world – the likes of Airbnb, Airbus, Dropbox, Snapchat, and Uber.

Some of the major applications of TensorFlow are: TensorFlow has been successfully implemented in DeepDream, the automated image captioning software. Google’s RankBrain, backed by TensorFlow, handles a substantial number of queries every minute and has effectively replaced the traditional static algorithm-based search. If you’ve used the Allo application, you must’ve seen a feature similar to Google’s Inbox – you can reply to the last message from a few customized options, all thanks to Machine Learning with TensorFlow. Another feature analyses the images sent to you in order to suggest a relevant response.

What is Keras?

Keras is a high-level library that’s built on top of Theano or TensorFlow. It provides a scikit-learn-style API (written in Python) for building Neural Networks. Developers can use Keras to quickly build neural networks without worrying about the mathematical aspects of tensor algebra, numerical techniques, and optimization methods.

The key idea behind the development of Keras is to facilitate experimentation through fast prototyping. The ability to go from an idea to a result with the least possible delay is key to good research. This offers a huge advantage to scientists and beginner developers alike, because they can dive right into Deep Learning without getting their hands dirty with low-level computations. The rise in the demand for Deep Learning has resulted in a rise in the demand for people skilled in it.
Every organization is trying to incorporate Deep Learning in one way or another, and Keras offers an API that is easy to use and intuitive to understand, which essentially helps you build and test Deep Learning applications with the least possible effort. This is good, because Deep Learning research is such a hot topic right now and scientists need a tool to try out their ideas without wasting time on putting together a Neural Network model.

Salient Features of Keras

Keras is a high-level interface and uses Theano or TensorFlow as its backend. It runs smoothly on both CPU and GPU. Keras supports almost all the building blocks of a neural network – fully connected, convolutional, pooling, recurrent, embedding layers, etc. Furthermore, these can be combined to build more complex models. Keras, being modular in nature, is incredibly expressive, flexible, and apt for innovative research. Keras is a completely Python-based framework, which makes it easy to debug and explore.

Keras vs TensorFlow: How do they compare?

Keras is a high-level neural networks library written in Python that works as a wrapper to low-level libraries like TensorFlow or Theano – which makes it extremely simple and intuitive to use. In that sense, the comparison doesn’t make much sense, because Keras itself uses TensorFlow as its backend. But, if we must, we must.

Keras is very simple to understand and implement – using Keras is much like playing with Lego blocks. It was built to help developers perform quick tests, POCs, and experiments before going full scale. Keras allows you to use TensorFlow in the backend while eliminating the need to learn it. Keras was developed with the objective of letting people write their own scripts without having to learn the backend in detail.
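That “Lego blocks” style of model-building can be imitated in a few lines of plain Python. The toy `Sequential` below is NOT the real Keras API – the class and the stand-in “layers” are invented purely to illustrate the stacking pattern:

```python
class Sequential:
    """A toy imitation of the Keras layer-stacking pattern (not the real API)."""
    def __init__(self, layers):
        self.layers = layers

    def predict(self, x):
        # Feed the input through each "layer" in order, like stacked Lego blocks.
        for layer in self.layers:
            x = layer(x)
        return x

# "Layers" here are just callables; real Keras layers also hold trainable weights.
double = lambda xs: [2 * v for v in xs]          # stand-in for a weighted layer
relu   = lambda xs: [max(0.0, v) for v in xs]    # a real activation function

model = Sequential([double, relu])
print(model.predict([-1.0, 3.0]))  # -> [0.0, 6.0]
```

In real Keras the same pattern reads almost identically – you list layers, then call the model – which is exactly why it feels so approachable.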
After all, most users wouldn’t bother about the performance of the scripts or the details of the algorithms. However, one size does not fit all when it comes to Machine Learning applications – the key difference between Keras and TensorFlow is that Keras won’t work if you need to make low-level changes to your model. For that, you need TensorFlow. Although TensorFlow is more difficult to pick up, once you get a hold of its syntax, you’ll be building your models in no time.

So, like everything, it all boils down to your requirements at hand. If you’re looking to fiddle around with Deep Neural Networks or just want to build a prototype, Keras is your calling. However, if you like to dive deep and take control of the low-level functionality, you should spend some time exploring TensorFlow.

Wrapping Up…

The world is swiftly moving towards automation, with Deep Learning taking control of everything. There’s no denying the fact that in the days to come, the use of Deep Neural Networks will only grow, and with that, the need for skilled people will grow, too. So, if you think Deep Learning is your calling, start by exploring either Keras or TensorFlow as soon as possible!
Boost your career with an advanced course in Machine Learning and AI with IIIT-B & Liverpool John Moores University.

by Reetesh Chandra

04 Apr 2019

A Beginners Guide to Edge Computing
7855 views

We can define edge computing as a distributed IT architecture that makes it possible to process data on the periphery – as close to the originating source as possible. If all this sounds like gibberish, hold on.

The past decade has seen tremendous growth in the number of internet-connected devices, which has given rise to a technology known as the Internet of Things (IoT). Simply put, IoT is the concept of inter-connecting various devices – anything with an on/off switch – to the internet. This includes everything from cell phones, coffee makers, fridges, washing machines, and wearable devices, to any device you can think of that connects to a network and transfers data seamlessly.

As IoT started gaining momentum, a problem arose – that of dealing with the data from these inter-connected devices. There’s no need to remind you that the data we’re talking about is terabytes in size. Traditionally, the data collected from these devices was sent to the organisation’s central cloud for processing. However, this was a rather time-consuming process, owing to the size of the data files. Transferring such large datasets over the network to a central cloud can also expose sensitive organisational data to vulnerabilities.

Edge computing came into the picture to tackle all this and more. Now, have a look at the first paragraph again and allow us to walk you through the definition slowly. The name ‘edge computing’ refers to computation around the corner/edge in a network diagram.
Edge computing pushes all the significant computational processing power towards the edges of the mesh – like we said earlier, as close to the originating device as possible. How does this help? Consider a smart traffic light. Instead of calling home whenever it needs data analysis, if the device is capable of performing analytics in-house, it can accomplish real-time analysis of streaming data and even communicate with other devices to finish tasks on the go. Edge computing, therefore, speeds up the entire analysis process, enabling quick decision-making.

Edge computing is also beneficial for organisations, as it helps them cut down the costs that were earlier incurred in transferring datasets over a network. Other than that, it also allows organisations to filter out the useful data at the device’s periphery itself – thereby letting them collect only valuable data and cut down on cloud computing and storage costs. Further, edge computing reduces response times to milliseconds, all the while conserving network resources.

Using edge computing, we don’t necessarily need to send all the data over a network. Instead, the local edge computing system is responsible for compiling the data and sending frequent reports to central cloud storage for long-term retention. Clearly, by only sending the essential data, edge computing drastically reduces the data that traverses the network.

The deployment of edge computing is ideal in a variety of situations. One such case is when the IoT devices have weak internet connectivity, and it’s not practical for them to be connected to a central cloud constantly.
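The “compile the data and send frequent reports” step can be sketched as a small summariser that runs on the device itself. The reading format and the alert threshold below are invented for illustration:

```python
def summarise_on_edge(readings, alert_threshold=90.0):
    """Runs on the device: keep anomalies, send only a compact summary upstream."""
    anomalies = [r for r in readings if r > alert_threshold]
    return {
        "count": len(readings),
        "average": sum(readings) / len(readings),
        "anomalies": anomalies,  # only the unusual raw values leave the device
    }

# A minute of raw sensor data stays local; the cloud receives just this dict.
report = summarise_on_edge([71.2, 70.8, 95.5, 72.0])
```

Instead of four raw readings (or, in practice, thousands), only the summary and the single anomalous value traverse the network.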
Another such situation is when there’s a requirement for latency-sensitive processing of data. Edge computing eliminates the latency factor, as the data does not need to be transferred over a network to central cloud storage for processing. This is ideal for financial or manufacturing services, where latencies of even milliseconds are challenging to achieve.

One more use case for edge computing has been the development of next-gen 5G cellular networks. Kelly Quinn, a research manager at IDC and an expert in edge computing, predicts that as telecom providers incorporate 5G into their wireless networks, they will start adding micro-data centres, either integrated into or located adjacent to 5G towers. Business customers would be able to own or rent space in these micro-data centres to perform edge computing, with direct access to a gateway into the telecom provider’s central network, which can be connected to a public IaaS cloud provider.

Let’s take a look at some other use cases of edge computing:

Drones – Drones are capable of reaching remote places that humans can’t even think of. Edge computing enables these drones to review, analyze, and respond to the analysis in real time.
For instance, if a drone finds an emergency situation, it can instantaneously provide valuable information to people nearby, without first having to send the data over a network and then receive the analysis.

Augmented Reality – The introduction of edge computing has taken Augmented Reality a step further. An edge computing platform can provide highly localised data targeted at a user’s point of interest, thereby enhancing AR services.

Automated vehicles – Giants like Google and Uber are coming up with self-driving cars, and edge computing plays a crucial role in their development. Using edge computing, these vehicles can process and transmit vital data in real time to other vehicles commuting nearby. These giants aim to make such self-driving cars a consumer reality by 2020. With the introduction of such automated vehicles, we’re sure to see a decrease in the number of lives lost to automobile accidents.

Having said all this, there are still some compromises and challenges that can’t be neglected when talking about edge computing. First of all, only a minute subset of the whole data is processed and analyzed at the edge; then, the analysis of this data is transmitted over the network. This means that we are effectively disregarding some of the raw, unanalyzed data, and potentially missing out on some insights. Again, an important question arises – how bearable is this “loss” of data? Does the organisation need the whole data, or is the generated result enough for them? Will missing out on some data negatively affect the organisation’s analysis?

There’s no correct answer to these questions. An aeroplane system can’t afford to miss any data, even a bit of it (no pun intended), so all of the data should be transferred and analyzed to detect trends and patterns. But transferring data during flight time is not a good idea.
So, a better approach would be to collect the data during the flight, perform edge computing in the air, and transfer the full dataset for deeper analysis later.

All in all, edge computing is not a panacea in the world of Information Technology. It is a relatively new technology that offers a host of benefits, but it’s still important to know whether it fits your organisation’s needs or not. The bottom line is that data is valuable: all data that can be analyzed should be analyzed, to detect patterns and gain insights. In today’s world, data-driven companies are making a lot more progress than traditional ones. Edge analytics is a new and exciting space, it offers an answer to the maintenance and usability of data, and we can expect to see many more exciting applications of it in the years to come.

by Reetesh Chandra

29 Mar 2018

Neural Networks: Applications in the Real World
19934 views

Neural Networks find extensive applications in areas where traditional computers don’t fare too well – for instance, problem statements where, instead of programmed outputs, you’d like the system to learn, adapt, and change its results in sync with the data you’re throwing at it. Neural networks also find rigorous application whenever we talk about dealing with noisy or incomplete data. And honestly, most of the data out there is indeed noisy.

With their brain-like ability to learn and adapt, Neural Networks form the very basis of Artificial Intelligence and, consequently, of Machine Learning algorithms. Before we get to how Neural Networks power Artificial Intelligence, let’s first talk a bit about what exactly Artificial Intelligence is.

For the longest time, the word “intelligence” was associated only with the human brain. But then, something happened! Scientists found a way of training computers by following the methodology our brain uses. Thus came Artificial Intelligence, which can essentially be defined as intelligence originating from machines. To put it even more simply, Machine Learning is providing machines with the ability to “think”, “learn”, and “adapt”.

With so much said and done, it’s imperative to understand what exactly the use cases of AI are, and how Neural Networks help the cause.
Let’s dive into the applications of Neural Networks across various domains – from social media and online shopping to personal finance and, finally, the smart assistant on your phone. You should remember that this list is in no way exhaustive, as the applications of neural networks are widespread. Basically, anything that makes a machine learn is deploying one or another type of neural network.

Social Media

The ever-increasing data deluge surrounding social media gives the creators of these platforms a unique opportunity to dabble with the unlimited data they have. No wonder you get to see a new feature every fortnight. It’s only fair to say that all of this would’ve been like a distant dream without Neural Networks to save the day. Neural Networks and their learning algorithms find extensive applications in the world of social media. Let’s see how:

Facebook

As soon as you upload a photo to Facebook, the service automatically highlights faces and prompts you to tag your friends. How does it instantly identify which of your friends is in the photo? The answer is simple – Artificial Intelligence. In a video highlighting Facebook’s Artificial Intelligence research, they discuss the applications of Neural Networks to power their facial recognition software. Facebook is investing heavily in this area, not only within the organization, but also through acquisitions of facial-recognition startups like Face.com (acquired in 2012 for a rumored $60M), Masquerade (acquired in 2016 for an undisclosed sum), and FacioMetrics (acquired in 2016 for an undisclosed sum). In June 2016, Facebook announced a new Artificial Intelligence initiative that uses various deep neural networks, such as DeepText – an artificial intelligence engine that can understand the textual content of thousands of posts per second with near-human accuracy.
Instagram

Instagram, acquired by Facebook back in 2012, uses deep learning – making use of recurrent neural networks – to identify the contextual meaning of an emoji, which has been steadily replacing slang (for instance, a laughing emoji could replace “rofl”). By algorithmically identifying the sentiments behind emojis, Instagram creates and auto-suggests emojis and emoji-related hashtags. This may seem like a minor application of AI, but being able to interpret and analyze this emoji-to-text translation at a larger scale sets the basis for further analysis of how people use Instagram.

Pinterest

Pinterest uses computer vision – another application of neural networks, where we teach computers to “see” like a human – in order to automatically identify objects in images (or “pins”, as they call them) and then recommend visually similar pins. Other applications of neural networks at Pinterest include spam prevention, search and discovery, ad performance and monetization, and email marketing.

Online Shopping

Do you find yourself in situations where you set out to buy one thing but end up buying a lot more than planned, thanks to some super-awesome recommendations? Yeah, blame neural networks for that. By making use of neural networks and their learnings, the e-commerce giants are creating Artificial Intelligence systems that know you better than you know yourself. Let’s see how:

Search

Your Amazon searches (“earphones”, “pizza stone”, “laptop charger”, etc.) return a list of the most relevant products related to your search, without wasting much time. In a description of its product search technology, Amazon states that its algorithms learn automatically to combine multiple relevant features. It uses past patterns and adapts to what is important for the customer in question. And what makes the algorithms “learn”? You guessed it right – Neural Networks!
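The “learn from past patterns” idea behind such search ranking can be sketched in miniature. This is not a neural network – just the counting intuition underneath it – and the queries, products, and click data are all invented for illustration:

```python
from collections import Counter

# Toy "learning to rank": count which product got clicked for each past query,
# then rank candidates for a new query by those learned click counts.
past_clicks = [
    ("earphones", "bass earphones"),
    ("earphones", "bass earphones"),
    ("earphones", "wired earphones"),
]

clicks = Counter(past_clicks)  # (query, product) -> how often it was chosen

def rank(query, candidates):
    """Order candidates so the historically most-clicked product comes first."""
    return sorted(candidates, key=lambda product: -clicks[(query, product)])

best = rank("earphones", ["wired earphones", "bass earphones"])[0]
```

A real system replaces the raw counts with features fed into a trained model, but the principle – past behaviour shapes future ranking – is the same.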
Recommendations

Amazon shows you recommendations via “customers who viewed this item also viewed” and “customers who bought this item also bought”, as well as through curated recommendations on your homepage, at the bottom of item pages, and in emails. Amazon makes use of Artificial Neural Networks to train its algorithms to learn the patterns and behaviour of its users. This, in turn, helps Amazon provide even better, customized recommendations.

Banking/Personal Finance

Cheque Deposits Through Mobile

Most large banks are eliminating the need for customers to physically deliver a cheque to the bank by offering the ability to deposit cheques through a smartphone application. The technologies that power these applications use Neural Networks to decipher and convert the handwriting on cheques into text. Essentially, Neural Networks find themselves at the core of any application that requires handwriting, speech, or image recognition.

Fraud Prevention

How can a financial institution determine a fraudulent transaction? Most of the time, the daily transaction volume is far too large to be reviewed manually. To help with this, Artificial Intelligence is used to create systems that learn, through training, what types of transactions are fraudulent (and where there’s learning, there are Neural Networks!). FICO – the company that creates the credit ratings used to determine creditworthiness – makes use of neural networks to power its Artificial Intelligence for predicting fraudulent transactions. Factors that affect the neural network’s final output include the frequency and size of the transaction and the kind of retailer involved.

Powering Your Mobile Phones

Voice-to-Text

One of the more common features on smartphones today is voice-to-text conversion. Simply pressing a button or saying a particular phrase (“Ok Google”, for example) lets you start speaking to your phone, and your phone converts the audio into text.
Google makes use of recurrent neural networks to power voice search. Microsoft also claims to have developed a speech-recognition system – using Neural Networks – that can transcribe conversations slightly more accurately than humans.

Smart Personal Assistants

With voice-to-text technology becoming accurate enough to rely on for basic conversations, it is turning into the control interface for a new generation of personal assistants. Initially, there were simpler phone assistants – Siri and Google Now (since succeeded by the more sophisticated Google Assistant) – which could perform internet searches, set reminders, and integrate with your calendar. Amazon expanded upon this model with the announcement of complementary hardware and software components – Alexa, and Echo (later, Dot).

To Wrap Up…

We’ve only scratched the surface when it comes to the applications of neural networks in day-to-day life. Specific industries and domains have specific interactions with Artificial Intelligence, making use of neural networks in ways far beyond what’s talked about in this article.
For example, chess players regularly use chess engines to analyze their games, improve themselves, and practice new tactics – and it goes without saying that the chess engine in question deploys Neural Networks to accomplish the learning. Do you have any other interesting real-life use cases of Neural Networks that we might have missed? Drop them in the comments below!

by Reetesh Chandra

06 Feb 2018

Neural Networks for Dummies: A Comprehensive Guide
10880 views

Our brain is an incredible pattern-recognizing machine. It processes ‘inputs’ from the outside world, categorizes them (that’s a dog; that’s a slice of pizza; ooh, that’s a bus coming towards me!), and then generates an ‘output’ (petting the dog; the yummy taste of that pizza; getting out of the way of the bus!).

All of this happens with little conscious effort, almost impulsively. It’s the very same system that senses if someone is mad at us, or involuntarily notices the stop signal as we speed past it. Psychologists call this mode of thinking ‘System 1’, and it includes the innate skills – like perception and fear – that we share with other animals. (There’s also a ‘System 2’; to know more about it, check out the extremely informative Thinking, Fast and Slow by Daniel Kahneman.)

How is all of this related to Neural Networks, you ask? Wait, we’ll get there in a second. Look at the image above – just your regular numbers, distorted to help explain the learning of Neural Networks better. Even at a cursory glance, your mind will prompt you with the words “192”. You surely didn’t go “Ah, that seems like a straight line, I think it’s a 1”. You didn’t compute it – it happened instantly.

Fascinating, right?
There is a very simple reason for this: you’ve come across these digits so many times in your life that, by trial and error, your brain automatically recognizes a digit even if you present it with something only remotely close to it. Let’s cut to the chase. What exactly is a Neural Network? How does it work? By definition, a neural network is a system of hardware or software patterned after the working of neurons in the human brain. Basically, it helps computers think and learn like humans. An example will make this clearer: as a child, if we ever touched a hot coffee mug and it burnt us, we made sure never to touch a hot mug again. But did we have any such concept of hurt in our consciousness BEFORE we touched it? Not really. This adjustment of our knowledge and understanding of the world around us is based on recognizing patterns. And, like us, computers, too, learn through the same kind of pattern recognition. This learning forms the whole basis of the working of neural networks. Traditional computer programs work on logic trees: if A happens, then B happens. All the potential outcomes of such a system can be preprogrammed. However, this eliminates any scope for flexibility; there’s no learning there. And that’s where Neural Networks come into the picture! A neural network is built without any specific logic. Essentially, it is a system that is trained to look for, and adapt to, patterns within data. It is modeled after how our own brain works. Each neuron (idea) is connected to others via synapses. Each synapse has a value that represents the likelihood of the connection between two neurons occurring. Take a look at the image below. What exactly are neurons, you ask? Simply put, a neuron is just a singular concept. A mug, the colour white, tea, the burning sensation of touching a hot mug: basically anything can be a neuron.
All of them can be connected, and the strength of their connection is decided by the value of their synapse: the higher the value, the stronger the connection. Let’s look at one basic neural network connection to understand this better. Each neuron is a node, and the lines connecting them are synapses. A synapse’s value represents the likelihood that one neuron will be found alongside the other. So, it’s pretty clear that the diagram in the image above describes a mug containing coffee, which is white in colour and extremely hot. Not all mugs have the properties of the one in question, and we can connect many other neurons to the mug. Tea, for example, is likely more common than coffee. The likelihood of two neurons being connected is determined by the strength of the synapse connecting them: the greater the number of hot mugs we encounter, the stronger the mug–hot synapse. However, in a world where mugs are not used to hold hot beverages, the number of hot mugs would decrease drastically. Incidentally, this decrease would also lower the strength of the synapse connecting mugs to heat, so the first diagram becomes the second. This small and seemingly unimportant description of a mug represents the core construction of neural networks. We touch a mug kept on a table and find that it’s hot; it makes us think all mugs are hot. Then we touch another mug, this time one kept on the shelf, and it’s not hot at all. We conclude that mugs on the shelf aren’t hot. As we grow, we evolve; our brain has been taking in data all this time. This data lets it determine an accurate probability as to whether or not the mug we’re about to touch will be hot. Neural Networks learn in the exact same way. Now, let’s talk a bit about the first and most basic model of a neural network: the Perceptron! What is a Perceptron? A perceptron is the most basic model of a neural network. It takes multiple binary inputs: x1, x2, …, and produces a single binary output.
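The strengthening and weakening of synapses described above can be sketched as a toy co-occurrence counter. This is purely an illustration of the idea, not a real neural network library; the concept names, the `observe` helper, and the update rule are all made up for this example:

```python
from collections import defaultdict

# Synapse "values" stored as simple co-occurrence counts between concepts.
synapses = defaultdict(int)

def observe(neuron_a, neuron_b, delta=1):
    """Adjust the connection strength between two 'neurons' (concepts)."""
    key = tuple(sorted((neuron_a, neuron_b)))  # order-independent pairing
    synapses[key] += delta

# We keep running into hot mugs, so the mug-hot synapse strengthens...
for _ in range(5):
    observe("mug", "hot")

# ...then a couple of cold mugs on the shelf weaken the association.
for _ in range(2):
    observe("mug", "hot", delta=-1)

print(synapses[("hot", "mug")])  # 3: the association survives, but weaker
```

Real networks replace these raw counts with learned weights adjusted by a training algorithm, but the intuition is the same: repeated co-occurrence strengthens a connection, and counter-examples weaken it.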
Let’s understand the above neural network better with the help of an analogy. Say you walk to work. Your decision to go to work is based majorly on two factors: the weather, and whether it is a weekday or not. The weather factor is still manageable, but working on weekends is a big no! Since we have to work with binary inputs, let’s frame the conditions as yes/no questions. Is the weather fine? 1 for yes, 0 for no. Is it a weekday? 1 for yes, 0 for no. Remember, we cannot explicitly tell the neural network these conditions; it has to learn them for itself. How will it decide the priority of these factors while making a decision? By using something known as “weights”. Weights are just a numerical representation of the preferences: a higher weight makes the neural network consider that input at a higher priority than the others. This is represented by the w1, w2, … in the flowchart above. “Okay, this is all pretty fascinating, but where do Neural Networks find work in a practical scenario?” Real-life applications of Neural Networks If you haven’t yet figured it out, here it is: a neural network can do pretty much anything, as long as you’re able to get enough data and an efficient enough machine to find the right parameters. Anything that even remotely involves machine learning turns to neural networks for help. Deep learning is another domain that makes extensive use of neural networks. It is one of the many machine learning approaches that enable a computer to perform a plethora of tasks such as classification, clustering, or prediction. With the help of neural networks, we can find solutions to problems for which a traditional algorithmic method is expensive or does not exist. Neural networks can learn by example, so we do not need to program them to a large extent. They are accurate and significantly faster than conventional methods.
Because of the reasons mentioned above and more, Deep Learning, by making use of Neural Networks, finds extensive use in the following areas:

Speech recognition: Take the example of the Amazon Echo Dot, the magic speaker that lets you order food, get news and weather updates, or simply buy something online just by talking it out.

Handwriting recognition: Neural networks can be trained to understand the patterns in somebody’s handwriting. Have a look at Google’s Handwriting Input application, which uses handwriting recognition to seamlessly convert your scribbles into meaningful text.

Face recognition: From improving the security on your phone (Face ID) to the super-cool Snapchat filters, face recognition is everywhere. If you’ve ever uploaded a photo on Facebook and were asked to tag the people in it, you know what face recognition is!

Artificial intelligence in games: If you’ve ever played chess against a computer, you already know how artificial intelligence powers games and game development, to the extent that players use AI to improve their tactics and try out their strategies first-hand.

In Conclusion… Neural networks form the backbone of almost every big technology or invention you see today.
It’s only fair to say that imagining deep/machine learning without neural networks is next to impossible. Depending on the way you implement a network and the kind of learning you put to use, you can achieve a lot more with a neural network than with a traditional computer system. Learn ML courses from the World’s top Universities. Earn Masters, Executive PGP, or Advanced Certificate Programs to fast-track your career.

by Reetesh Chandra


06 Feb 2018

Building the Largest Online Program Content in India

It was a mildly hot Saturday afternoon in late March 2016 when my phone buzzed. It was a LinkedIn notification: Mayank Kumar, Co-Founder and CEO of UpGrad, had viewed my profile. Half an hour later, the phone buzzed again; Shehzia, the Program Director of UpGrad’s Data Analytics program, had viewed my profile. She also left a message saying they were looking for someone with a Statistics background for the Program Associate position on the Data Analytics program, and that I should call her back if interested. After contemplating the ideal amount of time I should wait before calling back, I called her after about 40 minutes. The call lasted for about an hour. The next afternoon, I had an even longer call with the Program Manager for the Data Analytics program. On Monday, I received a call from UpGrad HR with a job offer. About a month before this story unfolded, the results of the UPSC Civil Services had been released. It was my third attempt, and just as in the previous two, I had not made it into the coveted service. After some serious introspection about where my career was going, I decided it was time to pull the plug on my IAS dream and get back to where my actual skills and knowledge lay: Statistics and Data Analytics. Journey from Traditional Corporate Roles to an EdTech Startup: Ritesh Malhotra However, the road was never going to be smooth. I had lost touch with the subject over the 3 years of prepping for the IAS, and the startup boom had completely transformed the career landscape in the Data Analytics field. New tools had emerged and gained traction in the market. So I needed some time to get my ‘mojo’ back, revisit all the old concepts, and learn new tricks of the trade. So, when I got this offer from UpGrad, I gladly accepted it. It was tailor-made for my situation: I would be revisiting and learning Data Analytics on the job while getting paid for it! I couldn’t ask for a better bargain.
So, after relocating from Patna to Mumbai on short notice, I joined UpGrad in early April 2016. In the beginning, it was an overwhelming experience, and I am not even talking about the humidity of early April in Mumbai! On just my third day, I was in a review session with the three co-founders and the media mogul Ronnie Screwvala. In this session, the team was to look at the initial few sessions of the Data Analytics program and provide feedback. It was in that meeting that I realised this startup was not just another startup, and the primary reason was the top management’s attention to detail for the last-mile student experience. In the field of online education, where there’s no dearth of free courses on every subject under the sun, an excellent learner experience was what was going to separate UpGrad from the rest. The Idea Called UpGrad: Why Education is Serious Business Another key quality I noticed in the top management was that they were willing to get their hands dirty with actual content-creation work. I have had the opportunity to work with the co-founders on topics ranging from Hypothesis Testing to K-Means and Hierarchical Clustering. The CEOs and CTOs of the company did not shy away from the grunt work: getting deep into the concepts, writing scripts, creating assessment questions, conceptualising animation and graphics, and even answering student queries. However, the initial months at UpGrad were tough. The Data Analytics program was scheduled to launch towards the end of May, but the content of the initial parts was not yet ready. So it was a tight race against time, and it took a substantial number of all-nighters and working weekends to launch the program on schedule. But what made those nights and weekends memorable was the collaborative effort put in by different teams at UpGrad.
We had the production team working with us on the video components, while the Tech team fast-tracked their work on the Learning Management System. Even the finance team was generous with the reimbursements for our late-night pizza parties. Finally, through this combined effort, we managed to launch the program on time. Joining a product team at such an early stage has its own advantages. First, you get the satisfaction of having built something successful right from scratch. Second, you get a 360-degree view of all aspects of the product, ranging from sales and marketing to finalising student selection criteria, onboarding industry experts and subject matter experts (SMEs) for the program, and assisting in B2B collaborations. Just as with any other product, what matters in the end is the feedback from our learners. While we initially received a fair share of brickbats from learners about exceeding the promised learner time required per week, there is nothing more gratifying than the content you created getting a high student rating, or learners appreciating your content on their WhatsApp group (yes, we have sources in that group!). However, as a team, what we place a premium on is whether the learner is actually gaining from the program and whether it is bringing a tangible impact on their career prospects. Thus, instead of high student feedback ratings, it’s the news of our learners’ career transitions that really gets us partying. Start-Up Founders Listen Up! Freedom at Work is the Key to Success The other aspect to which we give importance is perfection in execution. The sheer volume of content we churn out on a weekly basis is enormous, and we strive for a zero-error count. In fact, there are internal Slack channels where the whole of UpGrad gets notified whenever a learner raises a content-related concern.
Well, this ensures two objectives are met: the student’s query gets resolved on a priority basis, and if the issue raised is genuine, the content creator becomes extra cautious from then on. From personal experience, I can tell you that it’s always better to ensure zero errors in your content than to be embarrassed on public Slack channels! We have now launched the 5th batch of the newly termed ‘Data Science program’, and it has become the largest online program in India. So, what did I gain from this program? Well, as I mentioned before, I wanted a role where I could learn on the job, and at UpGrad, I never stopped learning. I always get to learn about new topics and new tools, work on new programs, work and network with top industry SMEs, and get paid for all of it! Want to be part of a similar journey, one that is exciting and challenging at the same time? Feel free to apply for roles in the content team by sending your CV to onkar.shaligram@upgrad.com or hema.negi@upgrad.com

by Reetesh Chandra


11 Dec 2017
