Vectorization and Broadcasting are Numpy techniques that speed up computation and optimize memory usage in mathematical operations. They are crucial for keeping running time low so that algorithms don't hit performance bottlenecks, and such optimized operations are necessary for applications to scale. We'll go over both techniques and implement some examples.

By the end of this tutorial, you will know the following:

- How Vectorization is handled by Numpy
- Time differences with and without Vectorization
- What Broadcasting is
- How Broadcasting is different from usual Matrix multiplication

**Vectorization**

A lot of times we need mathematical operations on arrays, such as array multiplication. A non-vectorized way would be to multiply element by element using a loop. Implemented this way, the interpreter executes one multiplication at a time, which wastes compute time when the data size is huge. Let's take a quick look.

Non vectorized way:

```python
import random

a = [random.randint(1, 100) for _ in range(10000)]
b = [random.randint(1, 100) for _ in range(10000)]

# Multiply element by element with a Python-level loop
%timeit [a[i] * b[i] for i in range(10000)]
```

Output:

```
1000 loops, best of 3: 658 µs per loop
```

Vectorized way:

```python
import random

import numpy as np

a = np.array([random.randint(1, 100) for _ in range(10000)])
b = np.array([random.randint(1, 100) for _ in range(10000)])

# Numpy multiplies the complete arrays at once
%timeit a * b
```

Output:

```
100000 loops, best of 3: 7.25 µs per loop
```

As we see, the time elapsed went from 658 microseconds to just 7.25 microseconds. This is because when we write a = np.array([...]), Numpy stores the data in a typed, contiguous array and handles all operations internally. When we then do a*b, Numpy multiplies the complete arrays at once in compiled code by means of vectorization.

Here we use the %timeit magic command (available in IPython and Jupyter) to time the execution; the exact numbers will differ on your machine.
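If you are running plain Python rather than a notebook, the %timeit magic isn't available. The same comparison can be sketched with the standard-library timeit module; the run count and variable names below are just illustrative choices.

```python
import random
import timeit

import numpy as np

a_list = [random.randint(1, 100) for _ in range(10000)]
b_list = [random.randint(1, 100) for _ in range(10000)]
a_arr = np.array(a_list)
b_arr = np.array(b_list)

# Time the element-wise Python loop versus the vectorized product
loop_time = timeit.timeit(
    lambda: [x * y for x, y in zip(a_list, b_list)], number=100
)
vec_time = timeit.timeit(lambda: a_arr * b_arr, number=100)

print(f"loop:       {loop_time:.4f}s for 100 runs")
print(f"vectorized: {vec_time:.4f}s for 100 runs")
```

On any recent machine the vectorized product should come out well ahead for arrays of this size.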

Let's take a look at another example: the outer product of 2 vectors with dimensions (n x 1) and (1 x m). The output will be (n x m).

```python
import array
import random
import time

import numpy

a = array.array('i', [random.randint(1, 100) for _ in range(100)])
b = array.array('i', [random.randint(1, 100) for _ in range(100)])
```

```python
T1 = time.process_time()
c = numpy.zeros((len(a), len(b)))
for i in range(len(a)):
    for j in range(len(b)):
        c[i][j] = a[i] * b[j]
T2 = time.process_time()
print(f"Computation time = {1000 * (T2 - T1)}ms")
```

Output:

```
Computation time = 6.819299000000001ms
```

Now, let’s do it with Numpy,

```python
T1 = time.process_time()
c = numpy.outer(a, b)
T2 = time.process_time()
print(f"Computation time = {1000 * (T2 - T1)}ms")
```

Output:

```
Computation time = 0.2256630000001536ms
```

As we see again, Numpy performs the same operation far faster through vectorization.


**Broadcasting**

So up till now, we saw examples where arrays of the same size were used. What if the sizes of the arrays are different? Here's where another great Numpy feature, Broadcasting, comes into the picture.

Broadcasting is an extension of vectorization where the arrays need not be of the same size for operations like addition, subtraction, and multiplication to be performed on them. Let's understand this with a very simple example: adding a scalar to an array.

```python
a = np.array([1, 1, 1, 1])
a + 5
```

Output:

```
array([6, 6, 6, 6])
```

As we see, the scalar 5 got added to all the elements. So how did it happen?

To picture the process, you can imagine the scalar 5 being repeated 4 times to make an array, which is then added to the array a. But keep in mind, Numpy doesn't actually create any such array, which would only take up memory. Numpy just "broadcasts" the single scalar 5 across all 4 positions while performing the addition.
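You can see that no 4-element temporary is materialized by using np.broadcast_to, which returns a read-only view whose stride is 0, meaning every position refers to the same scalar in memory. A small sketch:

```python
import numpy as np

a = np.array([1, 1, 1, 1])

# Broadcast the scalar 5 to shape (4,) without copying it 4 times
five = np.broadcast_to(np.array(5), (4,))

print(five)          # [5 5 5 5]
print(five.strides)  # (0,) -- zero stride: one scalar reused, no copy
print(a + five)      # [6 6 6 6], same result as a + 5
```

The zero stride is exactly how Numpy implements broadcasting internally: it steps through the same memory location instead of allocating a repeated array.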

Let’s take another easy example.

```python
a = np.ones((3, 3))
b = np.ones(3)
a + b
```

Output:

```
array([[2., 2., 2.],
       [2., 2., 2.],
       [2., 2., 2.]])
```

In the above example, the array b of shape (3,) is treated as shape (1, 3) and then broadcast to (3, 3) to match array a.

But does this mean that any array with any dimension can be broadcasted to match an array with any dimension?

NO!

**Broadcasting rules**

Numpy follows a set of simple rules to make sure only arrays that meet the criteria are broadcast. Let's take a look.

The rule of broadcasting says that when 2 arrays are operated on together, Numpy compares their shapes dimension by dimension, starting from the trailing (rightmost) dimension. Two dimensions are compatible when they are equal, or when one of them is 1.

Let's see this in action.

**Example 1:**

Consider the below arrays of dimensions:

a = 3 x 4 x 7

b = 3 x 4 x 1

Here, b's last dimension will be broadcast to 7 to match that of a.

Hence, result = 3 x 4 x 7
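Example 1 can be verified directly in Numpy; np.ones here is just placeholder data to give the arrays the right shapes:

```python
import numpy as np

a = np.ones((3, 4, 7))
b = np.ones((3, 4, 1))

# b's trailing dimension of 1 stretches to 7 to match a
result = a + b
print(result.shape)  # (3, 4, 7)
```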

**Example 2:**

a = 3 x 4 x 7

b = 7

Now, the numbers of dimensions of a and b are unequal. In such cases, the array with fewer dimensions is padded with 1s on its left, so b is treated as 1 x 1 x 7.

Here, b's first and second dimensions are 1, so they are broadcast to match a's 3 and 4.

Hence, result = 3 x 4 x 7.
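Note that with a of shape (3, 4, 7), a one-dimensional b can only broadcast if its length matches a's trailing dimension of 7, because left-padding turns b's shape into (1, 1, 7). A quick check:

```python
import numpy as np

a = np.ones((3, 4, 7))
b = np.ones(7)  # shape (7,), left-padded to (1, 1, 7) during broadcasting

result = a + b
print(result.shape)  # (3, 4, 7)
```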


**Example 3:**

a = 3 x 4 x 1 x 5

b = 3 x 1 x 7 x 1

Here, b's second and last dimensions will be broadcast to match a's 4 and 5. Also, the third dimension of a will be broadcast to match b's 7.

Hence, result = 3 x 4 x 7 x 5

Now let’s see when the condition fails:

**Example 4:**

a = 3 x 4 x 7 x 5

b = 3 x 3 x 7 x 4

Here, the second and fourth dimensions of b do not match those of a, and neither of them is 1. In this case, Numpy will throw a ValueError:

```
ValueError: operands could not be broadcast together with shapes (3,4,7,5) (3,3,7,4)
```

**Example 5:**

a = 3 x 4 x 1 x 5

b = 3 x 2 x 3

Result: ValueError

Here, b is first padded on the left to 1 x 3 x 2 x 3. Comparing from the right, the last dimensions (5 and 3) and the second dimensions (4 and 3) do not match, and none of them is 1, so the operation fails.
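Both failing examples can be reproduced in code, and np.broadcast_shapes (available in NumPy 1.20 and later) lets you check shape compatibility without allocating any arrays at all:

```python
import numpy as np

# Example 4: (3, 4, 7, 5) vs (3, 3, 7, 4) -- incompatible pairs exist
try:
    np.broadcast_shapes((3, 4, 7, 5), (3, 3, 7, 4))
except ValueError as e:
    print("Example 4:", e)

# Example 5: (3, 4, 1, 5) vs (3, 2, 3) -- fails after left-padding
try:
    np.broadcast_shapes((3, 4, 1, 5), (3, 2, 3))
except ValueError as e:
    print("Example 5:", e)

# The compatible shapes from Example 3 succeed
print(np.broadcast_shapes((3, 4, 1, 5), (3, 1, 7, 1)))  # (3, 4, 7, 5)
```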

**Before You Go**

Vectorization and Broadcasting are both ways in which Numpy optimizes its processing and makes it more efficient. Keep these concepts in mind especially when dealing with matrices and n-dimensional arrays, which are very common in image data and Neural Networks.
