Basic but important Functions Of PyTorch | PyTorch Series | Snehit Vaddi

Snehit Vaddi · Published in Analytics Vidhya · 3 min read · May 29, 2020


[Image: Deep Learning Model Training Loop – mc.ai]

Introduction

PyTorch is a Python library that facilitates building Deep Learning models. It was introduced by Facebook. PyTorch is used for a number of applications like Natural Language Processing, Computer Vision, self-driving cars, etc.

The basic building block of PyTorch is the Tensor. In simple words, a Tensor is an array of numbers. In the NumPy library, these arrays are called ndarrays. In PyTorch, a 1d-tensor is a vector, a 2d-tensor is a matrix, a 3d-tensor can be pictured as a cube of numbers, and higher-dimensional tensors generalize this further.

Basic Functions of PyTorch:

  1. Creation and Slicing of Tensors -

Import the PyTorch module with “import torch”. To create an n-dimensional tensor, we use the tensor function of the torch module, i.e., torch.tensor([elements]).

Note: In PyTorch, a tensor must be rectangular, e.g., 3 x 3 — every row must have the same length. If the rows have different lengths, torch.tensor raises a ValueError about the sequence shape.

  • A Tensor can be initialized only with numbers. It cannot contain any strings or characters.
  • All the elements in a tensor are homogeneous. If the elements have different numeric types, PyTorch upcasts them to a common type.
  • The torch.rand((m,n)) function creates an m x n tensor filled with random numbers drawn uniformly from the interval [0, 1). (For samples from the standard normal distribution, use torch.randn instead.)
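The points above can be sketched in a few lines (the element values here are illustrative):

```python
import torch

# Create a 2-D tensor; rows must all have the same length,
# otherwise torch.tensor raises a ValueError.
t = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])

print(t.shape)    # torch.Size([2, 3])
print(t[0])       # first row: tensor([1, 2, 3])
print(t[:, 1])    # second column: tensor([2, 5])
print(t[1, 2])    # single element: tensor(6)

# Random tensor: values drawn uniformly from [0, 1)
r = torch.rand((3, 3))
```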

2. Tensor — NumPy Bridge (NumPy arrays to tensors and vice versa) -

  • The torch.from_numpy() function is used to convert an n-dimensional NumPy array into an n-dimensional tensor.
  • Similarly, tensor.numpy() is used to convert an n-dimensional tensor into a NumPy array.
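A minimal sketch of the bridge in both directions. One detail worth knowing: both conversions share the same underlying memory, so modifying one object modifies the other.

```python
import numpy as np
import torch

a = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(a)   # NumPy array -> tensor
b = t.numpy()             # tensor -> NumPy array

# The array and the tensor share memory:
a[0] = 99.0
print(t)   # tensor([99., 2., 3.], dtype=torch.float64)
```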

3. Mathematical Operations On Tensors -

  • The PyTorch module provides a number of functions to perform different mathematical / matrix operations.
  • In PyTorch, matrix multiplication can be computed in three different ways: torch.matmul(), torch.mm(), and the @ operator.
  • Matrix multiplication fails if the number of columns of A does not equal the number of rows of B.
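The three equivalent ways to multiply matrices can be sketched like this (the matrices are illustrative):

```python
import torch

A = torch.tensor([[1., 2.], [3., 4.]])   # 2 x 2
B = torch.tensor([[5., 6.], [7., 8.]])   # 2 x 2

# Three ways to compute the same matrix product:
c1 = torch.matmul(A, B)
c2 = torch.mm(A, B)
c3 = A @ B

# Shapes must be compatible: multiplying a (2 x 3) by a (2 x 2)
# would raise a RuntimeError, since 3 != 2.
```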

4. Gradient with PyTorch

In PyTorch, a tensor initialized with ‘requires_grad=True’ tells autograd to track operations on it so that gradients can be computed.

In a typical example, we initialize ‘x’ with requires_grad=True and define an equation ‘y’ in terms of x. In the next step, we differentiate the function by calling its backward() method. The last step is to read the partial derivative of ‘y’ with respect to ‘x’ from x.grad.
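The steps above can be sketched as follows (the function y = x² + 2x is illustrative):

```python
import torch

# Step 1: create x with gradient tracking enabled
x = torch.tensor(3.0, requires_grad=True)

# Step 2: define y in terms of x
y = x ** 2 + 2 * x

# Step 3: differentiate y with respect to x
y.backward()

# Step 4: read dy/dx = 2x + 2, which is 8 at x = 3
print(x.grad)   # tensor(8.)
```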

5. Optimizer

PyTorch provides a number of built-in optimizers, like SGD, Adam, AdamW, Adamax, Rprop, etc.

The optim module in PyTorch provides implementations of most of the optimizers used when building a neural network.
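A minimal sketch of one SGD step on a single parameter, showing the usual zero_grad / backward / step pattern (the parameter, loss, and learning rate are illustrative):

```python
import torch

w = torch.tensor(5.0, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w - 1.0) ** 2   # simple quadratic loss
opt.zero_grad()         # clear any old gradients
loss.backward()         # d(loss)/dw = 2*(w - 1) = 8
opt.step()              # w <- w - lr * grad = 5 - 0.8 = 4.2
print(w.item())         # approximately 4.2
```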

Importing and Uploading into jovian.ml-

Jovian.ml is a great platform to share, collaborate on, and run notebooks in the cloud. To use jovian.ml, pip install the jovian package, import jovian, and commit your notebook.

Conclusion:

In this post, we have covered the creation of tensors, tensor slicing, the Tensor–NumPy bridge, mathematical operations on tensors, gradients, and the optim module. In the coming weeks, I will be diving deep into advanced topics like Logistic Regression, Neural Networks, etc.

Reference:

Jovian Notebook: https://jovian.ml/v-snehith999/pytorch-week1-blog

Author:🤠

- Snehit Vaddi

I am a Machine Learning enthusiast. I teach Machines how to See, Listen, and Learn.

LinkedIn: https://www.linkedin.com/in/snehit-vaddi-73a814158/

Github: https://github.com/snehitvaddi
