The state of NLP completely changed in 2018, when researchers from Google open-sourced BERT (Bidirectional Encoder Representations from Transformers).
In this post, we will understand how PolyLoss works, following the paper PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions, and implement it on an image classification task. We shall explore the following
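As a preview, here is a minimal sketch of the Poly-1 variant from the paper, which adds a single polynomial correction term ε₁(1 − p_t) on top of cross-entropy, where p_t is the predicted probability of the true class. The epsilon value below is just an illustrative default, not a recommendation from the paper:

```python
import torch
import torch.nn.functional as F

def poly1_cross_entropy(logits, labels, epsilon=1.0):
    """Poly-1 loss: cross-entropy plus epsilon * (1 - p_t)."""
    ce = F.cross_entropy(logits, labels, reduction="none")
    # Probability assigned to the true class for each sample
    pt = torch.gather(F.softmax(logits, dim=-1), 1, labels.unsqueeze(1)).squeeze(1)
    return (ce + epsilon * (1.0 - pt)).mean()

logits = torch.randn(4, 10)             # dummy batch of 4 samples, 10 classes
labels = torch.randint(0, 10, (4,))
print(poly1_cross_entropy(logits, labels))
```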
Natural Language Processing is one of the fastest-growing fields in Deep Learning. NLP has changed completely since the inception of Transformers. Later on, variants of the Transformer architecture that use only the encoder part (BERT) cracked the transfer learning game in NLP. Now you can download a pre-trained model from the internet, one that has already been trained on huge amounts of data and has learned the structure of language, and use it for your downstream tasks with a bit of fine-tuning.
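As a taste of that workflow, here is a small sketch using the Hugging Face transformers library; the checkpoint name and the two-label setup are placeholder choices for illustration, not something prescribed by the post:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "bert-base-uncased" and num_labels=2 are placeholders for a binary task
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("This movie was great!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]); fine-tune the head from here
```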
In this post, we shall look at the task of metric learning and implement the paper Classification is a strong baseline for deep metric learning on the Inshop dataset.
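The core idea of that paper is that a well-tuned classification loss over L2-normalized embeddings is competitive with dedicated metric-learning losses. A minimal sketch of such a normalized-softmax head follows; the class names, placeholder sizes, and temperature value are my own choices, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormSoftmaxLoss(nn.Module):
    """Cosine-similarity classifier: L2-normalized embeddings and class
    weights, scaled by a temperature, trained with cross-entropy."""
    def __init__(self, embedding_dim, num_classes, temperature=0.05):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.temperature = temperature

    def forward(self, embeddings, labels):
        emb = F.normalize(embeddings, dim=1)
        w = F.normalize(self.weight, dim=1)
        logits = emb @ w.t() / self.temperature  # scaled cosine similarities
        return F.cross_entropy(logits, labels)

# Placeholder sizes: 128-d embeddings, 100 identity classes
loss_fn = NormSoftmaxLoss(embedding_dim=128, num_classes=100)
loss = loss_fn(torch.randn(8, 128), torch.randint(0, 100, (8,)))
```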
Have you ever wondered 🤔 how PyTorch
I was always curious to understand how the internals work too. Recently, I was reading chapter 19 of Fast.ai's Deep Learning for Coders book, where we learn how to build minimal versions of FastAI modules like
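To give a flavour of that exercise, here is a minimal sketch, my own simplification rather than the book's exact code, of rebuilding the core of nn.Module: tracking parameters on attribute assignment and dispatching calls to forward:

```python
import torch

class Module:
    """A minimal stand-in for nn.Module: records parameters as they are
    assigned and makes the object callable via forward."""
    def __init__(self):
        self._params = {}

    def __setattr__(self, name, value):
        if isinstance(value, torch.nn.Parameter):
            self._params[name] = value
        super().__setattr__(name, value)

    def parameters(self):
        return self._params.values()

    def __call__(self, *args):
        return self.forward(*args)

class Linear(Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(n_in, n_out) * 0.1)
        self.b = torch.nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        return x @ self.w + self.b

lin = Linear(10, 2)
out = lin(torch.randn(4, 10))
print([p.shape for p in lin.parameters()])  # [(10, 2), (2,)]
```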
One of the least-taught skills in machine learning is how to manage and track machine learning experiments effectively. Once you get out of the shell of beginner-level projects and into serious projects or research, experiment tracking and management become one of the most crucial parts of your project.
While designing deep learning modules like a classification head, you normally have to calculate the number of input features by hand. PyTorch's lazy modules come to the rescue by inferring it automatically.
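A small sketch of this in action with nn.LazyLinear, whose in_features is inferred from the first batch it sees; the shapes below are arbitrary:

```python
import torch
import torch.nn as nn

# A classification head where the flattened feature size is unknown upfront
head = nn.Sequential(
    nn.Flatten(),
    nn.LazyLinear(512),   # in_features inferred on the first forward pass
    nn.ReLU(),
    nn.Linear(512, 10),
)

x = torch.randn(8, 3, 32, 32)  # dummy image batch
out = head(x)                  # first call materializes the lazy weights
print(out.shape)               # torch.Size([8, 10])
```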