For most of us who were once newbies in Deep Learning, trying tf.keras.layers.Conv2D for MNIST classification was fun. Convolutions are the building blocks of most algorithms in computer vision, except for some newer variants like Vision Transformers and Mixers, which claim to solve image-related problems without the use of convolutions. At the core of DL lies Gradient Descent ( and its variants ), which optimizes the parameters of a NN and, in turn, reduces the loss we incur while training the model.
Convolutions, or convolutional layers, also possess their own trainable parameters, commonly known as filters. No, not those filters…
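To make the "filters are parameters" point concrete, here is a minimal numpy sketch of a single-filter 2-D convolution ( function and variable names are mine; a real Conv2D layer learns many such filters at once ):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation of a 2-D image with one filter ( kernel )."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is the filter applied to one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
# The filter's entries ARE the layer's learnable parameters.
edge_filter = np.array([[1.0, -1.0],
                        [1.0, -1.0]])
print(conv2d(image, edge_filter))  # every entry is -2.0 on this ramp image
```

During training, gradient descent updates the entries of each filter exactly as it updates the weights of a Dense layer.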
Linear Independence is an important concept in linear algebra and can be found in any high-school “Coordinate Geometry” textbook. In order to grasp other concepts like *span, dimension and basis vectors, a concrete understanding of linear independence is essential.
Instead of taking the textbook-ish route to introduce linear independence, we’ll explore an intuitive example first. Then we’ll formalize it ourselves ( formalizing being the process of making things a bit textbook-ish ) and end up with a precise definition.
*span, dimension and basis vectors: these concepts include the idea of linear independence in their definitions
In order to understand linear…
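As a quick numerical sketch of the concept ( the vectors here are my own examples ), numpy's matrix rank exposes linear dependence directly:

```python
import numpy as np

# Vectors are linearly independent iff none of them is a linear combination
# of the others -- equivalently, the matrix stacking them has full rank.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2  # deliberately dependent: v3 = 1*v1 + 2*v2

print(np.linalg.matrix_rank(np.stack([v1, v2])))      # 2 -> independent
print(np.linalg.matrix_rank(np.stack([v1, v2, v3])))  # still 2 < 3 -> dependent
```

Adding v3 did not increase the rank, because it lives in the span of v1 and v2.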
The Moore-Penrose Pseudoinverse, or Moore-Penrose Generalized Inverse, or simply the Pseudoinverse, is a special matrix used to solve inconsistent linear systems of equations. It provides a least-squares solution to the problem.
Considering matrices as compact representations of linear transformations, an inverse matrix, in a similar manner, represents the linear transformation which “undoes” the effects of the former. But not every matrix is invertible, meaning it is not possible to find, for every matrix, another matrix which “undoes” its transformation. …
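As a small numerical sketch ( the system below is my own example ), numpy's `pinv` recovers exactly the least-squares solution to a system with no exact solution:

```python
import numpy as np

# Three equations, two unknowns: an inconsistent ( overdetermined ) system.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# A is not square, so it has no inverse -- but its pseudoinverse A+ yields
# the least-squares solution x = A+ b, minimizing ||Ax - b||.
x = np.linalg.pinv(A) @ b

# Sanity check against numpy's dedicated least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_lstsq))  # True
```

No x satisfies all three equations at once, yet the pseudoinverse still hands back the single best compromise.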
👉🏽 Machine Learning ( and AI as a whole ) is a math-heavy field. It relies heavily on concepts from Linear Algebra, Probability, Calculus and Statistics, which can be difficult for some developers at times. Learning these concepts and developing intuition for them helps us understand the workings of many ML algorithms and techniques.
Earlier this May, a group of researchers from Google released the paper “MLP-Mixer: An all-MLP Architecture for Vision”, introducing their MLP-Mixer ( Mixer, for short ) model for solving computer vision problems. The research suggests that MLP-Mixer attains competitive scores on image classification benchmarks such as ImageNet.
One thing that would catch every ML developer’s eye is that they haven’t used convolutions in their architecture. Convolutions have long reigned over computer vision, as they are efficient at extracting spatial information from images and videos. Recently, Transformers, which were originally used for NLP problems, have shown remarkable results in computer…
* Have you heard about Kotlin Coroutines from your colleagues, or from other developers 🤓?
* Were you searching for an alternative to replace the deprecated AsyncTask class in Android?
* Or you have a 🏋🏼 heavy task on your main thread and want to implement it in a Kotlin-ish way?
If yes, we should quickly get started with Kotlin Coroutines, with which we can manage async tasks efficiently on Android, with beautiful and readable code 😘.
Kotlin Coroutines, of course, require Kotlin in the first place. See the Add Kotlin to an existing app docs…
We’ve all performed age, gender or emotion detection in Python with TensorFlow Keras. For most of us, a simple Keras model with Conv2D layers or a VGG-16 backbone might have given satisfactory results.
In this story, we implement two Keras models for age and gender estimation, whose sole purpose is to run on Android. As we’ll deploy our models on an Android device, we’ll pay attention to making them faster, while still expecting satisfactory results on our dataset.
Starting from the UTKFace dataset, we build our model and train it, finally exporting it to the TFLite format. …
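As a hedged sketch of that final export step ( the tiny two-head model below is a stand-in of my own, not the story's actual architecture ), a Keras model is converted to TFLite like this:

```python
import tensorflow as tf

# Stand-in model: one head regressing age, one classifying gender.
# NOT the story's actual architecture -- just a shape-compatible toy.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
age = tf.keras.layers.Dense(1, name="age")(x)
gender = tf.keras.layers.Dense(1, activation="sigmoid", name="gender")(x)
model = tf.keras.Model(inputs, [age, gender])

# Convert to a TFLite flatbuffer for on-device inference on Android.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization: smaller, faster
tflite_model = converter.convert()  # serialized bytes, ready to ship in the APK's assets
```

The `Optimize.DEFAULT` flag is one of the knobs that trades a little accuracy for the speed and size we care about on mobile.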
Whether you are a beginner or a seasoned ML developer, you’ve probably heard about (and most likely implemented) the Naive Bayes classifier. These classifiers are very helpful when you have smaller datasets and limited computational power.
You might be aware of the world-famous scikit-learn implementation commonly used in Python:
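The snippet referenced above was cut off; as an illustrative stand-in, here is a minimal numpy re-implementation of Gaussian Naive Bayes that mirrors scikit-learn's `fit`/`predict` interface ( class, variable names and toy data are mine ):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes, mirroring sklearn's fit/predict API."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Per-class feature means, variances and priors -- all the model stores.
        self.theta_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log P(c|x) ∝ log P(c) + sum_i log N(x_i; mu_ci, var_ci)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var_)[None]
                          + (X[:, None, :] - self.theta_[None]) ** 2
                          / self.var_[None]).sum(axis=-1)
        return self.classes_[np.argmax(np.log(self.priors_) + log_lik, axis=1)]

X = np.array([[1.0], [1.2], [0.8], [5.0], [5.2], [4.8]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([[1.1], [5.1]])))  # [0 1]
```

The "naive" part is the per-feature independence assumption baked into the sum over features, which is exactly why so little data and compute suffice.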
Deep Learning can be super fascinating if you’re already in love with other ML algorithms. DL is all about huge neural networks with numerous parameters and different layers, each performing a specific task.
This checklist will provide you with a smooth start ( and also a safer start ) with Deep Learning in TensorFlow.
Warning: This story uses some dreaded terminology. Readers who have jumped directly into DL, thereby skipping basic ML algos, are advised to proceed with caution.
Without wasting time, let’s get started.
Mathematics of backpropagation ( bonus, refresher ),
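As a bare-bones refresher ( the toy problem and numbers are mine ), one neuron trained with hand-derived gradients shows the chain rule that backpropagation applies layer by layer in deep networks:

```python
# One neuron, squared-error loss L = (w*x + b - y)^2, gradients derived
# by hand with the chain rule.
x, y = 2.0, 10.0        # single training example
w, b, lr = 1.0, 0.0, 0.05

for _ in range(100):
    y_hat = w * x + b
    dL_dyhat = 2 * (y_hat - y)   # outer derivative of the loss
    w -= lr * dL_dyhat * x       # chain rule: dL/dw = dL/dy_hat * dy_hat/dw
    b -= lr * dL_dyhat           # dL/db = dL/dy_hat * 1

print(round(w * x + b, 3))  # 10.0 -- the neuron has fit the example
```

A deep network does nothing more exotic: it chains these local derivatives backwards through every layer.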
Random Forests are an ensemble learning method used to perform regression and classification. Most ML developers have used Random Forests in their projects, as they provide better accuracy than individual Decision Trees.
A Decision Tree is a decision support tool that uses a tree-like model of decisions and their possible consequences including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements.
Random Forest is a fancy name for multiple Decision Trees used collectively to predict a label (in the case of classification).
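To make the "many trees voting" idea concrete, here is a toy sketch ( all names and data are mine ) where decision stumps trained on different subsets vote by majority; a real Random Forest uses full decision trees on bootstrap samples with random feature subsets:

```python
import numpy as np

def train_stump(X, y):
    """Pick the threshold on feature 0 that best separates two classes."""
    best, best_acc = (X[0, 0], 0), -1.0
    for t in X[:, 0]:
        for flip in (0, 1):  # also try the inverted decision
            acc = np.mean(((X[:, 0] > t).astype(int) ^ flip) == y)
            if acc > best_acc:
                best_acc, best = acc, (t, flip)
    return best

def stump_predict(stump, X):
    t, flip = stump
    return (X[:, 0] > t).astype(int) ^ flip

X = np.array([[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]])
y = np.array([0, 0, 0, 1, 1, 1])

# Each "tree" sees a different subset ( leave-one-out here, standing in for
# bootstrap sampling ); the forest predicts by majority vote.
forest = [train_stump(np.delete(X, i, axis=0), np.delete(y, i))
          for i in range(len(X))]
votes = np.stack([stump_predict(s, X) for s in forest])
majority = (votes.mean(axis=0) > 0.5).astype(int)
print(majority)  # [0 0 0 1 1 1]
```

Even if a few individual stumps misclassify a point, the vote averages their mistakes away, which is precisely why the ensemble beats a single tree.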
You might be aware of…