Technical Fridays
Personal
Two Years of Technical Fridays
A Year of Fridays
Smart India Hackathon 2018 grand finale
Technical Fridays
Site launch
Data Science
False positive paradox
Loss functions
Optimizers
Methods of Hyperparameter optimization
The Bayesian Thinking - III
The Bayesian Thinking - II
The Bayesian Thinking - I
Dropout: Prevent overfitting
How deep should neural nets be?
Don't use sigmoid: Neural Nets
Scaling vs Normalization
Ensembling is the key
Computational graphs: Backpropagation
Gradient descent: The core of neural networks
Gradient boosted trees: Better than random forest?
Linear algebra: The essence behind deep learning
Data Mining: Knowledge discovery in databases
Anscombe's Quartet
The Curse of Dimensionality
Dealing with categorical data
Regularization
Evaluation metrics for classification and False positives
Simplicity doesn't imply accuracy
p-Value
Out-liars
Correlation is not causation
Overfitting and Underfitting
Data leakage: A big problem
Simpson's paradox
Email spam filtering: Text analysis in R
Friendship paradox: Facebook
Moneyball: Why no prediction can be made for baseball champion
Moneyball: How linear regression changed baseball
Machine Learning
Loss vs Accuracy
Loss functions
Methods of Hyperparameter optimization
A visual introduction to eigenvectors and eigenvalues
Scaling vs Normalization
Ensembling is the key
Gradient boosted trees: Better than random forest?
Data Mining: Knowledge discovery in databases
The Curse of Dimensionality
Regularization
Simplicity doesn't imply accuracy
Overfitting and Underfitting
Email spam filtering: Text analysis in R
Moneyball: Why no prediction can be made for baseball champion
Moneyball: How linear regression changed baseball
R
Simpson's paradox
Email spam filtering: Text analysis in R
Moneyball: Why no prediction can be made for baseball champion
Moneyball: How linear regression changed baseball
Python
Transfer learning: How to build accurate models
Dynamic Programming
Anscombe's Quartet
Dealing with categorical data
Regularization
Some Prime Thoughts
Friendship paradox: Facebook
Turtle in Python: A Traffic light
Algorithms
Dynamic Programming
Shortest Path: Dijkstra's Algorithm
Divide and Conquer
Greedy Algorithms
Structure of the web
Cryptography
Some Prime Thoughts
How secure are we?
Mathematics
False positive paradox
The Bayesian Thinking - III
The Bayesian Thinking - II
The Bayesian Thinking - I
A visual introduction to eigenvectors and eigenvalues
Linear algebra: The essence behind deep learning
Some Prime Thoughts
The Rule of 72: Mathematics in everyday life
Mathematics and Beauty
Visualization
Anscombe's Quartet
Deep Learning
Introduction to Panoptic Segmentation: A Tutorial
Evaluation metrics for object detection and segmentation: mAP
Quick intro to Instance segmentation: Mask R-CNN
Quick intro to semantic segmentation: FCN, U-Net and DeepLab
Converting FC layers to CONV layers
Data augmentation
Generative Adversarial Networks variants: DCGAN, Pix2pix, CycleGAN
Layer-specific learning rates
Quick intro to Object detection: R-CNN, YOLO, and SSD
Attention
Backpropagation Through Time
Autoencoder: Downsampling and Upsampling
Weight initialization in neural nets
Image captioning using encoder-decoder
The gradient problem in RNN
Why Batch Normalization?
Filters in Convolutional Neural Networks
Loss vs Accuracy
Generative models and Generative Adversarial Networks
Skip connections and Residual blocks
Loss functions
Optimizers
Transfer learning: How to build accurate models
Methods of Hyperparameter optimization
word2vec: The foundation of NLP
Dropout: Prevent overfitting
How deep should neural nets be?
Don't use sigmoid: Neural Nets
The magic behind ConvNets
Computational graphs: Backpropagation
Gradient descent: The core of neural networks
Linear algebra: The essence behind deep learning
Computer Vision
Introduction to Panoptic Segmentation: A Tutorial
Evaluation metrics for object detection and segmentation: mAP
Quick intro to Instance segmentation: Mask R-CNN
Quick intro to semantic segmentation: FCN, U-Net and DeepLab
Converting FC layers to CONV layers
Data augmentation
Generative Adversarial Networks variants: DCGAN, Pix2pix, CycleGAN
Quick intro to Object detection: R-CNN, YOLO, and SSD
Attention
Image captioning using encoder-decoder
Why Batch Normalization?
Filters in Convolutional Neural Networks
Generative models and Generative Adversarial Networks
Skip connections and Residual blocks
Transfer learning: How to build accurate models
The magic behind ConvNets
Natural Language Processing
Attention
Backpropagation Through Time
Image captioning using encoder-decoder
The gradient problem in RNN
word2vec: The foundation of NLP
Speech Recognition
Introduction to Automatic Speech Recognition