K Fold Cross Validation:
UB Labs
Friday, June 9, 2017
Monday, May 1, 2017
At Bala's place!
Two days have just slipped by. It was quick and good. Some learnings, some insights.
And the bigger event is that 3 months have passed since I left my job.
So, how was it?
It was good. It was mostly exploration. And a big relaxation mode. I need to take it to the next level now.
More travel needs to be done. And I will be staying at friends' places more often.
Tomorrow I will meet my previous employer and visit the office. I will also meet a friend I met randomly on a BlaBla trip, and will end the day by meeting Ratul, my old-time friend.
Will spend more time with you from tomorrow on. Will sleep for now. Goodnight!
Monday, April 17, 2017
Wednesday, April 12, 2017
Convolution
How do we apply convolution to images?
An image is made up of 3 matrices: the pixel values for red, green, and blue.
If we convolve that, we get a first-level feature map. The matrix we convolve with is called the convolution kernel.
What do we convolve it with?
We do it with a
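The note above trails off, but the basic mechanics can be sketched. A minimal sketch of convolving a single-channel image with a hypothetical 3x3 vertical-edge kernel (as in deep-learning libraries, the kernel is slid without flipping, i.e. cross-correlation):

```python
import numpy as np

def convolve2d(image, kernel):
    # "Valid" convolution: slide the kernel over every position
    # where it fits entirely inside the image
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 4x4 "image" and a vertical-edge kernel (hypothetical example values)
image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])
feature_map = convolve2d(image, edge_kernel)
```

For a colour image you would run one such pass per channel (R, G, B) and sum the three results into one feature map.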
Friday, April 7, 2017
[Udacity] Perceptron
https://classroom.udacity.com/nanodegrees/nd101/parts/8b03c7e1-308c-477a-bd21-28872f5fed78/modules/2afd43e6-f4ce-4849-bde6-49d7164da71b/lessons/dc37fa92-75fd-4d41-b23e-9659dde80866/concepts/5ab911d0-fe20-4113-852c-8a07fe9bdacc
Adding weights and a bias.
Summing the inputs.
If a weight is zero: that parameter has no effect.
If exactly opposite: no bias.
If opposite at some level:
If the bias is zero: the weights can take any number; you can multiply or divide them by any number.
The building blocks of neural networks are again the NOT, AND and OR gates.
This is best said in the following comment from Udacity, where we created an XOR gate from NOT, AND and OR gates.
"You've seen that a perceptron can solve linearly separable problems. Solving more complex problems, you use more perceptrons. You saw this by calculating AND, OR, NOT, and XOR operations using perceptrons. These operations can be used to create any computer program. With enough data and time, a neural network can solve any problem that a computer can calculate. However, you don't build a Twitter using a neural network. A neural network is like any tool, you have to know when to use it.
The power of a neural network isn't building it by hand, like we were doing. It's the ability to learn from examples. In the next few sections, you'll learn how a neural network sets its own weights and biases."
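The hand-built gates described above can be sketched as perceptrons. The weights and biases below are hypothetical choices (not necessarily the ones from the Udacity lesson), but any values that put the decision boundary in the right place would do:

```python
import numpy as np

def perceptron(inputs, weights, bias):
    # Fire (output 1) if the weighted sum plus bias is non-negative
    return int(np.dot(inputs, weights) + bias >= 0)

# Hypothetical weights/biases realising the basic gates
AND = lambda a, b: perceptron([a, b], [1, 1], -1.5)
OR  = lambda a, b: perceptron([a, b], [1, 1], -0.5)
NOT = lambda a:    perceptron([a],    [-1],    0.5)

def XOR(a, b):
    # XOR is not linearly separable, so it needs a composition:
    # XOR = (a OR b) AND NOT(a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))
```

A single perceptron cannot compute XOR, which is exactly why stacking them into layers matters.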
Tuesday, April 4, 2017
Maths for deep learning!
- Linear Algebra.
- Calculus.
- Statistics.
If you just want to use existing models, you don't need much maths. But if you want to build your own models, you need some understanding of the terminology.
At the level of data cleaning, we need some maths.
Like min-max scaling, which brings all the data into the range 0 to 1.
Formula for it:
(X - Xmin) / (Xmax - Xmin)
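The formula above is a one-liner in NumPy; a minimal sketch:

```python
import numpy as np

def min_max_scale(x):
    # (X - Xmin) / (Xmax - Xmin) maps the data into [0, 1]
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

scaled = min_max_scale([10, 20, 30, 40])
```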
When we have to give this data to TensorFlow, we will mainly be dealing with the following:
1) Scalar: a constant (0-D)
2) Vector: 1-D
3) Matrix: 2-D
4) Tensor: N-D
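The four shapes above can be illustrated with NumPy arrays (TensorFlow tensors follow the same notion of dimensionality):

```python
import numpy as np

scalar = np.array(5)                   # 0-D: a single constant
vector = np.array([1, 2, 3])           # 1-D: a list of numbers
matrix = np.array([[1, 2], [3, 4]])    # 2-D: rows and columns
tensor = np.zeros((2, 3, 4))           # N-D (here 3-D), e.g. an RGB image stack
```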
So, overall we use maths in the following cases:
1) Normalising.
2) Learning hyperparameters.
3) Initialising weights.
4) Forward propagation.
5) Calculating the error.
6) Backpropagating to correct the cost.
Hyperparameters:
1) Batch size.
2) Initial Learning rate.
3) Learning rate schedule
4) Rotations.
5) No. of iterations.
6) Weight decay
7) Random mirroring.
8) Transformations.
These parameters can be chosen based on domain knowledge, or we can use a search approach where we just define a range for each parameter and sample candidates from those ranges.
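A minimal sketch of that search approach as random search, assuming hypothetical ranges for a few of the hyperparameters listed above (real ranges would come from domain knowledge):

```python
import random

random.seed(0)  # reproducible sampling for this sketch

# Hypothetical search space: discrete choices and continuous ranges
search_space = {
    "batch_size":    [32, 64, 128, 256],
    "learning_rate": (1e-4, 1e-1),
    "weight_decay":  (0.0, 1e-2),
}

def sample_config():
    # Draw one candidate configuration from the ranges above
    return {
        "batch_size":    random.choice(search_space["batch_size"]),
        "learning_rate": random.uniform(*search_space["learning_rate"]),
        "weight_decay":  random.uniform(*search_space["weight_decay"]),
    }

# Train and evaluate each candidate, then keep the best one
candidates = [sample_config() for _ in range(10)]
```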
Q: If you can do matrix operations using plain Python, will you still use NumPy?
A: Yes. The NumPy library is written in C, which makes it faster than native Python.
Python lists can hold mixed data types, but a NumPy array holds only one specific data type.
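A small illustration of the difference: a Python list happily mixes types, while NumPy promotes everything in an array to a single dtype:

```python
import numpy as np

mixed_list = [1, "two", 3.0]     # Python list: mixed types are fine
arr = np.array([1, 2, 3.0])      # NumPy promotes all elements to one dtype
```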
Monday, March 27, 2017
TensorFlow and Deep Learning without a PhD, Part 1 (Google Cloud Next '17) -
https://www.youtube.com/watch?v=u4alGiomYP4
It looks like neural networks always have a non-linear activation function.
This video explains digit recognition, which is a classification problem. Maybe multinomial logistic regression!
Softmax is one algorithm that works well on classification problems.
NumPy is a numerical library for Python.
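A minimal sketch of softmax in NumPy, turning raw scores into class probabilities (subtracting the max first is a standard numerical-stability trick):

```python
import numpy as np

def softmax(logits):
    # Exponentiate (shifted for stability), then normalise to sum to 1
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical scores for three classes; the largest score wins
probs = softmax(np.array([2.0, 1.0, 0.1]))
```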