Wednesday, April 12, 2017
Convolution
How do we apply convolution to images?
An image is made up of 3 matrices: the pixel values for Red, Green and Blue.
What do we convolve it with?
We do it with a small matrix of weights called the convolution kernel, which slides over the image.
If we convolve the image with the kernel, we get a first-level feature map.
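The sliding-kernel idea can be sketched in NumPy. This is a minimal single-channel illustration (for an RGB image you would convolve each of the 3 matrices and sum the results); like most deep-learning libraries, it slides the kernel without flipping it:

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' convolution of a 2-D image with a kernel (minimal sketch)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Multiply the kernel with the patch under it and sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy grayscale image with a vertical edge, and an edge-detecting kernel.
image = np.array([[0, 0, 10, 10],
                  [0, 0, 10, 10],
                  [0, 0, 10, 10],
                  [0, 0, 10, 10]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)

feature_map = convolve2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

The feature map lights up (with value -20 here) exactly where the edge sits, which is what "first-level feature" means: the kernel detects one simple pattern everywhere in the image.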
Friday, April 7, 2017
[Udacity]Perceptron
https://classroom.udacity.com/nanodegrees/nd101/parts/8b03c7e1-308c-477a-bd21-28872f5fed78/modules/2afd43e6-f4ce-4849-bde6-49d7164da71b/lessons/dc37fa92-75fd-4d41-b23e-9659dde80866/concepts/5ab911d0-fe20-4113-852c-8a07fe9bdacc
Adding weights and a bias.
Summing the weighted inputs.
If a weight is zero: that parameter has no effect on the output.
If the weighted inputs exactly oppose the bias: the bias is cancelled out.
If they oppose it only partially: the bias is partially cancelled.
If the bias is zero: the weights can be scaled freely; you can multiply or divide them all by the same positive number without changing the decision.
The building blocks of a neural network are, again, NOT, AND and OR gates.
This is best said in the following comment from Udacity, where we created an XOR gate from NOT, AND and OR gates:
"You've seen that a perceptron can solve linearly separable problems. Solving more complex problems, you use more perceptrons. You saw this by calculating AND, OR, NOT, and XOR operations using perceptrons. These operations can be used to create any computer program. With enough data and time, a neural network can solve any problem that a computer can calculate. However, you don't build a Twitter using a neural network. A neural network is like any tool, you have to know when to use it.
The power of a neural network isn't building it by hand, like we were doing. It's the ability to learn from examples. In the next few sections, you'll learn how a neural network sets its own weights and biases."
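The hand-built gates from the lesson can be sketched as perceptrons. The weights and biases below are one hand-picked choice (many others work); XOR then has to be composed from the other gates because it isn't linearly separable:

```python
import numpy as np

def perceptron(inputs, weights, bias):
    """Fire (1) if the weighted sum plus bias is non-negative, else 0."""
    return int(np.dot(inputs, weights) + bias >= 0)

# Hand-picked weights/biases for the basic gates (one possible choice).
AND = lambda a, b: perceptron([a, b], [1, 1], -1.5)
OR  = lambda a, b: perceptron([a, b], [1, 1], -0.5)
NOT = lambda a:    perceptron([a],    [-1],    0.5)

# XOR is not linearly separable, so one perceptron can't compute it;
# compose it from the gates above instead.
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```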
Tuesday, April 4, 2017
Maths for deep learning!
- Linear Algebra.
- Calculus.
- Statistics.
If you just want to use existing models, you don't need much maths. But if you want to build your own models, you need some understanding of the terminology.
Even at the level of data cleaning, we need some maths.
For example, min-max scaling, which rescales all the data into the range 0-1.
Formula for it:
(X - Xmin) / (Xmax - Xmin)
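The formula above is a one-liner in NumPy (the sample data here is made up for illustration):

```python
import numpy as np

def min_max_scale(x):
    """Rescale values into [0, 1] using (X - Xmin) / (Xmax - Xmin)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

data = np.array([10, 20, 30, 50])
print(min_max_scale(data))  # [0.   0.25 0.5  1.  ]
```

Note the minimum always maps to 0 and the maximum to 1; everything else lands proportionally in between.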
When we feed this data to TensorFlow, we will mainly be dealing with the following:
1) Scalar: a constant (0-D)
2) Vector: 1-D
3) Matrix: 2-D
4) Tensor: N-D
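The four cases can be seen directly in NumPy; TensorFlow tensors follow the same rank/shape convention:

```python
import numpy as np

scalar = np.array(5.0)                       # rank 0: a constant
vector = np.array([1.0, 2.0, 3.0])           # rank 1: 1-D
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])              # rank 2: 2-D
tensor = np.zeros((2, 3, 4))                 # rank 3 (N-D in general)

for t in (scalar, vector, matrix, tensor):
    print(t.ndim, t.shape)  # rank and shape of each object
```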
So, overall we use maths in the following cases:
1) Normalising.
2) Learning hyperparameters.
3) Initialising weights.
4) Forward propagation.
5) Calculating the error.
6) Backpropagating to correct the error.
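Steps 3-6 above can be sketched for a single linear neuron. The data, target function, and learning rate here are made up for illustration; a real network repeats the same loop with many layers and a nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.5          # made-up target: 2*x1 - x2 + 0.5

w = rng.normal(size=2)                        # 3) initialise weights
b = 0.0
lr = 0.1                                      # learning rate (a hyperparameter)

for _ in range(200):
    pred = X @ w + b                          # 4) forward propagation
    err = pred - y                            # 5) calculate the error
    w -= lr * (X.T @ err) / len(y)            # 6) backpropagate: gradient step
    b -= lr * err.mean()

# w approaches [2, -1] and b approaches 0.5, recovering the target.
print(np.round(w, 2), round(b, 2))
```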
Hyperparameters:
1) Batch size.
2) Initial Learning rate.
3) Learning rate schedule
4) Rotations.
5) No. of iterations.
6) Weight decay
7) Random mirroring.
8) Transformations.
These parameters can be chosen based on domain knowledge, or we can use a search approach where we just define a range for each parameter and try values from it.
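The search approach can be sketched as random search: define a range for each hyperparameter, sample configurations, and keep the best one. The ranges and the scoring function below are hypothetical stand-ins; a real `evaluate` would train the model and return validation accuracy:

```python
import random

# Hypothetical ranges: a list means discrete choices, a tuple a continuous range.
search_space = {
    "batch_size":    [32, 64, 128, 256],
    "learning_rate": (1e-4, 1e-1),
    "weight_decay":  (0.0, 1e-3),
}

def sample(space):
    """Draw one configuration from the search space."""
    cfg = {}
    for name, spec in space.items():
        cfg[name] = random.choice(spec) if isinstance(spec, list) else random.uniform(*spec)
    return cfg

def evaluate(cfg):
    # Stand-in score for illustration; a real run would train and validate.
    return -abs(cfg["learning_rate"] - 0.01)

random.seed(0)
trials = [sample(search_space) for _ in range(20)]
best = max(trials, key=evaluate)
print(best)
```

Grid search works the same way but enumerates every combination instead of sampling, which gets expensive quickly as the number of hyperparameters grows.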
Q: If you can do matrix operations using plain Python, will you still use NumPy?
A: Yes. The NumPy library is written in C, which makes it much faster than native Python.
Python lists can hold mixed data types, but a NumPy array holds elements of a single data type.
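A quick illustration of the single-dtype rule:

```python
import numpy as np

mixed = [1, "two", 3.0]            # a Python list can freely mix data types
arr = np.array([1, 2, 3])          # a NumPy array has one dtype for all elements
print(arr.dtype)                   # an integer dtype, e.g. int64 on most platforms

floats = np.array([1, 2.5, 3])     # mixing ints and floats upcasts everything
print(floats.dtype)                # float64
```

That single fixed dtype is exactly what lets NumPy store the data in one contiguous C buffer and loop over it in compiled code instead of interpreted Python.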