Supervised learning
-classification vs regression (continuous variables)
Unsupervised learning
-no answers are given to the algorithm ⇒ the computer automatically finds structure in the data
-cocktail party problem ⇒ 2 overlapping audio recordings → separate out the two voices ⇒ can be done with a single line of code (usage sketch after this list)
⇒ [W,s,v] = svd((repmat(sum(x.*x,1),size(x,1),1).*x)*x');
⇒ use “Octave” or “MATLAB” ⇒ faster for prototyping learning algorithms
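A minimal usage sketch of the one-liner above, under my own assumptions: mic1 and mic2 are hypothetical equal-length row vectors holding the two mixed recordings (the lecture does not show this setup).

% mic1, mic2: hypothetical row vectors, one mixed recording each
x = [mic1; mic2];   % 2 x n matrix, one recording per row
% the one-liner: SVD of a reweighted correlation matrix of the mixtures
[W, s, v] = svd((repmat(sum(x.*x, 1), size(x, 1), 1) .* x) * x');
% roughly, the columns of W give directions that can be used to unmix the voices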
[Linear Regression]
Model Representation
-supervised learning starts from a training set
-training set → learning algorithm → hypothesis h
* hypothesis: h maps an input x to a predicted output y; for linear regression, hθ(x) = θ0 + θ1·x (sketched below)
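A minimal Octave sketch of the hypothesis; the toy numbers are my own, not from the lecture.

theta = [0; 0];                % parameters theta0 and theta1
x = [1; 2; 3; 4];              % one input feature per training example
X = [ones(length(x), 1), x];   % prepend the intercept term x0 = 1
h = X * theta;                 % h_theta(x) = theta0 + theta1 * x, vectorized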
Cost Function
⇒ squared-error cost: J(θ0, θ1) = (1/2m) · Σᵢ ( hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾ )², summed over the m training examples
⇒ Goal: minimize J(θ0, θ1) ⇒ global minimum
⇒ use contour plots/figures for visualization
⇒ each straight line hθ(x) corresponds to a single point (θ0, θ1) in the cost-function graph (see the sketch below)
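A minimal Octave sketch of the squared-error cost; the helper name computeCost is my own, and X is assumed to carry the intercept column.

function J = computeCost(X, y, theta)
  m = length(y);                          % number of training examples
  errors = X * theta - y;                % h_theta(x^(i)) - y^(i) for every i
  J = (1 / (2 * m)) * sum(errors .^ 2);  % half the mean squared error
end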
Gradient Descent Algorithm
⇒ update rule: θj := θj − α · ∂J(θ0, θ1)/∂θj, updating j = 0 and j = 1 simultaneously
If α is too small ⇒ gradient descent can be slow (α = learning rate / step size)
If α is too big ⇒ gradient descent may fail to converge, or even diverge
α doesn't need to decrease over time → gradient descent automatically takes smaller steps as the derivative term shrinks near the minimum
Batch Gradient Descent: every step uses the entire training set (the whole “batch”); a minimal sketch follows below
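A minimal Octave sketch of batch gradient descent for linear regression; alpha and num_iters are hypothetical inputs, and it pairs with the computeCost setup above.

function theta = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  for iter = 1:num_iters
    % full-batch gradient: uses every training example at each step
    grad = (1 / m) * (X' * (X * theta - y));
    theta = theta - alpha * grad;        % simultaneous update of theta0, theta1
  end
end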
Review:
Although I found it difficult to understand the whole process, particularly the gradient descent equation, I am fairly able to grasp the big picture and the important concepts of machine learning regarding supervised/unsupervised learning, model representation, the cost function, and the gradient descent algorithm.
So far I have been able to follow the contents and solve the quiz on Coursera for each lecture without much difficulty!