So far, I have come a long way through Andrew Ng's Machine Learning Specialization course. Since I'm just starting out with Obsidian, I'm going to make a quick note of what I have learnt so far and continue from there.
Here are the things I've learnt so far.
- What Machine Learning is
- The broad types: Supervised, Unsupervised, and Reinforcement Learning

Currently, I'm on Supervised Learning, which covers the Linear Regression model and the Logistic Regression model.
Linear Regression model:
$$f_{w,b}(x^{(i)})=wx^{(i)}+b$$
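Just to make the model concrete, here is a minimal NumPy sketch of my own (not from the course labs); the numbers for `x_train`, `w`, and `b` are made up for illustration:

```python
import numpy as np

def predict(x, w, b):
    """Linear regression model f_wb(x) = w*x + b for a single feature."""
    return w * x + b

# Toy inputs and made-up parameters, just to show the call
x_train = np.array([1.0, 2.0, 3.0])
print(predict(x_train, w=200.0, b=100.0))  # one prediction per example
```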
The cost function measures how far the model's predictions are from the actual labels; minimizing it pushes the predictions closer to the labels. It is given by:
$$J(w,b)=\frac{1}{2m}\sum_{i=0}^{m-1}(f_{w,b}(x^{(i)})-y^{(i)})^2$$
where $J(w,b)$ is the cost function, $m$ is the number of training examples, $y^{(i)}$ is the actual label for the $i$-th example, and $f_{w,b}(x^{(i)})$ is the model's prediction for it.
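A quick sketch of this cost in NumPy (my own illustration, assuming a single input feature and made-up toy data):

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Squared error cost J(w,b) = (1/2m) * sum((f_wb(x_i) - y_i)^2)."""
    m = x.shape[0]
    f_wb = w * x + b                      # predictions for all m examples
    return np.sum((f_wb - y) ** 2) / (2 * m)

# Toy data, chosen so that w=200, b=100 fits it perfectly (cost = 0)
x_train = np.array([1.0, 2.0, 3.0])
y_train = np.array([300.0, 500.0, 700.0])
print(compute_cost(x_train, y_train, w=200.0, b=100.0))
```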
To select the model's parameters, Gradient Descent is used:
- Repeat until convergence:
$$\begin{aligned} & \{ \\ & w=w-\alpha\frac{\partial J(w,b)}{\partial w} \\ & b=b-\alpha\frac{\partial J(w,b)}{\partial b} \\ & \} \end{aligned}$$
where:
$$\begin{aligned} & \frac{\partial J(w,b)}{\partial w}=\frac{1}{m}\sum_{i=0}^{m-1}(f_{w,b}(x^{(i)})-y^{(i)})x^{(i)} \\ & \frac{\partial J(w,b)}{\partial b}=\frac{1}{m}\sum_{i=0}^{m-1}(f_{w,b}(x^{(i)})-y^{(i)}) \end{aligned}$$
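Putting the update rule and the two partial derivatives together, here is a rough NumPy sketch of the loop (my own illustration; the learning rate `alpha`, the fixed iteration count, and the toy data are arbitrary choices, not values from the course):

```python
import numpy as np

def compute_gradient(x, y, w, b):
    """Partial derivatives of J(w,b) with respect to w and b."""
    m = x.shape[0]
    err = (w * x + b) - y              # f_wb(x_i) - y_i for every example
    dj_dw = np.sum(err * x) / m
    dj_db = np.sum(err) / m
    return dj_dw, dj_db

def gradient_descent(x, y, w, b, alpha, num_iters):
    """Repeat the simultaneous update of w and b for a fixed number of steps."""
    for _ in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        w = w - alpha * dj_dw          # both updates use gradients computed
        b = b - alpha * dj_db          # at the old (w, b), i.e. simultaneously
    return w, b

# Toy run: should converge roughly to w = 200, b = 100
x_train = np.array([1.0, 2.0, 3.0])
y_train = np.array([300.0, 500.0, 700.0])
w_final, b_final = gradient_descent(x_train, y_train, w=0.0, b=0.0,
                                    alpha=0.01, num_iters=10_000)
print(w_final, b_final)
```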
So yes, this is a conceptual recap of what I have learnt so far.