1. Introduction:
  1. There should be a linear and additive relationship between the dependent variable (output: y) and the independent variables (inputs: X’s). A linear relationship produces a straight line when plotted on a graph. An additive relationship means that the effect of each X on y is independent of the other variables.
  2. There should be no correlation between the error (residual) terms. The presence of such correlation is known as autocorrelation.
  3. The independent variables (X’s) should not be correlated with one another. The presence of such correlation is called multicollinearity.
  4. The error term must have constant variance, a property known as homoskedasticity. The presence of non-constant variance is referred to as heteroskedasticity.
  5. The error term must be normally distributed.
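Two of these assumptions can be checked numerically: autocorrelation of residuals via the Durbin-Watson statistic (values near 2 indicate no autocorrelation), and multicollinearity via the pairwise correlations between the X’s. A minimal sketch with NumPy, using illustrative synthetic data (the coefficients and sample sizes here are assumptions for the example, not from the original post):

```python
import numpy as np

# Hypothetical synthetic data: 100 samples, 2 independent features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

# Fit ordinary least squares with np.linalg.lstsq
A = np.column_stack([np.ones(len(X)), X])     # prepend a column of 1s for the intercept
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ theta

# Autocorrelation check: Durbin-Watson statistic (always in [0, 4]; ~2 means none)
dw = np.sum(np.diff(residuals) ** 2) / np.sum(residuals ** 2)

# Multicollinearity check: pairwise correlations between the X's
feature_corr = np.corrcoef(X, rowvar=False)

print(dw)
print(feature_corr)
```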
Linear Regression Hypothesis
Different hypotheses based on different values of θ’s
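The hypothesis equation itself did not survive extraction; the standard form for linear regression with n features, using the usual convention x₀ = 1, is:

```latex
h_\theta(x) = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n = \theta^{T} x
```

Each choice of the parameter vector θ gives a different candidate hypothesis (a different line or hyperplane).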
The cost function of Linear Regression
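The cost-function equation did not survive extraction; the standard mean-squared-error cost over m training examples is:

```latex
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta\!\left(x^{(i)}\right) - y^{(i)} \right)^{2}
```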
Gradient Descent algorithm (m = size of the training set)
Weight update
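The weight-update equation image did not survive extraction; the gradient-descent loop described above can be sketched as follows (a minimal batch implementation in NumPy with illustrative synthetic data; the learning rate and iteration count are assumptions for the example):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, n_iters=1000):
    """Batch gradient descent for linear regression.

    Repeats the weight update theta_j := theta_j - alpha * (1/m) * sum((h(x) - y) * x_j)
    for all parameters simultaneously.
    """
    m = len(y)
    A = np.column_stack([np.ones(m), X])         # prepend x0 = 1 for the intercept
    theta = np.zeros(A.shape[1])
    for _ in range(n_iters):
        gradient = (A.T @ (A @ theta - y)) / m   # dJ/dtheta for the MSE cost
        theta -= alpha * gradient
    return theta

# Usage on a synthetic line y = 2 + 3x plus small noise
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(200, 1))
y = 2.0 + 3.0 * x[:, 0] + rng.normal(scale=0.05, size=200)
theta = gradient_descent(x, y, alpha=0.5, n_iters=2000)
print(theta)   # should approach the true intercept 2 and slope 3
```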
  • data: contains the information for various houses
  • target: prices of the house
  • feature_names: names of the features
  • DESCR: describes the dataset
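The keys listed above match scikit-learn’s dataset “Bunch” container. A minimal sketch, assuming scikit-learn is installed; `load_diabetes` is used here because the Boston housing loader was removed from recent scikit-learn releases, but the keys are the same:

```python
from sklearn.datasets import load_diabetes

dataset = load_diabetes()
print(dataset.data.shape)            # (n_samples, n_features) feature matrix
print(dataset.target.shape)          # target values, one per sample
print(list(dataset.feature_names))   # names of the features
print(dataset.DESCR[:200])           # free-text description of the dataset
```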
Correlation matrix
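The correlation-matrix figure did not survive extraction; the same matrix can be computed with NumPy’s `corrcoef` (a sketch on hypothetical data — with a real dataset, pass its feature matrix instead):

```python
import numpy as np

# Hypothetical data: feature 2 is deliberately made dependent on feature 0
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
X[:, 2] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=500)

corr = np.corrcoef(X, rowvar=False)   # (3, 3) matrix of pairwise correlations
print(np.round(corr, 2))              # high |corr[0, 2]| flags multicollinearity
```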




Heena Sharma
