What does the line b0 + b1X mean?
Linear regression with a single regressor:
Y = b0 + b1X + u
It is the conditional mean of Y given X:
E[Y|X] = b0 + b1X
What does b1 mean?
Linear regression with a single regressor:
Y = b0 + b1X + u
b1 is the partial effect of X on E[Y|X] (on the conditional mean of Y given X):
b1 = ∂E[Y|X] / ∂X
∂ = partial derivative
What does the predicted value ^Y mean?
^Y (the estimate of Y):
The predicted conditional mean of Y given X:
^E[Y|X] = ^b0 + ^b1X
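A minimal sketch of the predicted value, assuming hypothetical estimates ^b0 = 2.0 and ^b1 = 0.5 (not values from the deck):

```python
# Predicted conditional mean: Y_hat = b0_hat + b1_hat * X
# b0_hat and b1_hat are hypothetical example values.
b0_hat, b1_hat = 2.0, 0.5

def predict(x):
    """Return the predicted conditional mean ^E[Y|X=x]."""
    return b0_hat + b1_hat * x

print(predict(10.0))  # 2.0 + 0.5 * 10 = 7.0
```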
What does the line b0 + b1X mean when Y is binary?
E[Y|X] = b0 + b1X
For a binary Y, E[Y|X] = P(Y=1|X):
the probability that Y = 1, conditional on X
Linear probability model (advantages)
- Inference is the same as for multiple regression (but heteroskedasticity-robust standard errors are needed)
Linear probability model (disadvantages)
- The predicted probability can fall below 0 or above 1
These disadvantages can be addressed by using a nonlinear probability model: probit and logit regression
A nonlinear functional form is used when we want:
0 ≤ P(Y=1|X) ≤ 1 for all X
Differences between linear and nonlinear functions (FORMULAS)
Linear: P(Y=1|X) = b0 + b1X
Nonlinear: P(Y=1|X) = F(b0 + b1X)
F is the cumulative distribution function (cdf) of a random variable
Probit regression model
P(Y=1|X) = Φ(b0 + b1X)
Φ is the cdf of the standard normal distribution
b0 + b1X is the "z value" of the probit model
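A sketch of the probit probability in Python; the coefficients b0 = -1 and b1 = 2 are hypothetical illustration values, and the standard normal cdf is built from the error function:

```python
import math

def norm_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

b0, b1 = -1.0, 2.0  # hypothetical coefficients

def probit_prob(x):
    z = b0 + b1 * x  # the "z value" of the probit model
    return norm_cdf(z)

print(probit_prob(0.5))  # z = 0, so the probability is 0.5
```

Because Φ is a cdf, the output always lies between 0 and 1, which is exactly what the nonlinear functional form is meant to guarantee.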
Why use the normal cdf? (probit regression)
- It guarantees 0 ≤ P(Y=1|X) ≤ 1 for all X, and the z value b0 + b1X has a straightforward interpretation
Probit regression model (formula)
P(Y=1|X) = Φ(b0 + b1X)
Logit regression model (formula)
P(Y=1|X) = F(b0 + b1X)
F(t) = 1 / (1 + e^(-t)) is the cdf of the logistic distribution
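A minimal sketch of the logit model, again with hypothetical coefficients b0 = -1 and b1 = 2:

```python
import math

def logistic_cdf(t):
    """Cdf of the logistic distribution: F(t) = 1 / (1 + e^(-t))."""
    return 1.0 / (1.0 + math.exp(-t))

b0, b1 = -1.0, 2.0  # hypothetical coefficients

def logit_prob(x):
    return logistic_cdf(b0 + b1 * x)

print(logit_prob(0.5))  # F(0) = 0.5
```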
How can we estimate b0 and b1?
- Maximum likelihood estimation (MLE)
What is the OLS estimator?
It is the pair of coefficient values that minimizes the sum of squared residuals:
min over b0, b1 of Σ [ Yi - (b0 + b1Xi) ]^2
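For a single regressor the minimizers have the familiar closed form ^b1 = Σ(Xi - X̄)(Yi - Ȳ) / Σ(Xi - X̄)^2 and ^b0 = Ȳ - ^b1X̄; a sketch with made-up toy data:

```python
# OLS with one regressor: closed-form minimizers of the sum of squared residuals.
xs = [1.0, 2.0, 3.0, 4.0]  # hypothetical toy data
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
     / sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar

print(b0, b1)  # for this data: intercept 1.15, slope 1.94
```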
Nonlinear least squares
min over b0, b1 of Σ [ Yi - Φ(b0 + b1Xi) ]^2
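A sketch of the nonlinear least squares idea for the probit mean function, using a crude grid search over hypothetical toy data (a real application would use a proper numerical optimizer):

```python
import math

def norm_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]  # hypothetical toy data
ys = [0, 1, 0, 1, 1]

def ssr(b0, b1):
    """Sum of squared residuals for the probit mean function."""
    return sum((y - norm_cdf(b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))

# Coarse grid search over candidate coefficients (illustrative only).
best = min(
    ((b0 / 10.0, b1 / 10.0) for b0 in range(-30, 31) for b1 in range(-30, 31)),
    key=lambda p: ssr(*p),
)
print(best, ssr(*best))  # the best pair beats the flat fit b0 = b1 = 0
```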
Likelihood function
The joint probability distribution of the data, treated as a function of the unknown coefficients
Maximum likelihood estimator (MLE)
The coefficient values that maximize the likelihood function: the parameter values for which the observed data are most likely
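The MLE idea can be sketched for a logit model: the log-likelihood is Σ [ yi log F(zi) + (1 - yi) log(1 - F(zi)) ] with zi = b0 + b1xi, and here it is maximized by plain gradient ascent on hypothetical toy data (an illustrative choice, not the algorithm statistical software uses):

```python
import math

def F(t):
    """Logistic cdf."""
    return 1.0 / (1.0 + math.exp(-t))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]  # hypothetical toy data
ys = [0, 1, 0, 1, 1]

b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(20000):
    # Gradient of the logit log-likelihood in (b0, b1).
    g0 = sum(y - F(b0 + b1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - F(b0 + b1 * x)) * x for x, y in zip(xs, ys))
    b0 += lr * g0
    b1 += lr * g1

print(round(b0, 2), round(b1, 2))  # maximum-likelihood estimates
```

At the maximum the gradient (the score) is zero; since ones are more common at larger x in this toy data, the fitted slope comes out positive.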