What is the standard error formula when there are multiple regressors?
y_i = β0 + Σ_{j=1}^{k} β_j x_ij + u_i

se(β̂_j) = √(1/n) · √( Var̂(û_i) / Var̂(x̃_ij) )

Above, x̃_ij are the residuals from a regression of x_ij on all the other regressors.
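As a numerical check on the formula above, here is a minimal NumPy sketch (simulated data; all names and numbers are illustrative). It partials x1 out of the other regressors and applies se(β̂1) = √( Var̂(û) / (n · Var̂(x̃1)) ):

```python
import numpy as np

# Simulated data (illustrative): y = 1 + 2*x1 - 1*x2 + u
rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)      # correlated regressors
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

# Full regression: y on [1, x1, x2]
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta                     # residuals u-hat

# Residuals of x1 from a regression on all other regressors (here [1, x2])
Z = np.column_stack([np.ones(n), x2])
g, *_ = np.linalg.lstsq(Z, x1, rcond=None)
x1_tilde = x1 - Z @ g                    # x-tilde for x1

# Frisch-Waugh-Lovell: the slope of y on x1_tilde alone equals beta[1]
b1_fwl = (x1_tilde @ y) / (x1_tilde @ x1_tilde)

# se(beta1-hat) per the formula above
se_b1 = np.sqrt(u_hat.var() / (n * x1_tilde.var()))
print(beta[1], b1_fwl, se_b1)
```

With a large n, β̂1 lands near the true value 2 and the standard error is small; the partialled-out slope reproduces the full-regression coefficient exactly.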
What is the t-stat for the following hypothesis?
H0: β1 − β2 = 0
H1: β1 − β2 ≠ 0
Need SE(β̂1 − β̂2)
Var(β̂1 − β̂2) = Var(β̂1) + Var(β̂2) − 2 Cov(β̂1, β̂2).
Thus:
t-stat = (β̂1 − β̂2 − 0) / √( Var̂(β̂1) + Var̂(β̂2) − 2 Cov̂(β̂1, β̂2) )
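The same computation can be sketched numerically (simulated data; a homoskedastic variance estimator is assumed): the variances and the covariance come from the estimated covariance matrix s²(X′X)⁻¹.

```python
import numpy as np

# Simulated data where H0: beta1 = beta2 is true (illustrative)
rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 1.5 * x1 + 1.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
u_hat = y - X @ beta
s2 = u_hat @ u_hat / (n - X.shape[1])        # sigma-squared-hat
V = s2 * np.linalg.inv(X.T @ X)              # Var-hat of beta-hat (homoskedastic)

# Var(b1 - b2) = Var(b1) + Var(b2) - 2*Cov(b1, b2)
var_diff = V[1, 1] + V[2, 2] - 2 * V[1, 2]
t_stat = (beta[1] - beta[2]) / np.sqrt(var_diff)
print(t_stat)
```

Since H0 holds in the simulated data, the t-stat should be a draw from roughly a standard normal, i.e. small in absolute value.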
What is a joint hypothesis?
A hypothesis that imposes two or more equalities at once, e.g.
H0: β1 = 0 and β2 = 0
Could you do separate t-tests for each?
No: that would be imprecise, as each separate test ignores half of the hypothesis.
What test do you use for a joint hypothesis, then?
We need to do an F-test.
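A minimal sketch of the SSR-based F-test for H0: β1 = 0 and β2 = 0 (simulated data; this assumes the homoskedastic formula F = ((SSR_r − SSR_ur)/q) / (SSR_ur/(n − k − 1))):

```python
import numpy as np

# Simulated data (illustrative); both slopes are nonzero, so H0 is false
rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.3 * x1 + 0.3 * x2 + rng.normal(size=n)

def ssr(X, y):
    """Sum of squared residuals from OLS of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return r @ r

X_ur = np.column_stack([np.ones(n), x1, x2])   # unrestricted model
X_r = np.ones((n, 1))                          # restricted: beta1 = beta2 = 0
q = 2                                          # number of restrictions
k = 2                                          # number of regressors
F = ((ssr(X_r, y) - ssr(X_ur, y)) / q) / (ssr(X_ur, y) / (n - k - 1))
print(F)
```

Because both true slopes are nonzero, the F-statistic comes out far above the ~3.0 critical value for F(2, n − 3) at the 5% level.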
Why is that the formula for the s.e. with multiple regressors?
How do you evaluate a t-stat involving multiple coefficients?
Take SE(β̂1 + β̂2 … etc.)
Calculate the Var(" ")
Take the square root for the standard deviation
Replace with sample estimates (putting a hat on it)
We have formulas for Var(β̂1) and Var(β̂2) from earlier.
What are Var(β̂1) and Var(β̂2)?
They come from the standard-error formula for multiple regressors given earlier: Var(β̂_j) = se(β̂_j)².
When is the F-test valid?
When there is homoskedasticity (the SSR-based F formula assumes homoskedastic errors).
For the test of a single restriction, how are the F and t tests equivalent?
F = t²: the F-statistic for a single restriction equals the square of the corresponding t-statistic, so the two tests always agree.
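The equivalence for a single restriction is F = t², which can be checked numerically. A sketch under the homoskedastic formulas (simulated data; names illustrative):

```python
import numpy as np

# Simulated data (illustrative): y = 1 + 0.5*x + u
rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

# t-test of H0: beta1 = 0 in the unrestricted regression
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
u_hat = y - X @ beta
s2 = u_hat @ u_hat / (n - 2)
V = s2 * np.linalg.inv(X.T @ X)
t = beta[1] / np.sqrt(V[1, 1])

# F-test of the same single restriction via the SSR comparison
ssr_ur = u_hat @ u_hat
r = y - y.mean()                     # restricted model: intercept only
ssr_r = r @ r
F = ((ssr_r - ssr_ur) / 1) / (ssr_ur / (n - 2))
print(t**2, F)
```

The two printed numbers agree up to floating-point error, illustrating that squaring the t-statistic reproduces the F-statistic for a single restriction.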