A measure of the degree to which variation of one variable is related to variation in one or more other variables. The most commonly used correlation coefficient indicates the degree to which variation in one variable is described by a straight line relation with another variable.

Suppose that sample information is available on family income and years of schooling of the head of the household. A correlation coefficient of 0 would indicate no linear association at all between these two variables. A correlation of 1 would indicate perfect linear association (where all variation in family income could be associated with schooling and vice versa).

Definition:

A t test is obtained by dividing a regression coefficient by its standard error and then comparing the result to critical values for Student's t with the error degrees of freedom (*df*). It provides a test of the claim that ${\beta}_{i}=0$ when all other variables have been included in the relevant regression model.

Example:

Suppose that 4 variables are suspected of influencing some response. Suppose that the results of fitting ${Y}_{i}={\beta}_{0}+{\beta}_{1}{X}_{1i}+{\beta}_{2}{X}_{2i}+{\beta}_{3}{X}_{3i}+{\beta}_{4}{X}_{4i}+{e}_{i}$ include:

| Variable | Regression coefficient | Standard error of regression coefficient |
|---|---|---|
| 1 | -3 | .5 |
| 2 | +2 | .4 |
| 3 | +1 | .02 |
| 4 | -.5 | .6 |

The t calculated for variables 1, 2, and 3 would be 5 or larger in absolute value, while that for variable 4 would be less than 1. For most significance levels, the hypothesis ${\beta}_{1}=0$ would be rejected. But notice that this is for the case where ${X}_{2}$, ${X}_{3}$, and ${X}_{4}$ have been included in the regression. For most significance levels, the hypothesis ${\beta}_{4}=0$ would be retained for the case where ${X}_{1}$, ${X}_{2}$, and ${X}_{3}$ are in the regression. Often this pattern of results leads to fitting another regression involving only ${X}_{1}$, ${X}_{2}$, and ${X}_{3}$, and examining the t ratios produced for that case.
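The t ratios above can be reproduced with a few lines of arithmetic. This is a sketch; it assumes the tabled figures pair variable 1 with coefficient -3 and standard error .5, variable 2 with +2 and .4, and so on, consistent with the conclusions stated in the text:

```python
# t ratio for each coefficient: coefficient / standard error.
coefficients = {1: -3.0, 2: 2.0, 3: 1.0, 4: -0.5}
std_errors   = {1: 0.5, 2: 0.4, 3: 0.02, 4: 0.6}

t_ratios = {v: coefficients[v] / std_errors[v] for v in coefficients}
for v, t in sorted(t_ratios.items()):
    print(v, round(t, 2))   # roughly -6, 5, 50, -0.83
```

Variables 1-3 give |t| of 5 or more (reject ${\beta}_{i}=0$ at most significance levels); variable 4 gives |t| well under 1 (retain ${\beta}_{4}=0$).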

False. Since ${H}_{0}\text{:}\phantom{\rule{0.2em}{0ex}}\beta =\mathrm{-1}$ would not be rejected at $\alpha =0.05$, it would not be rejected at $\alpha =0.01$.

Some variables seem to be related, so that knowing one variable's status allows us to predict the status of the other. This relationship can be measured and is called correlation. However, a high correlation between two variables in no way proves that a cause-and-effect relation exists between them. It is entirely possible that a third factor causes both variables to vary together.

${Y}_{j}={b}_{0}+{b}_{1}\cdot {X}_{1}+{b}_{2}\cdot {X}_{2}+{b}_{3}\cdot {X}_{3}+{b}_{4}\cdot {X}_{4}+{b}_{5}\cdot {X}_{5}+{e}_{j}$

The precision of the estimate of the Y variable depends on the range of the independent (X) variable explored. If we explore a very small range of the X variable, we won't be able to make much use of the regression. Also, extrapolation is not recommended.

Most simply, since −5 is included in the confidence interval for the slope, we can conclude that the evidence is consistent with the claim at the 95% confidence level.

Using a t test:

${H}_{0}$: ${B}_{1}=\mathrm{-5}$

${H}_{A}$: ${B}_{1}\ne \mathrm{-5}$

${t}_{\text{calculated}}=\frac{\mathrm{-4}-(\mathrm{-5})}{1}=1$

${t}_{\text{critical}}=\pm 1.96$

Since $|{t}_{\text{calc}}|$ < ${t}_{\text{crit}}$, we retain the null hypothesis that ${B}_{1}=\mathrm{-5}$.
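The calculation can be sketched in code. The slope estimate of -4, standard error of 1, and critical value of 1.96 are the figures used above:

```python
# Two-sided t test of H0: B1 = -5 against HA: B1 != -5.
b1_hat = -4.0     # estimated slope
b1_null = -5.0    # hypothesized slope under H0
se = 1.0          # standard error of the slope estimate

t_calc = (b1_hat - b1_null) / se   # = 1.0
t_crit = 1.96                      # two-tailed critical value, alpha = .05
reject = abs(t_calc) > t_crit      # False: retain H0
print(t_calc, reject)
```

Because |t| = 1 falls inside ±1.96, the data are consistent with a slope of -5, matching the confidence-interval reasoning above.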

True.

${t}_{\text{critical}}$ (df = 23, two-tailed, $\alpha = .02$) = ±2.5

${t}_{\text{critical}}$ (df = 23, two-tailed, $\alpha = .01$) = ±2.8

- $80+1.5\cdot 4=86$
- No. Most business statisticians would not want to extrapolate that far. If someone did, the estimate would be $80+1.5\cdot 20=110$, but other factors probably come into play at 20 years of schooling.
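The point prediction in the first bullet, together with the caution against extrapolating, can be sketched as follows. The intercept 80 and slope 1.5 come from the exercise; the observed schooling range of 0 to 10 years is an assumption made for this example:

```python
# Point prediction from the fitted line Y-hat = 80 + 1.5 * X,
# with a crude guard against extrapolating beyond the observed X range.
def predict(x, b0=80.0, b1=1.5, x_min=0.0, x_max=10.0):
    if not (x_min <= x <= x_max):
        raise ValueError("x outside observed range; extrapolation not recommended")
    return b0 + b1 * x

print(predict(4))   # 86.0
# predict(20) would raise: 20 years is far outside the assumed data range.
```

The guard makes the warning in the answer concrete: the fitted line is only trusted over the range of X actually explored.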

- The population value for ${\beta}_{2}$, the change that occurs in Y with a unit change in ${X}_{2}$, when the other variables are held constant.
- The population value for the standard error of the distribution of estimates of ${\beta}_{2}$.
- .8, .1, and 16 (df = 20 − 4).