- Binomial Distribution
- a discrete random variable (RV) that arises from Bernoulli trials. There are a fixed number, n, of independent trials. “Independent” means that the result of any trial (for example, trial 1) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV X is defined as the number of successes in n trials. The notation is X ~ B(n, p). The mean is μ = np and the standard deviation is $\sigma =\sqrt{npq}$, where q = 1 − p is the probability of failure. The probability of exactly x successes in n trials is $P(X=x)=\left(\begin{array}{c}n\\ x\end{array}\right){p}^{x}{q}^{n-x}$.
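The formula above can be sketched directly with Python's standard library; the values n = 10 and p = 0.5 are illustrative, not from the text:

```python
import math

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ B(n, p): C(n, x) * p^x * q^(n - x), with q = 1 - p."""
    q = 1 - p
    return math.comb(n, x) * p**x * q**(n - x)

n, p = 10, 0.5
mean = n * p                       # mu = np
sd = math.sqrt(n * p * (1 - p))    # sigma = sqrt(npq)
prob = binomial_pmf(3, n, p)       # P(X = 3) = C(10, 3) / 2^10 = 0.1171875
```

Summing the pmf over x = 0, …, n returns 1, a quick sanity check that the formula is a valid probability distribution.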
- Central Limit Theorem
- Given a random variable (RV) with known mean $\mu $ and known standard deviation σ, we are sampling with size n and we are interested in two new RVs - the sample mean, $\overline{X}$, and the sample sum, $\Sigma X$. If the size n of the sample is sufficiently large, then $\overline{X}\sim N\left(\mu ,\frac{\sigma}{\sqrt{n}}\right)$ and $\Sigma X\sim N(n\mu ,\sqrt{n}\sigma )$. If the size n of the sample is sufficiently large, then the distribution of the sample means and the distribution of the sample sums will approximate a normal distribution regardless of the shape of the population. The mean of the sample means will equal the population mean, and the mean of the sample sums will equal n times the population mean. The standard deviation of the distribution of the sample means, $\frac{\sigma}{\sqrt{n}}$, is called the standard error of the mean.
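A small simulation illustrates the theorem. Here the population is uniform on [0, 1] (so μ = 0.5 and σ = 1/√12, clearly not normal); the sample size n = 40 and the number of repeated samples are arbitrary choices for the sketch:

```python
import random
import statistics

random.seed(1)
n = 40            # size of each sample
reps = 2000       # number of repeated samples

# Population: uniform on [0, 1], mu = 0.5, sigma = 1 / sqrt(12)
sample_means = [statistics.fmean(random.random() for _ in range(n))
                for _ in range(reps)]

center = statistics.fmean(sample_means)     # approx mu = 0.5
spread = statistics.stdev(sample_means)     # approx sigma / sqrt(n) (standard error)
```

The empirical center of the sample means lands near μ, and their spread lands near σ/√n, matching $\overline{X}\sim N\left(\mu ,\frac{\sigma}{\sqrt{n}}\right)$.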
- Confidence Interval (CI)
- an interval estimate for an unknown population parameter. This depends on:
- The desired confidence level.
- Information that is known about the distribution (for example, known standard deviation).
- The sample and its size.
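As a sketch of the three ingredients above, here is a 95% confidence interval for a population mean when the population standard deviation is known (the data, σ = 2.0, and the z critical value 1.96 are all hypothetical choices for illustration):

```python
import math
import statistics

data = [102.5, 98.7, 100.1, 101.3, 99.6, 100.8, 97.9, 102.0]  # hypothetical sample
sigma = 2.0    # assumed known population standard deviation
z = 1.96       # critical value for 95% confidence

xbar = statistics.fmean(data)
margin = z * sigma / math.sqrt(len(data))   # error bound for the mean
ci = (xbar - margin, xbar + margin)
```

Raising the confidence level (a larger z), or shrinking the sample, widens the interval; a larger sample narrows it.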
- Hypothesis
- a statement about the value of a population parameter. When there are two hypotheses, the statement assumed to be true is called the null hypothesis (notation H_{0}) and the contradictory statement is called the alternative hypothesis (notation H_{a}).
- Hypothesis Testing
- Based on sample evidence, a procedure for determining whether the hypothesis stated is a reasonable statement and should not be rejected, or is unreasonable and should be rejected.
- Level of Significance of the Test
- probability of a Type I error (reject the null hypothesis when it is true). Notation: α. In hypothesis testing, the Level of Significance is called the preconceived α or the preset α.
- Normal Distribution
- a continuous random variable (RV) with pdf $f(x)=\frac{1}{\sigma \sqrt{2\pi}}{e}^{\frac{-{(x-\mu )}^{2}}{2{\sigma}^{2}}}$, where μ is the mean of the distribution, and σ is the standard deviation, notation: X ~ N(μ, σ). If μ = 0 and σ = 1, the RV is called the standard normal distribution.
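The pdf above translates directly into code; as a quick check, the standard normal density peaks at $\frac{1}{\sqrt{2\pi}}\approx 0.3989$ and is symmetric about its mean:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma) at x: (1 / (sigma * sqrt(2 pi))) * exp(-(x - mu)^2 / (2 sigma^2))."""
    coeff = 1 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

peak = normal_pdf(0)          # standard normal: mu = 0, sigma = 1
```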
- p-value
- the probability, computed assuming the null hypothesis is true, of obtaining a sample result at least as extreme as the one observed purely by chance. The smaller the p-value, the stronger the evidence is against the null hypothesis.
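A minimal sketch of a two-tailed p-value for a one-sample z-test, using the error function for the normal CDF; the null mean 100, sample mean 103, σ = 10, and n = 50 are all hypothetical numbers:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# H0: mu = 100.  Hypothetical sample: xbar = 103, known sigma = 10, n = 50.
z = (103 - 100) / (10 / math.sqrt(50))
p_value = 2 * (1 - normal_cdf(z))   # two-tailed
```

Here the p-value comes out below a preset α of 0.05, so the decision would be to reject H_{0}.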
- Standard Deviation
- a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: s for sample standard deviation and σ for population standard deviation.
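The sample/population distinction above maps onto two different divisors (n − 1 versus n), which Python's standard library exposes directly; the data set is an arbitrary example:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]       # example data, mean = 5
s = statistics.stdev(data)            # sample sd: divides squared deviations by n - 1
sigma = statistics.pstdev(data)       # population sd: divides by n; here exactly 2.0
```

The sample statistic s is always at least as large as σ computed from the same values, since it divides by the smaller n − 1.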
- Student's t-Distribution
- investigated and reported by William S. Gosset in 1908 and published under the pseudonym Student. The major characteristics of the random variable (RV) are:
- It is continuous and assumes any real values.
- The pdf is symmetrical about its mean of zero. However, it is more spread out and flatter at the apex than the normal distribution.
- It approaches the standard normal distribution as n gets larger.
- There is a "family" of t distributions: every representative of the family is completely defined by the number of degrees of freedom which is one less than the number of data items.
- Type I Error
- The decision is to reject the null hypothesis when, in fact, the null hypothesis is true.
- Type II Error
- The decision is not to reject the null hypothesis when, in fact, the null hypothesis is false.