Analysis of Variance, commonly known as ANOVA, is used to compare three or more population means. It has many useful business applications, such as determining:

- If the average amount of time spent per month on Facebook differs between various age groups (three or more age groups)

- If the average number of sales calls per day differs between sales representatives (three or more sales representatives)

Every time you conduct a t-test there is a chance that you will make a Type I error, usually set at 5%. Running multiple t-tests on the same data inflates this chance. The combined error rate is not simply 5% multiplied by the number of tests, although for a small number of comparisons the results are similar: two tests give roughly 10%, three roughly 15%. For example, with three groups A, B and C, we would carry out three t-tests (A vs B, A vs C, and B vs C), each at a significance level of 5% (alpha = 0.05). The overall probability of making no Type I error is then 0.95 * 0.95 * 0.95 = 0.857, so the probability of at least one Type I error is 1 - 0.857 = 0.143. These error rates are unacceptable. ANOVA controls for them so that the Type I error remains at 5%, and you can be more confident that a statistically significant result is not just an artifact of running lots of tests.
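The multiplication above can be sketched in a few lines of Python (the function name is just for illustration):

```python
# Familywise Type I error rate across k independent tests, each run at alpha.
def familywise_error(k, alpha=0.05):
    """Probability of at least one Type I error across k tests."""
    return 1 - (1 - alpha) ** k

# Three pairwise t-tests (A vs B, A vs C, B vs C) at alpha = 0.05:
print(round(familywise_error(3), 3))  # 0.143, as computed above
```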

Example 1:

A firm that studies customer satisfaction conducts a survey to measure how satisfied customers are with several smartphones.

In the above table, ‘average score’ is the dependent variable and ‘type of phone’ is the independent variable. This independent variable has 4 groups.

Here, we can employ an ANOVA procedure to test whether we have enough evidence from this sample to conclude that the average satisfaction scores of these populations of phone users differ from one another. Using an F-test, ANOVA determines whether the variation in satisfaction scores is due to the type of phone (*between-group variation*) or simply due to randomness (*within-group variation*). (F-tests are named after their test statistic, F, which was named in honor of Sir Ronald Fisher. The F-statistic is simply a ratio of two variances; to use the F-test to determine whether group means are equal, it is just a matter of including the correct variances in the ratio. In one-way ANOVA: **F = variation between sample means / variation within the samples**.)

F-test compares the amount of systematic variance (variance between groups) in the data to the amount of unsystematic (error) variance (variance within groups – the variance that cannot be explained by the independent variable).

In this example, we use a One-way ANOVA since there is one dependent variable and one independent variable with more than two treatment levels. In situations where two factors (independent variables) are tested, and the interaction between the two factors should also be tested, the procedure is referred to as a Two-way ANOVA. As one might expect, this concept extends beyond two factors to ‘N’ factors.

Further, note that ANOVA rests on certain assumptions:

- Data should be from a normally distributed population.

- The variance in each experimental condition is fairly similar, also referred to as homogeneity of variance.

- The observations should be independent and random.

- The dependent variable should be measured on at least an interval scale.
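As a quick sketch of how the first two assumptions might be checked in practice (assuming SciPy is available; the three samples below are purely illustrative):

```python
from scipy import stats

# Illustrative samples for three hypothetical groups.
group_a = [5, 4, 8, 6, 3]
group_b = [9, 7, 8, 6, 9]
group_c = [3, 5, 2, 3, 7]

# Normality: Shapiro-Wilk test per group (a large p-value suggests
# no evidence against normality).
for g in (group_a, group_b, group_c):
    w, p = stats.shapiro(g)
    print(round(p, 3))

# Homogeneity of variance: Levene's test across all groups.
stat, p_levene = stats.levene(group_a, group_b, group_c)
print(round(p_levene, 3))
```

With samples this small such tests have little power; they are shown only to illustrate the mechanics.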

**Null Hypothesis** (**H0**): µ1 = µ2 = µ3 = … = µc

*(all population means are equal for ‘c’ different groups, i.e. there is no variation in means among the groups)*

**Alternate Hypothesis** (**H1**): Not all the population means are equal

*(at least one population mean is different to the others)*

To perform an F-test for differences in more than two means, we should calculate the following:

- Total Sum of Squares (SST): SST = SSB + SSW
- Sum of Squares Between Groups (SSB)
- Sum of Squares Within Groups (SSW)
- Mean Square Between (MSB): MSB = SSB/(c-1)
- Mean Square Within (MSW): MSW = SSW/(n-c)
- F statistic: F = MSB/MSW

Steps to calculate the above are:

**SST** = Σj Σi (Xij - X̄)^2

where:

- X̄ = grand mean (mean of all values in all groups combined)
- Xij = *i*th value in group *j*
- nj = number of values in group *j*
- n = total number of values in all groups combined
- c = number of groups

**SSB** = Σj nj (X̄j - X̄)^2, where X̄j = sample mean of group *j*

**SSW** = Σj Σi (Xij - X̄j)^2
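These formulas can be sketched directly in plain Python; the function below works for any number of groups (the data in the usage line is illustrative):

```python
def one_way_anova(groups):
    """Return (SSB, SSW, MSB, MSW, F) for a list of sample groups."""
    n = sum(len(g) for g in groups)        # total number of values
    c = len(groups)                        # number of groups
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group variation: nj * (group mean - grand mean)^2
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group variation: (x - group mean)^2 summed over every group
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (c - 1)
    msw = ssw / (n - c)
    return ssb, ssw, msb, msw, msb / msw

ssb, ssw, msb, msw, f = one_way_anova([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(f)  # 3.0
```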

To give an illustration of how to perform calculations in a One-way ANOVA test:

Example – the data below represent the number of hours a laptop battery lasts (after one full charge cycle) for 5 different brands of laptop manufacturers (Dell, Apple, HP, Sony and ASUS), owned by 25 people. The 25 owners were randomly divided into 5 groups and each group was assigned a different brand. Assume the confidence level to be 95%.

| Dell | Apple | HP | Sony | ASUS |
| --- | --- | --- | --- | --- |
| 5 | 9 | 3 | 2 | 7 |
| 4 | 7 | 5 | 3 | 6 |
| 8 | 8 | 2 | 4 | 9 |
| 6 | 6 | 3 | 1 | 4 |
| 3 | 9 | 7 | 4 | 7 |
| Mean = 5.2 | Mean = 7.8 | Mean = 4.0 | Mean = 2.8 | Mean = 6.6 |

In this example, ‘number of hours of battery life’ is the dependent variable and ‘brand of laptop’ is the independent variable. This independent variable has 5 groups. Also, α = 0.05.

**SSW** = (5-5.2)^2 + (4-5.2)^2 + (8-5.2)^2 + (6-5.2)^2 + (3-5.2)^2 + (9-7.8)^2 + (7-7.8)^2 +

(8-7.8)^2 + (6-7.8)^2 + (9-7.8)^2 + (3-4)^2 + (5-4)^2 + (2-4)^2 + (3-4)^2 + (7-4)^2 +

(2-2.8)^2 + (3-2.8)^2 + (4-2.8)^2 + (1-2.8)^2 + (4-2.8)^2 + (7-6.6)^2 + (6-6.6)^2 +

(9-6.6)^2 + (4-6.6)^2 + (7-6.6)^2 = **57.6**

**SSB** = 5(5.2-5.28)^2 + 5(7.8-5.28)^2 + 5(4-5.28)^2 + 5(2.8-5.28)^2 + 5(6.6-5.28)^2 = **79.44**

**SST** = (5-5.28)^2 + (4-5.28)^2 + (8-5.28)^2 + (6-5.28)^2 + (3-5.28)^2 + (9-5.28)^2 + (7-5.28)^2 +

(8-5.28)^2 + (6-5.28)^2 + (9-5.28)^2 + (3-5.28)^2 + (5-5.28)^2 + (2-5.28)^2 + (3-5.28)^2 +

(7-5.28)^2 + (2-5.28)^2 + (3-5.28)^2 + (4-5.28)^2 + (1-5.28)^2 + (4-5.28)^2 + (7-5.28)^2 +

(6-5.28)^2 + (9-5.28)^2 + (4-5.28)^2 + (7-5.28)^2 = **137.04**
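As a check, these sums of squares can be reproduced in plain Python. The 25 values below are read off the deviation terms in the calculations above; mapping each group of five values to a brand in the order listed is an assumption.

```python
# Five observations per brand, recovered from the (x - group mean) terms above.
# Brand labels follow the order listed in the example (an assumption).
dell  = [5, 4, 8, 6, 3]   # group mean 5.2
apple = [9, 7, 8, 6, 9]   # group mean 7.8
hp    = [3, 5, 2, 3, 7]   # group mean 4.0
sony  = [2, 3, 4, 1, 4]   # group mean 2.8
asus  = [7, 6, 9, 4, 7]   # group mean 6.6
groups = [dell, apple, hp, sony, asus]

all_vals = [x for g in groups for x in g]
grand_mean = sum(all_vals) / len(all_vals)                        # 5.28

ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
sst = sum((x - grand_mean) ** 2 for x in all_vals)
f = (ssb / 4) / (ssw / 20)                                        # MSB / MSW

print(round(ssw, 2), round(ssb, 2), round(sst, 2), round(f, 2))
# 57.6 79.44 137.04 6.9
```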

Now, we conduct a hypothesis test of whether the mean number of hours the battery runs after a full charge cycle is the same for all 5 brands.

**H0** : µ1 = µ2 = µ3 = µ4 = µ5 **H1** : not all population means are equal

Decision rule: Reject the null hypothesis (H0) if **Fcalc** > **Fcrit**

Test statistic: **Fcalc** = MSB/MSW = (79.44/4) / (57.6/20) = 19.86/2.88 = 6.90

Critical value: **Fcrit** = **F**α, c-1, n-c = **F**0.05, 4, 20 = 2.87

Therefore, since the **F** calculated value of 6.90 is greater than the **F** critical value of 2.87, we reject the null hypothesis and conclude that the mean number of hours the battery runs after a full charge cycle is not the same across all 5 brands of laptops.
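Assuming SciPy is available, the whole test can be reproduced in a few lines: `f_oneway` returns the F statistic and its p-value, and the quantile function of the F distribution gives the critical value. (The values below are again read off the calculations above, with the brand order assumed from the example.)

```python
from scipy import stats

# Observations per brand (brand order assumed from the example).
dell  = [5, 4, 8, 6, 3]
apple = [9, 7, 8, 6, 9]
hp    = [3, 5, 2, 3, 7]
sony  = [2, 3, 4, 1, 4]
asus  = [7, 6, 9, 4, 7]

# One-way ANOVA: F statistic and p-value in one call.
f_stat, p_value = stats.f_oneway(dell, apple, hp, sony, asus)

# Critical value F(alpha = 0.05; df1 = c-1 = 4, df2 = n-c = 20).
f_crit = stats.f.ppf(0.95, dfn=4, dfd=20)

print(round(f_stat, 2), round(f_crit, 2), p_value < 0.05)
# 6.9 2.87 True
```

The p-value makes the decision rule simpler in code: reject H0 whenever it falls below alpha.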

This concludes our walkthrough of how to conduct a One-way ANOVA test.

In our next blog, we will discuss how a One-way ANOVA test is conducted to find out whether the population means across groups are equal (or whether at least one of them differs). Read our next blog.