These results indicate statistically significant differences, but a very small effect size.
The probability value, or p-value, in hypothesis testing is the probability of obtaining an effect at least as extreme as the one observed, assuming the null hypothesis is true. In other words, the p-value indicates how plausible it is that the observed effect arose by chance rather than from the experimental manipulation.
A high p-value means the data are consistent with the null hypothesis (that the variables have no effect), so the null hypothesis cannot be rejected. A low p-value means the observed data would be unlikely if the null hypothesis were true. The conventional threshold is p = .05: when the p-value is less than or equal to .05, the null hypothesis is rejected and the observed effect is considered statistically significant.
Since the p-value given in the question is exactly .05, it still meets this threshold and the result is statistically significant.
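As a rough illustration (not part of the original question), the sketch below uses Python with SciPy to run an independent-samples t-test on made-up data and compare the resulting p-value against the .05 threshold. The group values and variable names are purely hypothetical.

```python
# Minimal sketch: obtaining a p-value from a t-test and applying the .05 rule.
# The data below are invented for illustration only.
from scipy import stats

group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.4, 5.1]
group_b = [4.7, 4.6, 4.9, 4.5, 4.8, 4.6, 4.7, 4.9]

# Independent-samples t-test: tests the null hypothesis that the two
# groups have the same population mean.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05
if p_value <= alpha:
    print(f"p = {p_value:.3f} <= {alpha}: reject the null hypothesis (statistically significant)")
else:
    print(f"p = {p_value:.3f} > {alpha}: fail to reject the null hypothesis")
```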
The effect size, in simple terms, is the magnitude of the effect observed in a sample: the size of the difference between groups. A common measure of effect size is Cohen's d, for which d = 0.2, 0.5, and 0.8 correspond to small, medium, and large effects, respectively.
In the question above, d = .15, which is below even the 0.2 benchmark for a small effect, so the effect size is very small.
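For illustration, here is a small Python sketch of how Cohen's d can be computed as the standardized difference between two group means using a pooled standard deviation; the sample data and the helper name `cohens_d` are hypothetical.

```python
# Minimal sketch: Cohen's d = (mean difference) / (pooled standard deviation).
# The data below are invented for illustration only.
import math

group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.4, 5.1]
group_b = [4.7, 4.6, 4.9, 4.5, 4.8, 4.6, 4.7, 4.9]

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    mean_x, mean_y = sum(x) / nx, sum(y) / ny
    # Sample variances (n - 1 in the denominator).
    var_x = sum((v - mean_x) ** 2 for v in x) / (nx - 1)
    var_y = sum((v - mean_y) ** 2 for v in y) / (ny - 1)
    # Pooled standard deviation across the two groups.
    pooled_sd = math.sqrt(((nx - 1) * var_x + (ny - 1) * var_y) / (nx + ny - 2))
    return (mean_x - mean_y) / pooled_sd

d = cohens_d(group_a, group_b)
# Benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large.
print(f"Cohen's d = {d:.2f}")
```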
Learn more about statistical significance here: brainly.com/question/15848236