Please help explain this to me. I have tried working through an example, but I still don't understand it.

The mean of the [tex]n=15[/tex] estimates is [tex]\bar x=62.2[/tex] seconds, and the sample standard deviation is about [tex]s=18.1[/tex] seconds. Then the test statistic is
[tex]t=\dfrac{\bar x-\mu_0}{\frac s{\sqrt n}}[/tex]
where [tex]\mu_0[/tex] is the mean under the null hypothesis. So
[tex]t=\dfrac{62.2-60}{\frac{18.1}{\sqrt{15}}}=\dfrac{2.2}{4.67}\approx0.47[/tex]
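If it helps to see the arithmetic spelled out, here is a quick check (a minimal Python sketch; the variable names are mine, and the numbers are just the ones from the problem):

[code]
import math

n = 15        # sample size
xbar = 62.2   # sample mean (seconds)
s = 18.1      # sample standard deviation
mu0 = 60      # mean under the null hypothesis

se = s / math.sqrt(n)  # standard error of the mean, about 4.67
t = (xbar - mu0) / se  # test statistic, about 0.47
print(t)               # prints 0.4707...
[/code]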
We go on to find, using a [tex]t[/tex]-distribution with [tex]n-1=14[/tex] degrees of freedom,
[tex]P(T>0.47)\approx0.32[/tex]
Since this probability (the p-value) is much larger than the usual significance level of [tex]\alpha=0.05[/tex], the difference is not statistically significant, and you fail to reject the null hypothesis.
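You can also get the p-value directly instead of reading it off a table (this sketch assumes you have scipy installed; stats.t.sf gives the upper-tail probability of a t-distribution):

[code]
from scipy import stats

t_stat = 0.4707             # test statistic from above
df = 15 - 1                 # degrees of freedom, n - 1 = 14
p = stats.t.sf(t_stat, df)  # P(T > 0.4707) with 14 df
print(p)                    # about 0.32
[/code]

Since 0.32 is far above 0.05, this confirms the conclusion above: the data give essentially no evidence against [tex]\mu_0=60[/tex].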