
One of the interesting properties of the t-distribution is that the greater the degrees of freedom, the more closely the t-distribution resembles the standard normal distribution. As the degrees of freedom increases, the area in the tails of the t-distribution decreases while the area near the center increases. (The tails consist of the extreme values of the distribution, both negative and positive.) Eventually, when the degrees of freedom reaches 30 or more, the t-distribution and the standard normal distribution are extremely similar.

The following figures illustrate the relationship between the t-distribution with different degrees of freedom and the standard normal distribution. The first figure shows the standard normal and the t-distribution with two degrees of freedom (df). Notice how the t-distribution is significantly more spread out than the standard normal distribution.


The standard normal and t-distribution with two degrees of freedom.

The graph in the first figure shows that the t-distribution has more area in the tails and less area around the mean than the standard normal distribution. (The standard normal distribution curve is shown with square markers.) As a result, more extreme observations (positive and negative) are likely to occur under the t-distribution than under the standard normal distribution.


The standard normal and t-distribution with ten degrees of freedom.

The second figure compares the standard normal distribution with the t-distribution with ten degrees of freedom. The two are much closer to each other here than in the first figure.


The standard normal and t-distribution with 30 degrees of freedom.

As you can see in the third figure, with 30 degrees of freedom, the t-distribution and the standard normal distribution are almost indistinguishable.
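The convergence described above can be checked numerically. The sketch below (plain Python, standard library only; the function names are made up for this illustration) evaluates the t-distribution's density formula directly and compares the density at the fairly extreme value x = 3 for several degrees of freedom. The tail density shrinks toward the standard normal value as the degrees of freedom grow.

```python
import math

def t_pdf(x, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    coef = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return coef * (1 + x * x / df) ** (-(df + 1) / 2)

def normal_pdf(x):
    """Density of the standard normal distribution."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Compare tail density at x = 3 for the three df values used in the figures.
for df in (2, 10, 30):
    print(f"df={df:2d}: t density at 3 = {t_pdf(3, df):.5f}")
print(f"standard normal density at 3 = {normal_pdf(3):.5f}")
```

With df = 2 the tail density is several times the normal value; by df = 30 the gap is small, matching the figures.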

Chi-square goodness of fit tests are often used in genetics. One common application is to check whether two genes are linked (i.e., whether their alleles assort independently). When genes are linked, the allele inherited for one gene affects the allele inherited for another gene.

Suppose that you want to know if the genes for pea texture (R = round, r = wrinkled) and color (Y = yellow, y = green) are linked. You perform a dihybrid cross between two heterozygous (RrYy) pea plants. The hypotheses you’re testing with your experiment are:

  • Null hypothesis (H0): The population of offspring have an equal probability of inheriting all possible genotypic combinations.
    • This would suggest that the genes are unlinked.
  • Alternative hypothesis (Ha): The population of offspring do not have an equal probability of inheriting all possible genotypic combinations.
    • This would suggest that the genes are linked.

You observe 100 peas:

  • 78 round and yellow peas
  • 6 round and green peas
  • 4 wrinkled and yellow peas
  • 12 wrinkled and green peas

Step 1: Calculate the expected frequencies

To calculate the expected values, you can make a Punnett square. If the two genes are unlinked, the probability of each genotypic combination is equal.

|      | RY   | Ry   | rY   | ry   |
|------|------|------|------|------|
| RY   | RRYY | RRYy | RrYY | RrYy |
| Ry   | RRYy | RRyy | RrYy | Rryy |
| rY   | RrYY | RrYy | rrYY | rrYy |
| ry   | RrYy | Rryy | rrYy | rryy |

The expected phenotypic ratios are therefore 9 round and yellow: 3 round and green: 3 wrinkled and yellow: 1 wrinkled and green.
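The Punnett square logic can also be sketched programmatically. The snippet below (a minimal illustration; the helper name is made up for this example) enumerates the 16 equally likely gamete pairings from two RrYy parents and tallies phenotypes, assuming R (round) and Y (yellow) are dominant:

```python
from itertools import product
from collections import Counter

gametes = ["RY", "Ry", "rY", "ry"]  # the four gametes an RrYy parent can produce

def phenotype(g1, g2):
    # R (round) and Y (yellow) are dominant over r (wrinkled) and y (green).
    texture = "round" if "R" in (g1[0], g2[0]) else "wrinkled"
    color = "yellow" if "Y" in (g1[1], g2[1]) else "green"
    return f"{texture} and {color}"

# Tally phenotypes over all 16 equally likely gamete combinations.
counts = Counter(phenotype(g1, g2) for g1, g2 in product(gametes, gametes))
print(counts)  # 9 : 3 : 3 : 1
```

The tally reproduces the 9:3:3:1 phenotypic ratio derived from the square.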

From this, you can calculate the expected phenotypic frequencies for 100 peas:

| Phenotype           | Observed | Expected              |
|---------------------|----------|-----------------------|
| Round and yellow    | 78       | 100 × (9/16) = 56.25  |
| Round and green     | 6        | 100 × (3/16) = 18.75  |
| Wrinkled and yellow | 4        | 100 × (3/16) = 18.75  |
| Wrinkled and green  | 12       | 100 × (1/16) = 6.25   |

Step 2: Calculate chi-square

| Phenotype           | Observed | Expected | O − E  | (O − E)² | (O − E)² / E |
|---------------------|----------|----------|--------|----------|--------------|
| Round and yellow    | 78       | 56.25    | 21.75  | 473.06   | 8.41         |
| Round and green     | 6        | 18.75    | −12.75 | 162.56   | 8.67         |
| Wrinkled and yellow | 4        | 18.75    | −14.75 | 217.56   | 11.60        |
| Wrinkled and green  | 12       | 6.25     | 5.75   | 33.06    | 5.29         |

Χ2 = 8.41 + 8.67 + 11.60 + 5.29 = 33.97
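The calculation in Steps 1 and 2 can be reproduced in a few lines. This is a minimal sketch in plain Python (the variable names are illustrative): it derives the expected counts from the 9:3:3:1 ratio and sums the (O − E)² / E terms.

```python
# Observed phenotype counts from the 100 peas, and the 9:3:3:1 expected ratio.
observed = {"round and yellow": 78, "round and green": 6,
            "wrinkled and yellow": 4, "wrinkled and green": 12}
ratio = {"round and yellow": 9, "round and green": 3,
         "wrinkled and yellow": 3, "wrinkled and green": 1}

n = sum(observed.values())  # 100 peas in total
chi_square = 0.0
for phen, o in observed.items():
    e = n * ratio[phen] / 16  # expected count under the null hypothesis
    chi_square += (o - e) ** 2 / e

print(f"chi-square = {chi_square:.2f}")  # chi-square = 33.97
```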

Step 3: Find the critical chi-square value

Since there are four groups (round and yellow, round and green, wrinkled and yellow, wrinkled and green), there are three degrees of freedom.

For a test of significance at α = .05 and df = 3, the Χ2 critical value is 7.82.

Step 4: Compare the chi-square value to the critical value

Χ2 = 33.97

Critical value = 7.82

The Χ2 value is greater than the critical value.

Step 5: Decide whether to reject the null hypothesis

The Χ2 value is greater than the critical value, so we reject the null hypothesis that the population of offspring have an equal probability of inheriting all possible genotypic combinations. There is a significant difference between the observed and expected phenotypic frequencies (p < .05).

The data support the alternative hypothesis that the offspring do not have an equal probability of inheriting all possible genotypic combinations, which suggests that the genes are linked.

Does the shape of a t-distribution change with the degrees of freedom?

The shape of the t-distribution depends on the degrees of freedom. Curves with more degrees of freedom are taller and have thinner tails, but all t-distributions have “heavier tails” than the z-distribution. The more degrees of freedom a curve has, the more it resembles a z-distribution.

What happens to the shape of the t-distribution as the sample size decreases?

The Student’s t-distribution is generally bell-shaped, but with smaller sample sizes it shows increased variability (it is flatter). In other words, the distribution is less peaked than a normal distribution and has thicker tails.

How do degrees of freedom affect the t statistic?

The t distribution has less spread as the number of degrees of freedom increases because the certainty of the estimate increases. Imagine repeatedly sampling the population and calculating Student's t; the larger the sample size, the less the test statistic will vary between samples.
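The thought experiment above can be illustrated with a small simulation. The sketch below (plain Python, standard library only; the sample sizes, repetition count, and seed are arbitrary choices for this example) repeatedly draws samples from a standard normal population, computes the one-sample t statistic for each, and compares the spread of the statistic for a small and a larger sample size:

```python
import random
import statistics

def t_statistic(sample, mu=0.0):
    """One-sample t statistic: (mean - mu) / (s / sqrt(n))."""
    n = len(sample)
    return (statistics.mean(sample) - mu) / (statistics.stdev(sample) / n ** 0.5)

random.seed(42)  # fixed seed so the sketch is reproducible

def spread_of_t(sample_size, reps=2000):
    """Standard deviation of the t statistic across many repeated samples."""
    stats = [t_statistic([random.gauss(0, 1) for _ in range(sample_size)])
             for _ in range(reps)]
    return statistics.stdev(stats)

small, large = spread_of_t(6), spread_of_t(50)
print(f"spread of t with n = 6:  {small:.3f}")
print(f"spread of t with n = 50: {large:.3f}")
```

The statistic from the small samples varies noticeably more between repetitions than the one from the larger samples.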

How does the shape of the t-distribution change as the sample size increases?

As explained above, the shape of the t-distribution is affected by sample size. As the sample size grows, the t-distribution gets closer and closer to a normal distribution; in the limit, as the degrees of freedom approach infinity, the two are identical.