Calculating Simple Linear Regression
Simple linear regression is a procedure that provides an estimate of the value of a dependent variable (outcome) based on the value of an independent variable (predictor). Because that estimate carries a known degree of accuracy, regression analysis can be used to predict the value of one variable when the value of the other is known (Cohen & Cohen, 1983). The regression equation is a mathematical expression of the influence that a predictor has on a dependent variable, based on some theoretical framework. For example, in Exercise 14, Figure 14-1 illustrates the linear relationship between gestational age and birth weight. As shown in the scatterplot, there is a strong positive relationship between the two variables: more advanced gestational ages predict higher birth weights.
A regression equation can be generated with a data set containing subjects’ x and y values. Once this equation is generated, it can be used to predict future subjects’ y values, given only their x values. In simple or bivariate regression, predictions are made in cases with two variables. The score on variable y (dependent variable, or outcome) is predicted from the same subject’s known score on variable x (independent variable, or predictor).
Research Designs Appropriate for Simple Linear Regression
Research designs that may utilize simple linear regression include any associational design (Gliner et al., 2009). The variables involved in the design are attributional, meaning the variables are characteristics of the participant, such as health status, blood pressure, gender, diagnosis, or ethnicity. Regardless of the nature of the variables, the dependent variable submitted to simple linear regression must be continuous, measured at the interval or ratio level.
Statistical Formula and Assumptions
Use of simple linear regression involves the following assumptions (Zar, 2010):
- Normal distribution of the dependent (y) variable
- Linear relationship between x and y
- Independent observations
- No (or little) multicollinearity
- Homoscedasticity
Data that are homoscedastic are evenly dispersed both above and below the regression line across the full range of x, which indicates a linear relationship on a scatterplot. Homoscedasticity reflects equal variance of y at each level of x; in other words, for every value of x, the distribution of y values should have equal variability. If the data for the predictor and dependent variable are not homoscedastic, inferences made during significance testing could be invalid (Cohen & Cohen, 1983; Zar, 2010). Visual examples of homoscedasticity and heteroscedasticity are presented in Exercise 30.
In simple linear regression, the dependent variable is continuous, and the predictor can be any scale of measurement; however, if the predictor is nominal, it must be correctly coded. Once the data are ready, the parameters a and b are computed to obtain a regression equation. To understand the mathematical process, recall the algebraic equation for a straight line:
y = bx + a
where
y = the dependent variable (outcome)
x = the independent variable (predictor)
b = the slope of the line
a = the y-intercept (the point where the regression line intersects the y-axis)
No single regression line can be used to predict with complete accuracy every y value from every x value. In fact, you could draw an infinite number of lines through the scattered paired values (Zar, 2010). However, the purpose of the regression equation is to develop the line that allows the highest degree of prediction possible, the line of best fit. The procedure for developing the line of best fit is the method of least squares. The formulas for the slope (b) and y-intercept (a) of the regression equation are computed as follows. Note that once b is calculated, that value is inserted into the formula for a.
b = (nΣxy − ΣxΣy) / (nΣx² − (Σx)²)

a = (Σy − bΣx) / n
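These two formulas translate directly into code. The following is a minimal Python sketch of the hand-calculation method, using only the standard library; the function name simple_linear_regression is ours, for illustration only:

```python
# Minimal sketch of the least-squares formulas for b (slope) and a (intercept).
# Uses only the Python standard library; the function name is illustrative.

def simple_linear_regression(x, y):
    """Return (b, a) for the line y = bx + a fitted by least squares."""
    n = len(x)
    sum_x = sum(x)
    sum_y = sum(y)
    sum_x2 = sum(xi ** 2 for xi in x)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    a = (sum_y - b * sum_x) / n
    return b, a
```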
Hand Calculations
This example uses data collected from a study of students enrolled in a registered nurse to bachelor of science in nursing (RN to BSN) program (Mancini, Ashwill, & Cipher, 2014). The predictor in this example is the number of academic degrees obtained by the student prior to enrollment, and the dependent variable is the number of months it took the student to complete the RN to BSN program. The null hypothesis is "Number of degrees does not predict the number of months until completion of an RN to BSN program."
The data are presented in Table 29-1. A simulated subset of 20 students was selected for this example so that the computations would be small and manageable. In actuality, studies involving linear regression need to be adequately powered (Aberson, 2010; Cohen, 1988). Observe that the data in Table 29-1 are arranged in columns that correspond to the elements of the formula. The summed values in the last row of Table 29-1 are inserted into the appropriate place in the formula for b.
TABLE 29-1
NUMBER OF ACADEMIC DEGREES AND MONTHS TO COMPLETION IN AN RN TO BSN PROGRAM
Student ID   x (Number of Degrees)   y (Months to Completion)   x²   xy
1 1 17 1 17
2 2 9 4 18
3 0 17 0 0
4 1 9 1 9
5 0 16 0 0
6 1 11 1 11
7 0 15 0 0
8 0 12 0 0
9 1 15 1 15
10 1 12 1 12
11 1 14 1 14
12 1 10 1 10
13 1 17 1 17
14 0 20 0 0
15 2 9 4 18
16 2 12 4 24
17 1 14 1 14
18 2 10 4 20
19 1 17 1 17
20 2 11 4 22
Sum (Σ)   20   267   30   238
The computations for b and a are as follows:
Step 1: Calculate b.
From the values in Table 29-1, we know that n = 20, Σx = 20, Σy = 267, Σx² = 30, and Σxy = 238. These values are inserted into the formula for b, as follows:
b = [20(238) − (20)(267)] / [20(30) − (20)²]
b = (4760 − 5340) / (600 − 400)
b = −580 / 200
b = −2.9
Step 2: Calculate a.
From Step 1, we now know that b = −2.9, and we plug this value into the formula for a.
a = [267 − (−2.9)(20)] / 20
a = (267 + 58) / 20
a = 16.25
Step 3: Write the new regression equation:
ŷ = −2.9x + 16.25
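As a check on Steps 1 through 3, the sketch below applies the simple_linear_regression function from earlier to the Table 29-1 data (the x and y values are transcribed from the table):

```python
# Data from Table 29-1: x = number of degrees, y = months to completion.
x = [1, 2, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 2, 2, 1, 2, 1, 2]
y = [17, 9, 17, 9, 16, 11, 15, 12, 15, 12, 14, 10, 17, 20, 9, 12, 14, 10, 17, 11]

b, a = simple_linear_regression(x, y)
print(b, a)  # -2.9 16.25, matching the hand calculations
```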
Step 4: Calculate R.
The multiple R is defined as the correlation between the actual y values and the predicted y values using the new regression equation. The predicted y value using the new equation is represented by the symbol ŷ to differentiate it from y, which represents the actual y values in the data set. We can use our new regression equation from Step 3 to compute the predicted program completion time in months for each student, using the number of academic degrees earned prior to enrollment in the RN to BSN program. For example, Student 1 had earned 1 academic degree prior to enrollment, and the predicted months to completion for Student 1 is calculated as:
ŷ = −2.9(1) + 16.25
ŷ = 13.35
Thus, the predicted ŷ is 13.35 months. This procedure would be continued for the rest of the students, and the Pearson correlation between the actual months to completion (y) and the predicted months to completion (ŷ) would yield the multiple R value. In this example, R = 0.638. The higher the R, the more accurately the new regression equation predicts y, because the higher the correlation, the closer the actual y values are to the predicted ŷ values. Figure 29-1 displays the regression line, where the x axis represents possible numbers of degrees, and the y axis represents the predicted months to program completion (ŷ values).
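Continuing the running sketch, the multiple R can be verified by computing ŷ for every student and correlating the actual and predicted values. The helper pearson_r below is ours, written for illustration:

```python
import math

def pearson_r(u, v):
    """Pearson correlation between two equal-length sequences."""
    n = len(u)
    mean_u = sum(u) / n
    mean_v = sum(v) / n
    cov = sum((ui - mean_u) * (vi - mean_v) for ui, vi in zip(u, v))
    sd_u = math.sqrt(sum((ui - mean_u) ** 2 for ui in u))
    sd_v = math.sqrt(sum((vi - mean_v) ** 2 for vi in v))
    return cov / (sd_u * sd_v)

y_hat = [b * xi + a for xi in x]  # predicted months for each student
R = pearson_r(y, y_hat)
print(round(R, 3))  # 0.638
```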
FIGURE 29-1 REGRESSION LINE REPRESENTED BY NEW REGRESSION EQUATION.
Step 5: Determine whether the predictor significantly predicts y.
t = R√(n − 2) / √(1 − R²)
To know whether the predictor significantly predicts y, the slope must be tested against zero. In simple regression, this is most easily accomplished by using the R value from Step 4:
t = 0.638√(20 − 2) / √(1 − 0.407)
t = 3.52
The t value is then compared to the t probability distribution table (see Appendix A). The df for this t statistic is n − 2. The critical t value at alpha (α) = 0.05, df = 18 is 2.10 for a two-tailed test. Our obtained t was 3.52, which exceeds the critical value in the table, thereby indicating a significant association between the predictor (x) and outcome (y).
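This computation can be appended to the running Python sketch. The commented SciPy call is an assumption about an optional third-party package, not part of the hand-calculation method:

```python
import math

# t test of the slope against zero; R and x continue from the sketch above.
t = R * math.sqrt(len(x) - 2) / math.sqrt(1 - R ** 2)
print(round(t, 2))  # 3.52, exceeding the critical value of 2.10 at df = 18

# If the third-party SciPy package is installed, the exact two-tailed
# p value can be computed instead of consulting a t table:
# from scipy.stats import t as t_dist
# p = 2 * t_dist.sf(abs(t), df=len(x) - 2)
```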
Step 6: Calculate R².
After the statistical significance of the R value has been established, the value must be examined for clinical importance. This is accomplished by obtaining the coefficient of determination for regression, which simply involves squaring the R value. The R² represents the percentage of variance in y explained by the predictor. Cohen describes R² values of 0.02 as small, 0.15 as moderate, and 0.26 or higher as large effect sizes (Cohen, 1988). In our example, the R was 0.638, and, therefore, the R² was 0.407. Multiplying 0.407 × 100% indicates that 40.7% of the variance in months to program completion can be explained by knowing the student's number of earned academic degrees at admission (Cohen & Cohen, 1983).
The R² can be very helpful when testing more than one predictor in a regression model. Unlike R, the R² for one regression model can be compared with that of another regression model containing additional predictors (Cohen & Cohen, 1983). The R² is discussed further in Exercise 30.
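In the running sketch, the coefficient of determination is a single squaring operation:

```python
# Coefficient of determination; R continues from the earlier sketch.
R_squared = R ** 2
print(round(R_squared, 3))  # 0.407, i.e., 40.7% of the variance in y explained
```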
The standardized beta (β) is another statistic that represents the magnitude of the association between x and y. β has limits just like a Pearson r, meaning that the standardized β cannot be lower than −1.00 or higher than 1.00. This value can be calculated by hand but is best computed with statistical software. The standardized β is obtained by converting the x and y values to z scores and then correlating the x and y values using the Pearson r formula. The standardized β is often reported in the literature instead of the unstandardized b, because b has no lower or upper limits and therefore its magnitude cannot be judged directly. β, on the other hand, is interpreted as a Pearson r, and the descriptions of magnitude recommended by Cohen (1988) can be applied. In this example, the standardized β is −0.638. Thus, the magnitude of the association between x and y in this example is considered a large predictive association (Cohen, 1988).
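The z-score method described above can be sketched as follows, reusing pearson_r, x, and y from the earlier code; the helper z_scores is ours, for illustration:

```python
import math

def z_scores(v):
    """Convert a sequence of values to z scores (sample standard deviation)."""
    n = len(v)
    mean = sum(v) / n
    sd = math.sqrt(sum((vi - mean) ** 2 for vi in v) / (n - 1))
    return [(vi - mean) / sd for vi in v]

# Correlating the z-scored x and y values reproduces the standardized beta.
beta = pearson_r(z_scores(x), z_scores(y))
print(round(beta, 3))  # -0.638
```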
SPSS Computations
This is how our data set looks in SPSS.
[SPSS screenshot: data view showing the columns Number of Degrees and Number of Months to Completion]
Step 1: From the “Analyze” menu, choose “Regression” and “Linear.”
Step 2: Move the predictor, Number of Degrees, to the space labeled “Independent(s).” Move the dependent variable, Number of Months to Completion, to the space labeled “Dependent.” Click “OK.”
[SPSS screenshot: Linear Regression dialog box]
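For readers working in Python rather than SPSS, a rough equivalent of this procedure, assuming the third-party statsmodels package is installed and reusing x and y from the earlier sketch, is:

```python
import statsmodels.api as sm

X = sm.add_constant(x)      # adds the intercept (constant) term to the predictor
model = sm.OLS(y, X).fit()  # ordinary least squares fit
print(model.summary())      # reports R², the coefficients, t values, and p values
```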
Interpretation of SPSS Output
The following tables are generated from SPSS. The first table contains the multiple R and the R² values. The multiple R is 0.638, indicating that the correlation between the actual y values and the predicted y values using the new regression equation is 0.638. The R² is 0.407, indicating that 40.7% of the variance in months to program completion can be explained by knowing the student's number of earned academic degrees at enrollment.
Regression
[SPSS output: Model Summary table]
The second table contains the ANOVA table. As presented in Exercises 18 and 33, ANOVA is usually performed to test for differences between group means. However, ANOVA can also be performed for regression, where the null hypothesis is that knowing the value of x provides no information about y. This table indicates that knowing the value of x explains a significant amount of variance in y. The contents of the ANOVA table are rarely reported in published manuscripts, because the significance of each predictor is presented in the last SPSS table, titled "Coefficients" (see below).
[SPSS output: ANOVA table]
The third table contains the b and a values, the standardized beta (β), t, and the exact p value. The a is listed in the first row, next to the label "Constant." The b is listed in the second row, next to the name of the predictor. The remaining information that is important to extract when interpreting regression results can be found in the second row. The standardized β is −0.638. This value has limits just like a Pearson r, meaning that the standardized β cannot be lower than −1.00 or higher than 1.00. The t value is −3.516, and the exact p value is 0.002.
[SPSS output: Coefficients table]
Final Interpretation in American Psychological Association (APA) Format
The following interpretation is written as it might appear in a research article, formatted according to APA guidelines (APA, 2010). Simple linear regression was performed with number of earned academic degrees as the predictor and months to program completion as the dependent variable. The student's number of degrees significantly predicted months to completion among students in an RN to BSN program, β = −0.638, p = 0.002, and R² = 40.7%. Higher numbers of earned academic degrees significantly predicted shorter program completion time.
Study Questions
- If you have access to SPSS, compute the Shapiro-Wilk test of normality for months to completion (as demonstrated in Exercise 26). If you do not have access to SPSS, plot the frequency distributions by hand. What do the results indicate?
- State the null hypothesis for the example where number of degrees was used to predict time to BSN program completion.
- In the formula y = bx + a, what does “b” represent?
- In the formula y = bx + a, what does “a” represent?
- Using the new regression equation, ŷ = −2.9x + 16.25, compute the predicted months to program completion if a student’s number of earned degrees is 0. Show your calculations.
- Using the new regression equation, ŷ = −2.9x + 16.25, compute the predicted months to program completion if a student’s number of earned degrees is 2. Show your calculations.
- What was the correlation between the actual y values and the predicted y values using the new regression equation in the example?
- What was the exact likelihood of obtaining a t value at least as extreme as the one actually observed, assuming that the null hypothesis is true?
- How much variance in months to completion is explained by knowing the student’s number of earned degrees?
- How would you characterize the magnitude of the R2 in the example? Provide a rationale for your answer.