Week 6 – Assignment: Interpret a Correlation Analysis

Instructions
For this assignment, you will use the six-step hypothesis testing process (noted below) to run and interpret a correlation analysis using SPSS. The following vignette will inform you of the context for this assignment. A data file is provided in the week’s resources for use in this assignment. Also review the section on Presentation of Statistical Results and Explaining Quantitative Findings in a Narrative Report in the NCU School of Business Best Practice Guide for Quantitative Research Design and Methods in Dissertations.
A manager is interested in better understanding job satisfaction by studying the associations between a number of variables. These variables are age, years of experience, level of education, employee engagement, job satisfaction, and job performance levels.
Part 1
She thinks there is a relationship between job satisfaction and
· years of experience
· educational level
· employee engagement
· job performance
1. State the null and alternative hypotheses.
2. Identify critical values for the test statistic and state the decision rule concerning when to reject or fail to reject the null hypothesis of no relationship.
3. Run the Pearson correlation analysis and include the correlation matrix in your assignment response.
4. Report and interpret the correlation coefficient and p-value for each variable paired with job satisfaction.
5. Explain what decisions the manager might make using these findings.
Part 2
She thinks that younger employees will perform at a higher level, on average.
1. State the null and alternative hypothesis sets.
2. Select the significance level.
3. Select the test statistic and calculate its value.
4. Identify critical values for the test statistic and state the decision rule concerning when to reject or fail to reject the null hypothesis.
5. Compare the calculated and critical values to reach a conclusion for the null hypothesis.
6. Explain what decisions the manager might make using these findings.
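Steps 3 through 5 above can be sketched numerically. The following Python fragment is illustrative only: the correlation value is made up (the real r must come from your SPSS output for age and performance), the critical value is a standard t-table lookup, and it frames the manager's hunch as a test of the age–performance correlation, which is one reasonable reading of the vignette.

```python
import math

# Hypothetical inputs for illustration -- the real r comes from SPSS output.
r = -0.30   # made-up sample correlation between age and performance
n = 30      # sample size in the week's data file
alpha = 0.05

# Step 3: the test statistic for H0: rho = 0 is t = r * sqrt(n-2) / sqrt(1 - r^2)
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Step 4: two-tailed critical value for df = n - 2 = 28 at alpha = .05
# (from a standard t table; approximately 2.048)
t_critical = 2.048

# Step 5: decision rule -- reject H0 only if |t| exceeds the critical value
reject_null = abs(t) > t_critical
print(round(t, 3), reject_null)
```

With these made-up numbers, |t| falls short of the critical value, so the null hypothesis of no relationship would not be rejected; with the actual SPSS output the conclusion may differ.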
Length: 4 to 6 pages, not including title and reference pages
References: Include a minimum of 3 scholarly resources.
Your paper should demonstrate thoughtful consideration of the ideas and concepts presented in the course and provide new thoughts and insights relating directly to this topic. Your response should reflect scholarly writing and current APA standards. Be sure to adhere to Northcentral University's Academic Integrity Policy.
References
Correlation. (n.d.). Week 6 assignment data [SPSS data file].
Knapp, H. (2016). Correlation and regression – Pearson [Video]. SAGE Research Methods.
McCormick, K., Salcedo, J., & Poh, A. (2015). SPSS statistics for dummies. John Wiley & Sons.
NCU School of Business Best Practice Guide for Quantitative Research Design and Methods in Dissertations
SAGE Research Methods Video
Video Title: Correlation and Regression – Pearson
Publication Date: Sep. 30, 2016
Publishing Company: SAGE Publications, Inc.
City: Thousand Oaks, United States
ISBN: 9781506359212
DOI: https://dx.doi.org/10.4135/9781506359212
(c) SAGE Publications Inc., 2017
HERSCHEL KNAPP: Welcome to Practical Statistics for Nursing Using SPSS. This video shows how to process the Pearson correlation and regression. You can watch the entire video or use the time slider to navigate directly to any time point. [Correlation and Regression – Pearson, Overview] Correlation and regression analysis
HERSCHEL KNAPP [continued]: computes the nature of the relationship between two continuous variables. The relationship can be characterized using two parameters, direction and strength. The correlation coefficient ranges between -1 and +1. The sign of the coefficient indicates the direction of the correlation. Positive correlations occur when the variables
HERSCHEL KNAPP [continued]: move in the same direction. When x goes up, y goes up. Or when x goes down, y goes down. Negative correlations occur when the variables move in opposite directions. When x goes up, y goes down. Or when x goes down, y goes up.
HERSCHEL KNAPP [continued]: The value of the correlation coefficient indicates the strength of the correlation. Values nearer to -1 or +1 are stronger than values nearer to zero. To better conceptualize the data, a scatterplot with a regression line is useful. Each point represents two scores gathered from each individual.
HERSCHEL KNAPP [continued]: For example, this point represents two scores gathered on one of the patients surveyed. The patient had a length of stay of 12 days and a depression score of 58. The regression line can be thought of as the average pathway through the points. The positive slope suggests that lower length of stay is associated with lower depression scores,
HERSCHEL KNAPP [continued]: and higher length of stay is associated with higher depression scores for this group of patients. To better comprehend the notion of correlation, consider these three examples. Here we see a strong positive correlation between number of homework hours and quiz scores, where lower homework hours are paired with lower quiz scores
HERSCHEL KNAPP [continued]: and higher homework hours are paired with higher quiz scores. In the second scatterplot, we see a strong negative correlation between alcohol consumption and quiz scores, where higher alcohol consumption is paired with lower quiz scores and lower alcohol consumption is paired with higher quiz scores.
HERSCHEL KNAPP [continued]: Finally, we see a fairly weak correlation between baseball-throwing skills and quiz scores, where baseball-throwing skills have virtually no correlation with quiz scores. [Correlation and Regression – Pearson, Pretest Checklist] Before running a Pearson correlation or regression analysis, there are three pretest criteria
HERSCHEL KNAPP [continued]: that need to be met. First, the data for each of the two variables should be normally distributed. We can check for this by observing a histogram with a normal curve for each variable. The second and third criteria, linearity and homoscedasticity, can be verified by observing the scatterplot with the regression line.
HERSCHEL KNAPP [continued]: This example uses the dataset Ch 11 – Example 01 – Correlation and Regression.sav. This dataset contains three variables. Patient ID is a string variable, along with two continuous variables, Length of Stay
HERSCHEL KNAPP [continued]: and Depression scores for each patient. To check for normality, order histograms with normal curves for the two variables that will be involved in the correlation, Length of Stay and Depression. Click on Analyze, Descriptive Statistics, Frequencies. Move Length of Stay and Depression into Variables
HERSCHEL KNAPP [continued]: and click Charts. Select Histogram with Normal Curve. Click Continue, and uncheck Display Frequency Table. Click OK, and it'll process. The symmetrical curve
on the histogram for Length of Stay shows a normal distribution.
HERSCHEL KNAPP [continued]: And the curve on the histogram for Depression also shows a normal distribution. The pretest criterion of normality is satisfied. To finalize the pretest checklist, we'll order a scatterplot with a regression line. This will also give us a more comprehensive understanding of the relationship between Length of Stay and Depression.
HERSCHEL KNAPP [continued]: Click on Graphs, Chart Builder. In the Choose From list, select Scatter/Dot and select the Simple Scatter option. Drag Length of Stay to the x-axis and Depression to the y-axis. Click OK, and it'll process.
HERSCHEL KNAPP [continued]: To order the regression line, double-click on the scatterplot and click on the Add Fit Line at Total icon. In terms of linearity, we see that the points lie in a fairly straight line. There are no unexpected curves or twists in the arrangement of the points. This satisfies the linearity criterion.
HERSCHEL KNAPP [continued]: As for homoscedasticity, we see that the field of points is thicker in the middle and tapers at the ends. This satisfies the homoscedasticity criterion. [Correlation and Regression – Pearson, Test Run] To process a Pearson correlation analysis, click on Analyze, Correlate, Bivariate.
HERSCHEL KNAPP [continued]: Move Length of Stay and Depression to Variables. Click OK, and it'll process. [Correlation and Regression – Pearson, Results] The correlation table shows a strong positive correlation of 0.789 between Length of Stay and Depression. We also see that the p-value is less than 0.05,
HERSCHEL KNAPP [continued]: suggesting that this is a statistically significant correlation. This concludes this video.
7105 Week 6 data.sav
      Age     Experience   Education   Engagement   Satisfaction   Performance
 1    31.00    6.00        2.00        2.00         2.00           3.00
 2    47.00   14.00        3.00        4.00         4.00           4.00
 3    39.00   13.00        3.00        2.00         2.00           2.00
 4    29.00    8.00        2.00        2.00         3.00           3.00
 5    41.00    6.00        1.00        4.00         4.00           5.00
 6    49.00   15.00        2.00        3.00         4.00           4.00
 7    29.00    5.00        2.00        3.00         4.00           5.00
 8    47.00   15.00        2.00        3.00         3.00           5.00
 9    31.00    3.00        2.00        4.00         2.00           2.00
10    34.00   10.00        1.00        1.00         2.00           3.00
11    29.00    8.00        2.00        4.00         5.00           4.00
12    29.00    9.00        3.00        2.00         3.00           5.00
13    35.00    2.00        1.00        2.00         3.00           1.00
14    40.00   11.00        1.00        4.00         4.00           5.00
15    27.00    8.00        3.00        5.00         2.00           1.00
16    38.00   12.00        1.00        2.00         3.00           2.00
17    29.00    9.00        2.00        5.00         5.00           5.00
18    43.00   15.00        3.00        2.00         2.00           2.00
19    25.00    9.00        3.00        4.00         3.00           4.00
20    35.00   14.00        2.00        5.00         4.00           5.00
21    36.00    1.00        1.00        5.00         5.00           5.00
22    25.00    4.00        1.00        4.00         4.00           3.00
23    42.00   12.00        3.00        4.00         5.00           4.00
24    32.00   10.00        1.00        5.00         4.00           5.00
25    40.00   11.00        2.00        4.00         4.00           4.00
26    50.00   15.00        1.00        5.00         5.00           4.00
27    28.00    8.00        3.00        1.00         2.00           2.00
28    26.00    7.00        1.00        5.00         5.00           4.00
29    36.00    4.00        2.00        1.00         1.00           1.00
30    47.00   12.00        3.00        2.00         1.00           2.00
7105 Week 6 data.sav (Variable View)

Name          Type     Width  Decimals  Label                Values           Missing  Columns  Align  Measure  Role
Age           Numeric  8      2         Employee age i…      None             None     8        Right  Scale    Input
Experience    Numeric  8      2         Years of experi…     None             None     8        Right  Scale    Input
Education     Numeric  8      2         Level of education   {1.00, High …    None     8        Right  Nominal  Input
Engagement    Numeric  8      2         Employee Enga…       {1.00, Very l…   None     8        Right  Ordinal  Input
Satisfaction  Numeric  8      2         Job Satisfaction     {1.00, Very …    None     8        Right  Ordinal  Input
Performance   Numeric  8      2         Job Performance      {1.00, Low p…    None     8        Right  Ordinal  Input
Chapter 16
Showing Relationships between Continuous Dependent and Independent Variables

In This Chapter
▶ Viewing relationships
▶ Running the bivariate procedure
▶ Running the linear regression procedure
▶ Making predictions

The two most commonly used statistical techniques to analyze relationships between continuous variables are the Pearson correlation and linear regression.

Many people use the term correlation to refer to the idea of a relationship between variables or a pattern. This view of the term correlation is correct, but correlation also refers to a specific statistical technique. Pearson correlations are used to study the relationship between two continuous variables. For example, you may want to look at the relationship between height and weight, and you may find that as height increases, so does weight. In other words, in this example, the variables are correlated with each other because changes in one variable impact the other.

Whereas correlation just tries to determine if two variables are related, linear regression takes this one step further and tries to predict the values of one variable based on another (so if you know someone’s height, you can make an intelligent prediction for that person’s weight). Of course, most of the time you wouldn’t make a prediction based on just one independent variable (height); instead, you would typically use several variables that you deemed important (age, gender, BMI, and so on).
McCormick, K., & Salcedo, J. (2015). SPSS statistics for dummies. John Wiley & Sons.
This chapter does not address how to create scatterplots, because we cover those in Chapter 12. However, you need to create scatterplots before using the correlation and linear regression procedures because these techniques are only appropriate when you have linear relationships.
Running the Bivariate Procedure

Correlations determine the similarity or difference in the way two continuous variables change in value from one case (row) to another through the data. As you can see in Figure 16-1, a scatterplot visually shows the relationship between two continuous variables by displaying individual observations. (This example uses the employee_data.sav data file.)

Notice that, for the most part, low beginning salaries are associated with low current salaries, and that high beginning salaries are associated with high current salaries — this is called a positive relationship. Positive relationships show that as you increase in one variable, you increase in the other variable, so low numbers go with low numbers and high numbers go with high numbers. Using the example mentioned earlier, you may find that as height increases, so does weight — this would be an example of a positive relationship.
Figure 16-1: Scatterplot of current and beginning salary.
With negative relationships, as you increase in one variable, you decrease in the other variable, so low numbers on one variable go with high numbers on the other variable. An example of a negative relationship may be that the more depressed you are, the less exercise you do.
You can use the bivariate procedure, which we demonstrate here, whenever you have a positive or negative linear relationship. However, you shouldn’t use the bivariate procedure when you have a nonlinear relationship, because the results will be misleading.
Figure 16-2 shows a scatterplot of a nonlinear relationship. As an example of a nonlinear relationship, consider the variables test anxiety (on the x-axis) and test performance (on the y-axis). People with very little test anxiety may not take a test seriously (they don’t study) so they don’t perform well; likewise, people with a lot of test anxiety may not perform well because the test anxiety didn’t allow them to concentrate or even read test questions correctly. However, people with a moderate level of test anxiety should be motivated enough to study, but they don’t have too much test anxiety to suffer crippling effects.
Notice that in this example, as we increase in one variable, we increase in the other variable up to a certain point; then as we continue to increase in one variable, we decrease in the other variable. Clearly, there is a relationship between these two variables, but the bivariate procedure would indicate (incorrectly) that there is no relationship between these two variables. For this reason, it’s important to always create a scatterplot of any variables you want to correlate so that you don’t reach incorrect conclusions.
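This warning can be demonstrated in a few lines. The Python sketch below (illustrative, not from the book) builds a perfect quadratic relationship, yet the Pearson calculation reports no linear association at all:

```python
# Illustrative sketch: a perfect nonlinear (quadratic) relationship
# whose Pearson correlation is nonetheless exactly zero.
xs = [-2, -1, 0, 1, 2]
ys = [x ** 2 for x in xs]   # y is fully determined by x

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)

# Pearson r = sum of cross-products / sqrt(product of sums of squares)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
syy = sum((y - mean_y) ** 2 for y in ys)
r = sxy / (sxx * syy) ** 0.5
print(r)  # 0.0 -- "no linear relationship", despite a perfect curve
```

A scatterplot would reveal the parabola instantly, which is exactly why the book insists on plotting before correlating.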
Figure 16-2: A scatterplot of a nonlinear relationship.
Although a scatterplot visually shows the relationship between two continuous variables, the Pearson correlation coefficient is used to quantify the strength and direction of the relationship between continuous variables. The Pearson correlation coefficient is a measure of the extent to which there is a linear (straight line) relationship between two variables. It has values between –1 and +1, so that the larger the absolute value, the stronger the correlation. As an example, a correlation of +1 indicates that the data fall on a perfect straight line sloping upward (positive relationship), while a correlation of –1 would represent data forming a straight line sloping downward (negative relationship). A correlation of 0 indicates there is no straight-line relationship at all (which is what we would find in Figure 16-2).
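The ±1 endpoints of that definition are easy to verify by hand. This illustrative Python fragment (not part of the book's SPSS workflow; the data are invented) computes the coefficient from the sum of cross-products and shows that perfectly linear data hit the extremes exactly:

```python
# Pearson r computed from its definitional formula:
# r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))
def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

xs = [1, 2, 3, 4, 5]
print(pearson(xs, [2 * x + 1 for x in xs]))   # 1.0: perfect upward line
print(pearson(xs, [10 - 3 * x for x in xs]))  # -1.0: perfect downward line
```

Real data, of course, fall somewhere strictly between the two extremes, which is what the significance test later in the chapter evaluates.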
To perform a correlation, follow these steps:
1. From the main menu, choose File ➪ Open ➪ Data and load the employee_data.sav data file.
The file is not in the SPSS installation directory. You have to download it from this book’s companion website.
This file contains the employee information from a bank in the 1960s and has 10 variables and 474 cases.
2. Choose Analyze ➪ Correlate ➪ Bivariate.
The Bivariate Correlations dialog box, shown in Figure 16-3, appears.
Figure 16-3: The Bivariate Correlations dialog box.
In this example, we want to study whether current salary is related to beginning salary, months on the job, and previous job experience. Notice that there is no designation of dependent and independent variables. Correlations will be calculated on all pairs of variables listed.
3. Select the variables salary, salbegin, jobtime, and prevexp, and place them in the Variables box, as shown in Figure 16-4.
You can choose up to three kinds of correlations. The most common form is the Pearson correlation, which is the default. Pearson is used for continuous variables, while Spearman and Kendall’s tau‐b (less common) are used for nonnormal data or ordinal data, as relationships are evaluated after the original data have been transformed into ranks.
If you want, you can click the Options button and decide what is to be done about missing values and tell SPSS Statistics whether you want to calculate the standard deviations.
4. Click OK.
SPSS calculates the correlations between the variables.
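An aside on the Spearman option mentioned in step 3: it amounts to ranking each variable and then computing an ordinary Pearson correlation on the ranks. A hedged Python sketch (illustrative only, not the book's SPSS procedure; the example data are made up):

```python
# Spearman as "Pearson on ranks": ranks tame a monotonic but nonlinear
# relationship that the raw Pearson coefficient understates.
def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def ranks(values):
    # Rank from 1..n; ties receive the average of their ranks
    ordered = sorted(values)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == x) / ordered.count(x)
            for x in values]

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 100]            # monotonic, but far from a straight line
print(pearson(ranks(x), ranks(y)))  # 1.0: a perfect monotonic relationship
print(round(pearson(x, y), 3))      # noticeably below 1 on the raw values
```

This is why Spearman is the recommended choice for ordinal or markedly non-normal data: it asks only whether the ordering agrees, not whether the points form a line.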
Statistical tests are used to determine whether a relationship between two variables is statistically significant. In the case of correlations, we want to test whether the correlation differs from zero (zero indicates no linear association). Figure 16-5 is a standard Correlations table. First, notice that the table
Figure 16-4: The completed Bivariate Correlations dialog box.
is symmetric, so the same information is represented above and below the major diagonal. Also, notice that the correlations in the major diagonal are 1, because these are the correlations of each variable with itself.
The Correlations table provides three pieces of information:
✓ The Pearson Correlation, which will range from +1 to –1. The further away from 0, the stronger the relationship.
✓ The two‐tailed significance level. All correlations with a significance level less than 0.05 will have an asterisk next to the coefficient.
✓ N, which is the sample size.
In our data, we have a very strong positive correlation (0.880) that is statistically significant between current and beginning salary. Notice that the probability of the null hypothesis being true for this relationship is extremely small (less than 0.01). So, we reject the null hypothesis and conclude that there is a positive, linear relationship between these variables.
The correlations between months on the job and all the other variables were not statistically significant. Surprisingly, we do see that there is a statistically significant negative correlation, although weak (–0.097), between current salary and previous job experience.
Figure 16-5: The Correlations table.
Every statistical test has assumptions. The better you meet these assumptions, the more you can trust the results of the test. The Pearson correlation coefficient has three assumptions:
✓ You have continuous variables.
✓ The variables are linearly related.
✓ The variables are normally distributed.
Running the Linear Regression Procedure

Correlations allow you to determine if two continuous variables are linearly related to each other. So, for example, current and beginning salaries are positively related for employees. Regression analysis is about predicting the future (the unknown) based on data collected from the past (the known). Regression allows you to further quantify relationships by developing an equation predicting, for example, current salary based on beginning salary. Linear regression is a statistical technique that is used to predict a continuous dependent variable from one or more continuous independent variables.
When there is a single independent variable, the relationship between the independent variable and dependent variable can be visualized in a scatterplot, as shown in Figure 16-6.
Figure 16-6: A scatterplot of current and beginning salary with a regression line.
The line superimposed on the scatterplot is the best straight line that describes the relationship. The line has the equation y = mx + b, where m is the slope (the change in y for a one-unit change in x) and b is the y-intercept (the value of y when x is zero).
In the scatterplot, notice that many points fall near the line, but some are quite a distance from it. For each point, the difference between the value of the dependent variable and the value predicted by the equation (the value on the line) is called the residual (also known as the error). Points above the line have positive residuals (they were underpredicted), and points below the line have negative residuals (they were overpredicted); a point falling on the line has a residual of zero (a perfect prediction). The regression equation is built so that if you were to add up all the residuals (some will be positive and some will be negative), they would sum to zero.
Overpredictions and underpredictions constitute noise in the model, and noise is normal. All models have some error. A way of thinking about R Square (discussed and defined in the next section) is that this noise is “unexplained variance.” R Square, or the signal in our model, is a measure of “explained variance.” Add them up and you get the total variance, which we just call variance — the same variance that we use for measures like standard deviation. Conceptually, it makes a lot of sense.
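The residual and R Square ideas can be made concrete with a small least-squares sketch. This illustrative Python fragment (the data points are invented) fits y = mx + b and then splits the total variance into explained and unexplained parts exactly as described above:

```python
# Minimal least-squares sketch: fit y = m*x + b, then decompose variance
# into signal (explained) and residual noise (unexplained).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]   # made-up, roughly linear data

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - m * mx                   # the fitted line passes through (mx, my)

residuals = [y - (m * x + b) for x, y in zip(xs, ys)]
ss_res = sum(e ** 2 for e in residuals)          # unexplained variation
ss_tot = sum((y - my) ** 2 for y in ys)          # total variation
r_squared = 1 - ss_res / ss_tot                  # explained share

print(round(sum(residuals), 10))  # ~0: residuals always sum to zero
print(round(r_squared, 3))
```

The zero-sum property of the residuals is not an accident of these numbers; it falls out of the least-squares construction the paragraph above describes.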
To perform a linear regression, follow these steps:
1. From the main menu, choose File ➪ Open ➪ Data and load the employee_data.sav data file.
The file is not in the SPSS installation directory. You have to download it from this book’s companion website.
2. Choose Analyze ➪ Regression ➪ Linear.
The Linear Regression dialog box, shown in Figure 16-7, appears.
In this example, we want to predict current salary from beginning salary, months on the job, number of years of education, gender, and previous job experience. You can place the dependent variable in the Dependent box; this is the variable for which we want to set up a prediction equation. You can place the predictor variables in the Independent(s) box; these are the variables we’ll use to predict the dependent variable.
When only one independent variable is taken into account, the procedure is called a simple regression. If you use more than one independent variable, it’s called a multiple regression. All dialog boxes in SPSS provide for multiple regression.
3. Select the variable salary, and place it in the Dependent box.
4. Select the variables salbegin, jobtime, educ, gender, and prevexp, and place them in the Independent(s) box, as shown in Figure 16-8.
Note that gender is a dichotomous variable coded 0 for males and 1 for females, but it was added to the regression model. This is because a variable coded as a dichotomy (say, 0 and 1) can technically be considered a continuous variable because a continuous variable assumes that a one-unit change has the same meaning throughout the range of the scale. If a variable’s only possible codes are 0 and 1 (or 1 and 2, or whatever), then a one-unit change does mean the same change throughout the scale. Thus, dichotomous variables (for example, gender) can be used as predictor variables in regression. It also permits the use of nominal predictor variables if they’re converted into a series of dichotomous variables; this technique is called dummy coding.
The last choice we need to make to perform linear regression is that we need to specify which method we want to use. By default, the Enter regression method is used, which means that all independent variables will be entered into the regression equation simultaneously. This method works well when you have a limited number of independent variables or you have a strong rationale for including all your independent variables. However, at times, you may want to select predictors from a larger set of independent variables; in this case, you would request the Stepwise method so that the best predictors from a statistical sense are used.
Figure 16-7: The Linear Regression dialog box.
At this point, you can run the linear regression procedure, but we want to briefly point out the general uses of some of the other dialog boxes:
• The Statistics dialog box has many additional descriptive statistics, as well as statistics that determine variable overlap.
• The Plots dialog box is used to create graphs that allow you to better assess assumptions.
• The Save dialog box adds new variables (predicti