Creating an Outcomes Assessment Instrument that Incorporates Information Technology Dimensions

Robert M. Wolk
Management Department, Bridgewater State College
Bridgewater, MA 02325

Abstract

An outcomes assessment instrument was created to measure student satisfaction with overall program content and specific skill levels in a four-year college business degree program. The instrument includes a scale that measures student perceptions of effective teaching methods. Within the instrument are questions that can be used to measure the information technology dimensions of computing availability and student perceptions of their own computing and information technology skills. Teaching methods whose effectiveness was measured include spreadsheet analysis, Internet-based assignments, the use of PowerPoint, and the use of instructional technology.

Keywords: Outcomes assessment, higher education, information technology.

1. INTRODUCTION

The graduating class of 2004 has been exposed to information technology advances in higher education for a longer period of time, and at a level of sophistication, never before seen. The effectiveness of these advances requires continual measurement. The School of Management is currently developing instruments for outcomes assessment as required by its accrediting organization, the International Assembly for Collegiate Business Education (IACBE). Two direct and two indirect methods of evaluating outcomes are required. This survey will be used as one of the direct measurements. It is designed to measure undergraduate students' perceptions, in their final semester, of the effectiveness of teaching methods that utilize technology, and their satisfaction with those methods compared to more traditional teaching methodologies. The entire college experience will be examined, and the study will measure demographic differences.

After a concentrated effort, the School of Management obtained accreditation from the IACBE in 2002. Most of the full-time faculty were involved in the project. One of the conditions requested by the IACBE was for the College to establish an outcomes assessment process for student learning. The key problem for the College is that outcomes assessment has never been done before and none of the faculty has experience with the process.

The State College was founded 150 years ago as the source of elementary and secondary school teachers for the Commonwealth of Massachusetts. The School of Management at the College grew out of a desire by higher education officials in the early 1980's to have specialties at each of the state colleges. The specialty at the College was the Aviation Management degree, which later grew into the School of Management and Aviation Science and eventually into a sizeable School of Management with over 1,100 students on a campus of 5,000 full-time students and 5,000 part-time students.

The College is particularly suited for this research because the level of technology investment on the campus has been formidable. The entire campus has been wireless for two years, all classrooms have an integrated computer/multimedia console, there is a 50,000-square-foot technology center, and there are computer labs in every classroom building. The use of Blackboard is common among School of Management faculty members. Starting in 2004, all incoming students will be required to have a college-designated laptop computer.
2. THEORY OF OUTCOME ASSESSMENTS

Comparing objectives to outcomes is a common practice in human endeavors ranging from business plans to government budgets. Education reform movements in the 1980's and pressure from accrediting organizations brought outcome assessment policies to higher education (Manton and English, 2002). This shift is best expressed (Birenbaum and Dochy, 1996) as one from a culture of testing students to one of assessing students. When faculty are given responsibility for the design and implementation of assessment plans, their acceptance increases through the opportunity to understand and develop the tools to be incorporated; in a sense, faculties take ownership of the process. In comparing actual versus ideal forms of assessment, grades in major course work are the most common form actually used, while the tools preferred by both educators and practitioners are internships and surveys of employer satisfaction. Pre- and post-test questionnaires were found to be a highly effective assessment instrument for measuring information literacy (Fiegen et al., 2002). The combined perceptions of grade inflation and the lower performance of recent college graduates observed by employers have placed pressure on outcome assessment to reverse a trend of lower perceived educational results (Rybacki and Lattimore, 1999).

Research (Kuh and Hu, 2001) based on 18,344 undergraduate students at 71 four-year colleges who responded to the College Student Experiences Questionnaire found a positive relationship between frequent computer/information technology use and student educational effort. However, a study of MBA students (Tootoonchi et al., 2002) found that students ranked computer and information technology low as a teaching methodology.

Assessment Tools

Outcome assessment administrators have a variety of assessment tools (collected from Websites available at www2.acs.ncsu.edu) to choose from to ensure the quality of their educational product, including the following:

* Student self-assessment of what was learned and how effective the program of education is in meeting stated objectives.
* Student evaluations of individual courses, instructors, and the overall program may be used at the end of each semester.
* Exit interviews may reveal a deeper analysis of the student's overall education and how it could have been improved.
* Alumni surveys can provide valuable information, as alumni reflect on their educational experiences by completing annual surveys.
* Student portfolios are collections of significant achievements assembled during matriculation.
* Job placement analysis reveals important data on whether students were able to obtain employment in their field.
* Employer surveys identify areas needing improvement discovered by employers of students from the program.
* Student retention studies can identify the reasons why students may have failed to register for the following semester.
* Skills assessment of areas such as writing skills, communication, problem solving, mastery of technology, ethics, and mastery of the functional disciplines is measured with faculty, institutional, or nationally standardized testing.
* Capstone course evaluations evaluate the overall effectiveness of a college education.
* Pre-test to post-test evaluations reveal the effectiveness of course instruction.
* Curriculum and syllabus analysis reveals weaknesses in the program and identifies courses that may have poorly defined objectives.
* Videotape evaluation of performance is a useful tool for measuring presentation skills.

Validity of Student Evaluations is a Problem

The validity and reliability of student evaluations have been a source of debate and research since their growth in the 1970's, when 30% of colleges used them. At present, 80% of colleges use student evaluations. Over 2,000 studies have been written on the subject, making it the most researched area in higher education (Wilson, 1998). Student evaluations are a systematic method of collecting information to be used by faculty in improving their performance. Criticisms of student evaluations come from the abundance of available research. Professors were considered effective if they displayed strong communication skills demonstrating a concern for learning and motivation (Young and Shaw, 1999). Female students rate female faculty higher than male faculty (Bachen et al., 1999). Evaluations shift the responsibility for learning to faculty and administrators and away from students (Armstrong, 1998). The faculty trait of extroversion was the only significant predictor of student evaluations (Radmacher and Martin, 2001). Many faculty members believe that lenient grading policies positively affect student evaluations; research on bias does not support this contention, as students receive higher grades in courses when they learn more (Centra, 1993). Too much weight was given to student evaluations over other assessment tools, influencing faculty to teach to the evaluation (Read et al., 2001). There is a lack of evidence correlating highly rated instructors with higher levels of learning (Nerger et al., 1997).

Contrasting Perceptions

Dimensions of different perceptions by students and faculty indicate significant misunderstandings. The two groups disagree on the importance of evaluations, on the variables that influence faculty ratings, and on the influence of student evaluations on grading and careers. Perceptions differ on how seriously evaluations are taken by students and faculty. Faculty members are more likely to believe that evaluations will affect their careers, while students are less likely to believe that evaluations will have an effect on faculty careers, promotions, and tenure decisions. While faculty members believe students do not take evaluations seriously, students feel the opposite (Sojka et al., 2002). Perceptions also differ regarding student access to technology: university professors tend to underestimate that access and are comparatively late adopters of technology (Bates and Poole, 2003).

3. RESEARCH METHOD

Students taking the capstone course at the School of Management were surveyed in their classrooms during March and April of 2004. The capstone course is usually taken by management, accounting, and finance majors in the last semester of their senior year. The surveys were distributed to the students and the following instructions were announced to the group as a whole:

“Please do not sign your name anywhere on this survey. This survey is voluntary and anonymous. It will be used for outcomes assessment measurement. Your answers should reflect your total experience at Bridgewater, not just this course or management courses, but all your courses taken at Bridgewater. It is in three sections. The first page collects demographic information. The second page records your level of satisfaction for each item using the scale on the top of the page.
The third section records your impression of the effectiveness of teaching methods using the 5-1 scale at the top of the page.”

4. SUBJECTS

Students taking the capstone course (n=139) were surveyed, and 117 surveys were completed. The capstone course is taken in the final semester before graduation. All subjects are matriculating within the School of Management. Full-time students numbered 107 and part-time students numbered 10. The average age of the students was 24.31 years, and the mean number of years in attendance at the college was 4.18. Students who were the first in their immediate family to graduate from a four-year college made up 31.6% of the total. The gender of the students was 46.2% male and 53.8% female. Students living off campus comprised 72.6% of the total sample, while those living in campus housing totaled 27.4%.

5. INSTRUMENT

An instrument was created in January of 2004 in three sections. The first section asked for demographic information, and the second section obtained information on student satisfaction using a five-point Likert-type scale. The first seven questions related to overall satisfaction with program areas; the next thirteen questions measured student self-assessment of skills. The third section of the survey instrument measured student perceptions of effective instructional methods, employing a five-point Likert-type scale with 5 equaling “very effective” and 1 equaling “not effective.” The instrument was reviewed and accepted by the Outcomes Assessment Coordinator, the Accounting and Finance Department Chair, the Management Department Chair, and the Acting Dean of the School of Management. Two methods (objective exams and subjective exams) included several examples to avoid confusion. Several of the topics were adapted from previous research on MBA students (Tootoonchi et al., 2002) and from Hennessey and MacDonald (1993). A major difference from previous research is the updating of methods to reflect the use of information technology. Four questions were used to evaluate the effectiveness of information technology: Internet-based assignments, instructional technology, the use of spreadsheets, and the use of slides or PowerPoint.

6. FINDINGS

The question “The level of satisfaction in your computing ability” had an average rating of 4.07 on a Likert-type scale of 1 to 5 with 5 = excellent. This was higher than the overall satisfaction average of 3.95. The question “The level of computing availability of the college” had an average of 3.95, identical to the overall satisfaction average. The question measuring “the level of satisfaction in your information technology skills” had an average of 3.85, lower than the overall satisfaction average of 3.95. The questions on the effectiveness of instructional methods, on a five-point Likert-type scale with 5 being “very effective” and 1 being “very ineffective,” produced averages of 3.69 for the use of slides or PowerPoint by the instructor, 3.78 for the use of Internet-based assignments, and 3.9 for the use of spreadsheets. Correlations were run (Table 1) between the “overall level of instruction” item and the other satisfaction items; the correlations with computing ability and computing availability were lower than those of other variables in the scale. The variables “degree of preparation for major” and “preparation for future career” had the highest correlations, and academic advising was the third highest correlated item.
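As an illustration only, and not part of the original analysis, the following Python sketch shows how item-level Pearson correlations of the kind reported in Table 1 could be computed from raw survey responses. The column names and the five sample responses are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: correlating each satisfaction item with the
# "overall level of instruction" item, as in Table 1.
# All column names and values below are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

# Each row is one completed survey; values are 1-5 Likert-type ratings.
responses = pd.DataFrame({
    "overall_instruction":    [4, 5, 3, 4, 4],
    "prep_for_major":         [4, 5, 3, 4, 5],
    "prep_for_career":        [3, 4, 3, 4, 4],
    "computing_availability": [5, 4, 4, 3, 4],
})

# Correlate every other item with satisfaction in the overall level of instruction.
for item in responses.columns.drop("overall_instruction"):
    r, p = pearsonr(responses["overall_instruction"], responses[item])
    print(f"{item}: r = {r:.3f} (p = {p:.3f})")
```

In practice the same loop would run over all twenty satisfaction items from the completed surveys rather than the four placeholder columns shown here.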
Rankings (Table 2) of satisfaction in skills and other areas of the college experience found “management skills” and the understanding of business ethics to be the highest rated in satisfaction.

Table 1 - Correlations: Satisfaction with Overall Level of Instruction

Question                    Pearson Correlation
Degree of prep for major    0.551
Prep for future career      0.466
Library services            0.26
Computing availability      0.255
Finding courses             0.338
Academic Advising           0.409
Computing ability           0.238

Table 2 - Rankings of Satisfaction

Question                                                      Mean
Management skills                                             4.26
Understanding of business ethics                              4.26
Your computing ability                                        4.07
Your analytical ability                                       3.98
Problem solving abilities                                     3.97
Human resource skills                                         3.96
The overall level of instruction                              3.95
Degree of preparation for your major                          3.95
The level of computing availability of the college            3.95
Communication skills                                          3.95
Information technology skills                                 3.85
Marketing skills                                              3.79
Mastery of writing skills                                     3.75
Operations management skills                                  3.72
Accounting skills                                             3.7
Financial analysis skills                                     3.68
Preparation for your future career aims                       3.63
The level of services of the college library                  3.5
The level of satisfaction in finding the courses you wanted   3.47
Academic advising                                             3.13

In an analysis of means (five-point Likert-type scale with 5 = excellent or very effective) by subject concentration, computing availability was rated highest by MIS majors (4.38) and Marketing majors (4.27); the skill rating of computing ability was highest among Marketing concentrators (4.55); information technology skills were rated 4.25 by MIS students; and Internet-based assignments were rated highest by Operations majors (3.88).

7. DISCUSSION

The survey found (Table 3) that case studies, real world examples, and open classroom discussion were rated by subjects as the most effective instructional methods. This does not imply that information technology would play no role in these methods. Internet-based assignments were viewed positively, but it is unclear whether respondents related this question to programs such as Blackboard or to web-based assignments accompanying textbooks. Ratings varied by demographic variables, which requires additional research.

Table 3 - Ranking of Instructional Methods

Question                                                                          Mean
Real world examples                                                               4.5
The use of case studies                                                           4.42
Open classroom discussion                                                         4.33
Classroom lectures                                                                3.92
The use of spreadsheets for assignments                                           3.9
The use of objective (multiple choice, true or false, one correct answer) exams   3.9
The use of subjective exams                                                       3.89
The use of individual projects                                                    3.81
The use of group projects                                                         3.8
Internet based assignments                                                        3.78
The use of Socratic Questioning by the Instructor                                 3.75
The use of slides or PowerPoint by the instructor                                 3.69
The use of a research paper                                                       3.52
Instructional technology                                                          3.47
The use of guest speakers                                                         3.24
The use of field trips to outside sources of information                          3.11

Validity and Reliability

The survey was tested and re-tested on 60 upper-level business students who were not taking the capstone course. The testing occurred in February and March using the same procedures that were employed for the capstone survey. Of the 60 students, only 53 completed both surveys. The instrument was found to be both reliable and valid. Internal reliability analysis of the twenty items in the satisfaction scale produced an alpha coefficient of .8779 (n=115). The sixteen items in the effectiveness scale produced an alpha coefficient of .7893 (n=100).
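For illustration only, and not code from the study, the short Python sketch below shows how an internal-reliability coefficient of this kind could be computed. It implements the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), on a small set of hypothetical Likert-scale responses.

```python
# Minimal sketch of Cronbach's alpha for a multi-item Likert scale.
# The sample matrix is hypothetical; the study's scales had 20 and 16 items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items (1-5 ratings)."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students rating a 4-item satisfaction scale.
sample = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(sample):.4f}")
```

Applied to the full response matrices for the twenty-item satisfaction scale and the sixteen-item effectiveness scale, this computation would yield coefficients comparable to the .8779 and .7893 values reported above.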
8. REFERENCES

Anonymous, 2000, “IACBE Self Study Guide.” The International Assembly for Collegiate Business Education.

Armstrong, J. Scott, 1998, “Are Student Ratings of Instruction Useful?” The American Psychologist, 53, pp. 1223-1224.

Bachen, Christine, Moira M. McLaughlin, and Sara S. Garcia, 1999, “Assessing the Role of Gender in College Students' Evaluations of Faculty.” Communication Education, 48, p. 193.

Bates, A.W. and Gary Poole, 2003, Effective Teaching With Technology in Higher Education: Foundations for Success. Jossey-Bass, San Francisco.

Birenbaum, Menucha, and Filip Dochy, 1996, Alternatives in Assessment of Achievements, Learning Processes and Prior Knowledge. Kluwer Academic Publishers, Norwell, MA.

Centra, John A., 1993, Reflective Faculty Evaluation. Jossey-Bass, San Francisco.

Fiegen, A.M., Bennett Cherry, and Kathleen Watson, 2002, “Reflections on Collaboration: Learning Outcomes and Information Literacy Assessment in the Business Curriculum.” Reference Services Review, 30(4), pp. 307-318.

Kuh, George D. and Shouping Hu, 2001, “The Relationships between Computer and Information Technology Use, Selected Learning and Personal Development Outcomes, and Other College Experiences.” Journal of College Student Development, 42(3), pp. 217-233.

Manton, Edgar J. and Donald E. English, 2002, “The College of Business and Technology's Course Embedded Student Outcomes Assessment Process.” College Student Journal, 36, pp. 261-270.

Montgomery, Kathleen, 2002, “Authentic Tasks and Rubrics: Going Beyond Traditional Assessments in College Teaching.” College Teaching, 50(1), pp. 34-40.

Nerger, Janice L., Wayne Viney, and Robert Riedel, 1997, “Student Ratings of Teacher Effectiveness: Use and Misuse.” The Midwest Quarterly, 38, pp. 218-236.

Radmacher, Sally A. and David J. Martin, 2001, “Identifying Significant Predictors of Student Evaluations of Faculty Through Hierarchical Regression Analysis.” The Journal of Psychology, 135, pp. 259-268.

Read, William J., Dasaratha V. Rama, and K. Raghunandan, 2001, “The Relationship Between Student Evaluations of Teaching and Faculty Evaluations.” Journal of Education for Business, 76, pp. 189-198.

Rybacki, D. and Dan Lattimore, 1999, “Assessment of Undergraduate and Graduate Programs.” Public Relations Review, 25, pp. 65-75.

Sojka, J., Ashok K. Gupta, and Dawn R. Deeter-Schmelz, 2002, “Student and Faculty Perceptions of Student Evaluations of Teaching: A Study of Similarities and Differences.” College Teaching, 50, pp. 44-57.

Tootoonchi, Ahmad, Paul Lyons, and Abdalla Hagan, 2002, “MBA Students' Perceptions of Effective Teaching Methodologies and Instructor Characteristics.” International Journal of Commerce & Management, 12, pp. 79-94.

Wilson, Robin, 1998, “New Research Casts Doubt on Value of Student Evaluations of Professors.” Chronicle of Higher Education, January, pp. A1-A12.

Young, Suzanne and Dale G. Shaw, 1999, “Profiles of Effective College and University Teachers.” Journal of Higher Education, 70, pp. 670-688.

www2.acs.ncsu.edu.

Appendix: Survey Instrument for Outcomes Assessment

SECTION I: DEMOGRAPHICS

Circle the word that best describes yourself for each question.

1. Are you a full-time or a part-time student?
2. Are you male or female?
3. Do you live on campus or do you commute?
4. How many years have you been attending the College? _______
5. Are you a citizen of the USA? Yes or no
6. Which of the following best describes your ethnicity? White, African American, Hispanic, Black African, Native American, Asian, Other____________
7. What is your age? _________
8. Has an immediate family member ever graduated from a 4-year college? Yes or no
9. What is the area of your concentration (state 2 if double major)? __________________________

SECTION II: CONTENT RATINGS OF THE OVERALL PROGRAM. PLEASE RATE THE OVERALL PROGRAM AT THIS STATE COLLEGE FOR EACH OF THE STATEMENTS BELOW.

Use the following scale: 5=Excellent 4=Good 3=Average 2=Poor 1=Failure

1. The overall level of instruction. _________
2. Degree of preparation for your major. _________
3. Preparation for your future career aims. _________
4. The level of services of the college library. _________
5. The level of computing availability of the college. _________
6. The level of satisfaction in finding the courses you wanted. _________
7. The level of satisfaction in academic advising. _________
8. The level of satisfaction in your computing ability. _________
9. The level of satisfaction in your analytical ability. _________
10. The level of satisfaction in your problem solving abilities. _________
11. The level of satisfaction in your mastery of writing skills. _________
12. The level of satisfaction in your communication skills. _________
13. The level of satisfaction in your management skills. _________
14. The level of satisfaction in your accounting skills. _________
15. The level of satisfaction in your financial analysis skills. _________
16. The level of satisfaction in your marketing skills. _________
17. The level of satisfaction in your information technology skills. _________
18. The level of satisfaction in your human resource skills. _________
19. The level of satisfaction in your operations management skills. _________
20. The level of satisfaction in your understanding of business ethics. _________

SECTION III: INSTRUCTIONAL METHODS. RATE THE FOLLOWING INSTRUCTIONAL METHODS AS TO THEIR EFFECTIVENESS.

Use the following scale: 5=Very Effective 4=Somewhat Effective 3=Neither Effective nor Ineffective 2=Somewhat Ineffective 1=Very Ineffective

1. The use of case studies. _________
2. The use of slides or PowerPoint by the instructor. _________
3. The use of group projects. _________
4. The use of individual projects. _________
5. The use of classroom lectures. _________
6. The use of guest speakers. _________
7. The use of a research paper. _________
8. The use of real world examples. _________
9. The use of Internet based assignments. _________
10. The use of instructional technology such as videos. _________
11. The use of spreadsheets (example: Microsoft Excel) for assignments. _________
12. The use of field trips to outside sources of information. _________
13. The use of objective (multiple choice, true or false, one correct answer) exams. _________
14. The use of subjective (short answer or essay, more than one correct answer) exams. _________
15. The use of open classroom discussion. _________
16. The use of Socratic Questioning by the Instructor. _________