Chapter 7 - Student Success Measures
Abstract
Student success is a very complex concept and is certainly challenging to measure. The Mānoa Institutional Research Office (MIRO) built a comprehensive toolbox of student success data, both quantitative and qualitative, and helped the university better support student success through more accurate, accessible, and actionable data.
Introduction
“Student success” is a phrase often bounced around in higher education, but what exactly does it mean? Institutions often use retention and graduation rates as popular student success measures, but many factors shape whether students feel successful in accomplishing what they came to school for. Some students attend college mainly to explore an intellectual journey, some focus on preparing themselves for better career opportunities, and some care most about having a good time; it varies for each student, which makes defining or measuring student success quite challenging for institutional researchers.
Over the years, MIRO has concluded that there is no one-size-fits-all definition, nor is there one common idea across the board. Our goal is not to fix a definition or set universal success measures, but to use different data to draw a more comprehensive picture of the diverse and constantly changing concepts of student success. We encourage readers to keep thinking about what student success means to them, to their students, and to their institution throughout this chapter.
MIRO’s Triple-A Data Strategy
There is a saying that “not everything that counts can be counted,” meaning what really matters might not even be measurable. There are great ideas of what student success is, but from an institutional research perspective, we need to be mindful of what data is available in the university’s database. Some data that truly matters might be difficult to collect or not be available at all. MIRO’s approach to preparing student success data is to make it more accurate, accessible, and actionable for campus decision makers. We call it the Triple-A data strategy.
Make Data More Accurate (Strategy 1)
Data has limitations, and people interpret it in different ways. Some popular data measures may not suit the university’s needs for tracking different groups of students. For example, the IPEDS retention and graduation rates only include first-time, full-time student cohorts; the calculation excludes transfer and graduate students even though they make up a majority of UH Mānoa’s student population. Additionally, for certain student populations the IPEDS method often yields small sample sizes, and small cohorts produce large year-to-year swings in retention and graduation rates. For example, UH Mānoa does not have a large African American student population, so the cohorts tracked are small. In Figure 1, the 4-year graduation rates of African American students varied from 4% to 33% and the 6-year graduation rates varied from 25% to 55%, with no steady historical trend.
Figure 1: Graduation Trends of African American Students
(Corresponding Video Here)
MIRO does not recommend using such fluctuating data as evidence for key conclusions, but there are few other commonly accepted measures that give decision makers better data. Hence, we took the matter into our own hands and developed a more inclusive and easy-to-understand method to track persistence rates, which we elaborate on later in this chapter.
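To make the small-cohort problem concrete, the short sketch below shows how a single additional graduate can move a rate by several percentage points when the tracked cohort is small. The cohort sizes and counts are hypothetical, for illustration only; they are not UH Mānoa data.

```python
# Illustration: why small cohorts produce volatile graduation rates.
# All cohort sizes and graduate counts below are hypothetical.

def graduation_rate(graduates: int, cohort_size: int) -> float:
    """Graduation rate as a percentage of the tracked cohort."""
    return 100 * graduates / cohort_size

# In a cohort of 12, one additional graduate moves the rate by ~8 points.
small = [graduation_rate(g, 12) for g in (3, 4, 5)]
print([f"{r:.1f}%" for r in small])   # ['25.0%', '33.3%', '41.7%']

# In a cohort of 1,200, one additional graduate barely registers.
large = [graduation_rate(g, 1200) for g in (300, 301, 302)]
print([f"{r:.1f}%" for r in large])   # ['25.0%', '25.1%', '25.2%']
```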
Make Data More Accessible (Strategy 2)
One of the biggest challenges of using data to improve student success is the accessibility of information. To make data-informed decisions, people first need access to the data, and the data has to be easy to understand. Institutional research offices can certainly address a lot of data inquiries, but it takes time to pull the data, analyze it, and prepare reports and charts.
MIRO created a Decision Support System platform where people can easily access data and reports. MIRO actively seeks to make data customizable so that users can tailor results to their needs, so many filters were added to let faculty and staff customize data the way they need it, anytime and anywhere. To help users understand what each web app does and how to use it to address their data needs, MIRO created video tutorials for most of the web apps. Beyond the convenience of accessing data, this approach has a democratic appeal: people have equal access to the same information, regardless of their position or authority.
Figure 2: Web Apps and other Resources to Improve Data Accessibility
(Corresponding Video Here)
Make Data More Actionable (Strategy 3)
Student success centers on making sure students accomplish what they came to do at their colleges or universities. Numerical data is helpful, but it is hard to capture individual experiences and challenges through numbers alone. To balance the heavy workload involved in qualitative data collection and analysis, MIRO uses open-ended questions in online surveys to collect data on student experiences and feedback. Some of the most useful surveys MIRO administered were the series of Campus Experience surveys, which mainly include open-ended questions that let students talk about their experiences anonymously and in their own words. Students are quite open, honest, and transparent about their appreciation and challenges. They also make specific suggestions that can advise campus decision makers on how to better support them. This valuable information is truly a gold mine for the university to dig through and utilize. Not only does it help the office gain a more in-depth understanding of student experiences, it also helps Mānoa decision makers make actionable changes. MIRO created several qualitative data web app tools that give faculty and staff easy access to genuine and concrete feedback from students so they can act on students’ needs.
Organizing Student Success Data
At UH Mānoa, MIRO has had many conversations with faculty, staff, and administrators to learn about their vision of student success, and has always been open to keeping those conversations going. MIRO staff also reviewed other commonly used student success measures in higher education and organized the measures into six categories: Persistence, Degrees Awarded, Time-to-Degree, Course Performance (GPA), NSSE, and College Experience & Satisfaction. MIRO’s Decision Support System provides customizable student success data in all six categories to campus decision makers.
Student Success Measure 1: Persistence Data
Persistence rate data is often considered an overall student success measure because it combines both retention rates and graduation rates. MIRO has a few web apps that use different calculation methods, and one of them uses the widely accepted persistence rate calculation in higher education: the Integrated Postsecondary Education Data System (IPEDS) standard. As mentioned earlier, the IPEDS method includes neither transfer students nor graduate students, so it does not show the whole picture of an institution’s performance in retaining and graduating students. The IPEDS method can also produce small sample sizes and large swings in retention and graduation rates, making it harder to track persistence by major and for certain student groups.
To address the limitations of the IPEDS method, we developed a new way to track student persistence. Unlike the IPEDS method, Mānoa’s persistence rate calculation tracks all registered degree-seeking undergraduate students, including transfers, so that the cohort tracked is representative of the targeted student population. To understand how we calculate this, assume there were 100 degree-seeking undergraduate students enrolled at the start of fall 2015. These students form a cohort that the persistence rate tracks over the years. Say that among the cohort, 10 students graduated in the fall 2015 semester, while 85 students enrolled for the following spring 2016 semester. In total, 95 students either graduated or were retained, while 5 students neither graduated nor returned, which means the semester-to-semester persistence rate is 95% and the semester-to-semester attrition rate is 5%.
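The sketch below walks through that same arithmetic. The cohort numbers mirror the fall 2015 example above; the variable names are illustrative, not MIRO’s actual data schema.

```python
# A minimal sketch of the semester-to-semester persistence arithmetic
# described above, using the fall 2015 example numbers. Names and
# structure are hypothetical, for illustration only.

cohort_size = 100   # degree-seeking undergraduates enrolled in fall 2015
graduated   = 10    # cohort members who graduated in fall 2015
re_enrolled = 85    # cohort members enrolled again in spring 2016

persisted = graduated + re_enrolled              # graduating OR returning both count
persistence_rate = 100 * persisted / cohort_size
attrition_rate   = 100 - persistence_rate

print(f"Persistence: {persistence_rate:.0f}%")   # Persistence: 95%
print(f"Attrition:   {attrition_rate:.0f}%")     # Attrition:   5%
```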
From MIRO’s experience working with campus data users, data generated by this method can address far more questions than the IPEDS method. Returning to the example of Black or African American undergraduate students mentioned earlier (see Figure 1), MIRO’s method produces much more consistent persistence and graduation rates (see Figure 3). Specifically, among all the degree-seeking African American undergraduate students who enrolled in fall 2015, approximately 65% were still enrolled or had graduated within 6 years. The persistence rates over the years are fairly consistent in the mid-to-high 60s, a steadier trend than the fluctuating rates generated by the IPEDS method, which ranged from 25% to 55%.
Figure 3: Undergraduate African American Students Persistence Rate
(Corresponding Video Here)
MIRO’s method of calculating persistence rates is also more helpful for tracking success rates of transfer students, degree programs, and graduate students, all of which are important student groups for a research university like UH Mānoa. Tracking these groups would not be possible with the IPEDS persistence rate calculation.
Student Success Measure 2: Degrees Awarded
Most students attend universities to earn degrees, which is why the Mānoa Institutional Research Office considers the number of degrees awarded a key indicator of student success. The degree web apps provide data users with accessible historical trends for different types of degrees awarded at UH Mānoa, such as bachelor’s, master’s, and doctoral degrees, as well as customizable data on specific degrees such as the BA, BS, MBA, MFA, and Ph.D.
Figure 4: MIRO Degree Trend Web App
(Corresponding Video Here)
Student Success Measure 3: Time-to-Degree
Time-to-degree gives people an idea of how long it takes students to complete their degrees. A shorter time-to-degree is desirable because it implies that students graduate in a timely manner, which is why it is considered an element of student success. At the University of Hawai‘i at Mānoa, undergraduate time-to-degree is reported separately for first-time, full-time students and transfer students, because transfer students may enter with credits that affect their time-to-degree. To calculate time-to-degree, we first identify a cohort of students who graduated in a certain time period, usually a fiscal year, then look backwards to measure the time between each student’s first term pursuing the degree and their graduation. MIRO’s Time-to-Degree web app includes different entry cohorts in the filter to track first-time and transfer students separately. The default selection is first-time, full-time freshmen, and the data generated from this group is used to track the university’s historical trend of time-to-degree (see Figure 5).
Figure 5: Full-Time, First-Time Freshmen Time to Degree
(Corresponding Video Here)
This report shows that the average time-to-degree for Mānoa students earning bachelor’s degrees has decreased steadily over the past decade, a positive and desirable trend. As with MIRO’s other web apps, many demographic and academic-program filters let users select the student groups they are interested in. If a selection ends up with a small sample size, MIRO recommends using the median rather than the mean, to keep outliers from skewing the data.
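The sketch below works through the calculation described above and shows why the median is the safer summary for small groups. The cohort values are hypothetical, chosen only to illustrate the effect of an outlier.

```python
# A minimal sketch of the time-to-degree calculation described above,
# assuming each graduate's elapsed time (first term pursuing the degree
# to graduation) is already expressed in years. Values are hypothetical.
from statistics import mean, median

# Years to degree for a small, hypothetical cohort of one fiscal year's
# graduates; the last student is an outlier who took much longer.
years_to_degree = [4.0, 4.0, 4.5, 4.5, 5.0, 9.5]

print(f"Mean:   {mean(years_to_degree):.2f} years")    # 5.25, pulled up by the outlier
print(f"Median: {median(years_to_degree):.2f} years")  # 4.50, robust to the outlier
```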
Student Success Measure 4: GPA
How students perform in class is an important measure of their academic achievement, and GPA is the most important and most commonly used measure of class performance. GPA at UH Mānoa is determined by taking the total number of grade points and dividing it by the total number of credit hours for which the student received a letter grade. In the online reports, MIRO provides both a “current semester GPA” and a “cumulative GPA”: the current semester GPA covers all courses taken at UH Mānoa in a given semester, and the cumulative GPA covers all semesters at UH Mānoa. The online reports show both GPA measures along with the percentage of students who fall into different GPA ranges (see Figure 6).
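The sketch below applies the formula just described, total grade points divided by total letter-graded credit hours. The course list and the 4-point grade values are illustrative assumptions, not UH Mānoa records.

```python
# A minimal sketch of the GPA formula described above. The grade scale
# and course data are hypothetical, for illustration only.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

# (letter grade, credit hours) for one hypothetical semester
courses = [("A", 3), ("B", 4), ("A", 3), ("C", 2)]

grade_points = sum(GRADE_POINTS[g] * hrs for g, hrs in courses)
credit_hours = sum(hrs for _, hrs in courses)
semester_gpa = grade_points / credit_hours

print(f"Semester GPA: {semester_gpa:.2f}")  # Semester GPA: 3.33
# The cumulative GPA applies the same formula to all semesters combined.
```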
Figure 6: GPA Trend for Fall Semester, Degree-Seeking Undergraduate Students
(Corresponding Video Here)
MIRO’s GPA web app has many other filters for creating customized reports. For example, users can look at GPA data for students who transferred from University of Hawaiʻi community colleges, as well as by geographic origin, veteran status, first-generation status, high school type, racial group, and academic program. Users can explore all kinds of data and now have greater opportunities to use data to tell their own stories.
Student Success Measure 5: National Survey of Student Engagement (NSSE)
At UH Mānoa, NSSE is also considered a key student success data source. NSSE’s founding director, George Kuh, promotes the concept of student engagement as an important factor in student success. He describes student engagement as a family of constructs that measure the time and energy students devote to educationally purposeful activities—activities that matter to learning and student success (Kuh, n.d.; 2001).
Understanding the importance of NSSE data to our university, MIRO made great efforts to improve the NSSE survey response rate, raising it to 52%, and created a series of online data tools to disseminate NSSE data on student engagement, skill development, and high-impact practices such as undergraduate research and study abroad. NSSE data covers a wide range of college experience aspects that many offices and programs may find helpful, and NSSE makes great resources available on its website, such as the “NSSE Item Campuswide Mapping,” which helps offices quickly locate the NSSE data most relevant to their work. MIRO built a series of NSSE web apps to disseminate the data collected from over 100 NSSE questions, which provide valuable information about students’ coursework, campus activities, services, and overall satisfaction. For example, the Engagement Indicators web app displays data for 10 indicators under 4 themes: Academic Challenge, Learning with Peers, Experiences with Faculty, and Campus Environment. Users can select from the NSSE “year” filter to generate reports for a single year or click “comparison” to view multi-year data, which provides much richer information for interpreting trends. Figure 7 shows comparative data trends for collaborative learning between 2015 and 2020.
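The sketch below shows the kind of year-over-year aggregation behind a comparison view like Figure 7. The records, field names, and the assumption that student-level indicator scores are already on NSSE’s 0–60 scale are all illustrative; this is not NSSE’s or MIRO’s actual schema or scoring code.

```python
# A minimal sketch of a multi-year engagement indicator comparison.
# All records and field names are hypothetical, for illustration only.
from collections import defaultdict
from statistics import mean

# Hypothetical student-level scores for one engagement indicator
# ("Collaborative Learning"), assumed already scaled to 0-60.
records = [
    {"year": 2015, "score": 32.1}, {"year": 2015, "score": 35.4},
    {"year": 2020, "score": 36.8}, {"year": 2020, "score": 38.2},
]

# Group the scores by survey year.
by_year = defaultdict(list)
for r in records:
    by_year[r["year"]].append(r["score"])

# Year-over-year comparison: mean indicator score per survey year.
for year in sorted(by_year):
    print(f"{year}: {mean(by_year[year]):.1f}")
```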
Figure 7: Collaborative Learning Engagement Indicator
(Corresponding Video Here)
Student Success Measure 6: College Experience & Satisfaction
A question many people ask is “what kind of help do students truly need and how can the university better support them?” Although there is no silver bullet answer, MIRO believes that humbly listening to student voices is always helpful when trying to understand their real needs and find ways to better support their success.
In the open-ended survey responses, many students recognize the importance of holding themselves accountable for their own college success, especially when answering why the current semester is or is not going well. MIRO created an indicator called “student accountability” to summarize comments that touch on the topic. Students mentioned that they felt successful because they tried their best to step out of their comfort zone to make more friends or try new things. They also felt successful because they set short- and long-term academic goals to make sure they were on the right track, and even worked on managing their work-life balance. Sharing those ideas and experiences with other students can be convincing, because the comments come directly from their peers. Natural language processing and the dissemination of open-ended survey results are among MIRO’s unique strengths.
A series of innovative web apps helps users locate useful student feedback among tens of thousands of anonymous narratives. Responses remain anonymous and are organized by question, so faculty and staff can use the web app’s filters to select the questions they want to see and the student populations they are interested in. From there, they can read through students’ suggestions to understand students’ daily experiences: what they think, how they feel, and what they would recommend changing.
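To give a flavor of how an indicator like “student accountability” might be attached to comments, the sketch below uses a simple keyword match. This stands in for MIRO’s actual natural language processing, which is not described in detail here; the comments and keyword list are hypothetical.

```python
# A minimal sketch of tagging open-ended comments with an indicator
# such as "student accountability." Keyword matching is a stand-in for
# MIRO's actual NLP; comments and keywords are hypothetical.

ACCOUNTABILITY_KEYWORDS = ["comfort zone", "goals", "on track", "work-life balance"]

comments = [
    "I set short and long term goals to make sure I stay on track.",
    "Parking near campus has been a real challenge this semester.",
]

def tag_accountability(comment: str) -> bool:
    """Flag comments that touch on students holding themselves accountable."""
    text = comment.lower()
    return any(kw in text for kw in ACCOUNTABILITY_KEYWORDS)

for c in comments:
    label = "student accountability" if tag_accountability(c) else "(untagged)"
    print(f"{label}: {c}")
```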
In our opinion, the student voice is the most valuable student success data. MIRO truly believes that this data is more important and actionable than the other success measures combined because it helps the university understand what obstacles students face, how they overcome them, and what success means for them.
Closing Remarks
Student success is a very complex concept and is certainly challenging to measure. It is important for institutional researchers to provide as many data points as possible to campus decision makers, so they can develop a more comprehensive understanding of the issues they would like to work on. What we have shared here is only the tip of the iceberg of the rich student success data that MIRO’s tools and reports can offer. MIRO continues to promote these data tools and reports by giving presentations, publishing data briefs, and hosting virtual symposiums.
To help students overcome their obstacles and reach their full potential, nothing works better than hearing directly from students about their experiences and suggestions. After all, student success is about the students; their opinions and experiences should matter the most.
We believe that retention and graduation rates should be a reflection of our work, not the goal. As economist Charles Goodhart said, “when a measure becomes a target, it ceases to be a good measure” (Mattson, 2021). We should never lose sight of what matters, and MIRO always tries to find ways to deliver that message.