Online Blogucation
3 August 2011

Hallmark #5: Evaluation

This is the fifth in a series of posts on the 9 Hallmarks of Quality for Distance Education programs that were developed by the Council of Regional Accrediting Commissions (C-RAC) earlier this year.

The institution evaluates the effectiveness of its online offerings, including the extent to which the online learning goals are achieved, and uses the results of its evaluations to enhance the attainment of the goals (MSCHE, 2011).

As institutions seek to develop a culture of assessment that meets increasingly stringent accreditor requirements, a myth prevails that some pre-defined template exists to elegantly solve this ill-structured problem. The truth is that accreditors defer most of the responsibility to the institution, which must set its own mission (Hallmark #1), program goals, and individual course outlines that together provide the learning experience students need to demonstrate mastery of the curriculum. Accreditors then evaluate the extent to which a school has developed an assessment approach that measures curricular and instructional effectiveness and shows how data is used to drive the continuous improvement of student learning.

While this may be frustrating to read, scholars of teaching and learning have identified clear patterns and best practices that synthesize the characteristics of successful accountability programs.

First, institutions must be purposeful in their assessment program, which means having a plan for what data to collect and how it will be used to improve student learning. A holistic approach includes both formative and summative assessment, within courses and at the program level, so students can remediate their weaknesses before it’s too late. Programs new to assessment usually begin by evaluating program-level goals and move into course-level assessment as they mature. Ideally, most assessment can be embedded within the course so the faculty of record can gather the data as part of their ongoing student assessment workflow.
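To make the embedded approach concrete, here is a minimal Python sketch of how rubric scores gathered during normal grading in individual courses might roll up to a program-level outcome report. The record layout, outcome names, and the mastery threshold are illustrative assumptions for this example, not part of the C-RAC hallmarks or any particular LMS.

```python
from collections import defaultdict

# Each record is one rubric score a faculty member entered while grading
# regular coursework (embedded assessment):
# (student_id, course, program_outcome, score on a 0-4 rubric).
# The layout and the 3.0 mastery threshold are assumptions for illustration.
embedded_scores = [
    ("s01", "ENG101", "written_communication", 3.5),
    ("s01", "ENG201", "written_communication", 2.5),
    ("s02", "ENG101", "written_communication", 4.0),
    ("s02", "MTH110", "quantitative_reasoning", 2.0),
]

MASTERY_THRESHOLD = 3.0

def program_level_rollup(scores):
    """Aggregate course-embedded rubric scores into a per-outcome mastery rate."""
    by_outcome = defaultdict(list)
    for _student, _course, outcome, score in scores:
        by_outcome[outcome].append(score)
    return {
        outcome: sum(s >= MASTERY_THRESHOLD for s in vals) / len(vals)
        for outcome, vals in by_outcome.items()
    }

if __name__ == "__main__":
    for outcome, rate in program_level_rollup(embedded_scores).items():
        print(f"{outcome}: {rate:.0%} of scored artifacts at or above mastery")
```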

This leads to a second major challenge: perfection can be the enemy of the good, or even of the ability to get better. Our partners often tell us they’re not ready for assessment, and we see academic leaders cycle through numerous models in their heads without ever implementing anything. Getting started creates informed use, which yields better questions and action plans going forward.

As we consult on assessment and on methods for integrating technology into the outcome management process, we nearly always expose what seem like obvious gaps in curriculum and instruction. This is part of the continuous improvement process; the important thing is to remedy that gap and then look for the next most critical issue to resolve.

Finally, I’ve often heard assessment experts encourage academic leaders to scale back the volume of data they collect. As mentioned earlier, data is meaningless unless you take the time to analyze what you’ve gathered, diagnose gaps, and implement action plans to address them. So you might assess a random sample of student artifacts instead of every student each term, or assess all students against an outcome but evaluate that outcome only every two years.
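As a minimal sketch of the sampling idea, the snippet below draws a fixed-size random sample of artifact IDs for scoring. The sample size, ID format, and seeded generator are illustrative assumptions, not a statistically validated sampling plan; an institution would want to size the sample for its own reporting needs.

```python
import random

def sample_artifacts(artifact_ids, sample_size, seed=2011):
    """Draw a reproducible random sample of student artifacts to score.

    Scoring a sample each term, instead of every artifact, keeps the
    workload small enough that the results actually get analyzed.
    """
    rng = random.Random(seed)  # fixed seed so the sample can be re-audited
    if len(artifact_ids) <= sample_size:
        return list(artifact_ids)  # small sections: score everything
    return rng.sample(artifact_ids, sample_size)

# Hypothetical artifact IDs from one outcome's curriculum-mapped assignments.
artifacts = [f"essay-{n:03d}" for n in range(1, 201)]
print(sample_artifacts(artifacts, sample_size=25))
```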

Our consultants have developed the following modules to support educators in meeting requirements for Hallmark #5.

  • Creating a Culture of Assessment
  • Writing Quality SLOs
  • Rubric Design
  • Curriculum Mapping (Institution > Program > Course)
  • SLOs and Impact on Course Design (Curriculum mapping within a course)
  • Fostering Faculty Ownership of Campus Assessment Culture
  • Closing the Loop - Ensuring that SLO Data Impacts Curriculum & Instruction

In addition to the purposeful management of student learning, Hallmark #5 requires institutions to monitor and set goals for in-course retention and for student persistence through a degree program, along with the effectiveness of the institution’s academic and support services (MSCHE, 2011). Again, our consultants can work with you to develop custom reports that track and monitor retention and persistence using student activity and completion data from the LMS. We can also help identify at-risk students in support of the requirement to measure the effectiveness of academic and support services, although this component certainly requires additional offline analysis of processes and services at the institution.
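As an illustration of the kind of custom report described above, here is a minimal Python sketch that computes an in-course retention rate and flags low-activity students from exported LMS data. The record format, status values, and activity threshold are assumptions made for the example, not the schema of any particular LMS product.

```python
# Hypothetical rows from an LMS activity/completion export:
# (student_id, logins_in_last_14_days, course_status).
# Field names, status values, and thresholds are illustrative assumptions.
enrollments = [
    ("s01", 9, "completed"),
    ("s02", 1, "withdrawn"),
    ("s03", 0, "active"),
    ("s04", 6, "active"),
]

MIN_LOGINS = 3  # assumed activity floor for an engaged student

def in_course_retention(rows):
    """Share of enrolled students not lost to withdrawal."""
    retained = sum(status != "withdrawn" for _, _, status in rows)
    return retained / len(rows)

def flag_at_risk(rows):
    """Active students whose recent activity falls below the floor."""
    return [sid for sid, logins, status in rows
            if status == "active" and logins < MIN_LOGINS]

print(f"In-course retention: {in_course_retention(enrollments):.0%}")
print("At-risk students:", flag_at_risk(enrollments))
```

A report like this only satisfies part of the hallmark; as noted above, measuring the effectiveness of academic and support services still requires offline review of the institution’s own processes.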

Let us know if you have recommendations for any additional content area we should develop or if you’d like more information on our consulting services.

Works Cited

Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf

Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting | Pearson eCollege
