Online Blogucation
27 Jun 2012

What we can learn from peer-to-peer learning…

Recently, I read an article in the Chronicle of Higher Education about "P2PU," a rather unfortunate-sounding abbreviation for Peer 2 Peer University. (Here's a link to the article.) But despite the abbreviation, there is a lot we can learn in online education from what the founders of P2PU are trying to accomplish.

The long and the short of peer-to-peer learning is that students collaborate in such a way that each brings his or her expertise to bear in solving a problem or answering a question. For example, one person who studies sociology might team up with another person who studies mathematics, and between them, they have the requisite knowledge to teach themselves statistics for the social sciences.

At P2PU, and largely in the open-course environment, the idea is to have this kind of learning take place outside the confines of a traditional institution of higher education. I understand where they're coming from. Professors from traditional universities are branching out into MOOCs -- "massive open online courses." (There's a great article on these in Wired Magazine from a few months ago. You can find it here.) The professors who run such courses -- at schools such as MIT, Stanford, Princeton, and Michigan -- are struggling within their own institutions to determine how to provide some sort of useful credential for those who pass their courses, which in and of itself requires some bona fide way to assess all the students who participate in the course.

P2PU is attempting to solve the credentialing problem by offering badges for the completion of a course, much like the social badges many of us earn when reaching a new achievement in an online game. But don't take that as an indication that the work required for such a badge is trivial; far from it, there is a lot of learning to be done to earn badges such as "Critical Thinking" or "JavaScript Expert." The challenge faced by P2PU is whether anyone will recognize those badges as legitimate indicators of knowledge and skills.

For those of us who do teach at traditional institutions of higher education, there is a lot to learn here. Social learning has become the canon of online education, and just within the last 12 to 18 months. It's the foundation of peer-to-peer learning, and open universities like P2PU recognize that. MOOCs also recognize the value of a facilitating professor. And employers -- the vast majority of them, still -- recognize the value of a college degree or certificate.

I don't see that any of these strategies or perspectives is right all by itself. But, taken together, we have the ingredients for a successful online program. If you have your own ideas on this topic, please comment -- I'd appreciate some peer-to-peer learning with this blog post!

9 May 2012

Inter-Institutional Collaboration Project

At the Pearson Cite conference several weeks ago I met with representatives from several colleges who were interested in piloting a Pearson CourseConnect Analytics Edition (CoCo AE) course. The Analytics Edition versions are currently intended for LearningStudio customers who are also using the Learning Outcome Manager (LOM) tool to manage and track student progress toward mastery on learning outcomes.

CoCo AE courses come with student learning outcomes pre-mapped to presentation content and assignments. This mapping also includes assessment rubrics that contain both content-type and assignment-type criteria. Pearson’s course authors wrote performance level descriptors for the assignment-type criteria, which can be modified if desired. The course design team decided it would be best to allow individual instructors to define their own performance level descriptors for content-type criteria (see the rubric example below from the American Government course).

Sample CoCo Analytics Edition American Government Outcome Rubric
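Since the rubric image itself isn't reproduced here, here is a minimal sketch in Python of how such an outcome rubric might be structured. The class names, fields, and sample values are illustrative assumptions on my part, not the actual CoCo AE data model.

    # A minimal, illustrative sketch of an outcome rubric like the one pictured
    # above. Class names, fields, and sample values are assumptions for
    # illustration, not the actual CoCo AE data model.
    from dataclasses import dataclass, field

    @dataclass
    class Criterion:
        kind: str          # "assignment type" (pre-written) or "content" (instructor-defined)
        description: str
        # Performance level descriptors: level name -> descriptor text.
        levels: dict = field(default_factory=dict)

    @dataclass
    class OutcomeRubric:
        outcome: str       # the student learning outcome being measured
        criteria: list     # Criterion objects mapped to this outcome

    rubric = OutcomeRubric(
        outcome="Explain the system of checks and balances",  # assumed sample outcome
        criteria=[
            Criterion(
                kind="assignment type",
                description="Organization and clarity of the essay",
                levels={"Exemplary": "...", "Proficient": "...", "Developing": "..."},
            ),
            # Content-type descriptors start empty; each instructor defines their own.
            Criterion(kind="content", description="Accuracy of constitutional concepts"),
        ],
    )

The empty levels dictionary on the content-type criterion mirrors the design decision described above: those descriptors are left for individual instructors to define.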

Part one of the collaboration will be to bring teaching faculty together from participating institutions to work on group authorship of the content-type rubrics. We’ll create criteria banks by outcome that all faculty can choose from or adapt for their own instances of a course. Colleges will then run the selected CoCo AE course in their Winter/Spring 2013 terms. For the second collaboration component, we intend to work on a data-sharing project that will allow peer institutions to see anonymized, aggregated data on student performance against commonly taught outcomes. Our hope is to present this project at Pearson Cite 2013 in Chicago.
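To make the data-sharing component concrete, here is a rough Python sketch of the aggregation step under some assumed details (the record shape, outcome IDs, and mastery cut score are all invented for illustration): each institution contributes rubric scores with student identifiers already stripped, and only per-institution mastery rates by outcome are shared.

    # A rough sketch of the data-sharing idea: institutions share only anonymized
    # aggregates per outcome, never individual student records. The record shape,
    # outcome IDs, and the mastery cut score are assumptions for illustration.
    from collections import defaultdict

    # (institution, outcome_id, rubric_score) -- student identifiers already stripped
    scores = [
        ("College A", "AG-101.1", 3), ("College A", "AG-101.1", 4),
        ("College B", "AG-101.1", 2), ("College B", "AG-101.1", 4),
    ]

    MASTERY = 3  # assumed cut score on a 4-point rubric

    def aggregate(rows):
        """Roll raw rubric scores up into per-institution mastery rates by outcome."""
        totals = defaultdict(lambda: [0, 0])  # (inst, outcome) -> [mastered, total]
        for inst, outcome, score in rows:
            bucket = totals[(inst, outcome)]
            bucket[0] += score >= MASTERY
            bucket[1] += 1
        return {key: mastered / total for key, (mastered, total) in totals.items()}

    for (inst, outcome), rate in aggregate(scores).items():
        print(f"{inst} / {outcome}: {rate:.0%} at or above mastery")

Aggregates of this shape would also support the peer-group comparisons described below, since mastery rates can be compared across institutions without exposing any individual student record.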

Another possibility for this type of collaboration is to provide institutions with a new option for the inter-institutional comparability requirements that often accompany professional program accreditor reaffirmations. Historically, the most common compliance method has been a standardized assessment measure along with student surveys like the CCSSE or NSSE.

Imagine a future where you could define peer groups and then compare your students’ performance toward mastery of commonly defined learning outcomes against that of your peer institutions, and even against the very best institutions outside your peer group. My hope is that part of that future involves more inter-institutional collaboration among faculty and content providers like Pearson to create stronger, more effective curricula that can prove their effectiveness.

While comparison on its own is interesting, an extension would be to consider external benchmarking, where an institution identifies peers whose students tend to outperform those of the home institution. Jeffrey Alstete of the George Washington University Graduate School of Education and Human Development (1995) explains that the benchmarking strategy can be used to improve teaching and learning by studying processes and practices at institutions that excel and then adapting their methods to the context of the home institution.

A key value of benchmarking is that all institutions involved in the study expect something in exchange for participating in the project, even those recognized as best in class (Management Consulting Partners, 2008). This is not a passive endeavor for any participant, so if you’re interested in benchmarking, it’s important to obtain support from senior leadership and to recognize that the effort will require a significant time investment.

Benchmarking is yet another strategy available to add to the assessment toolkit for higher education. We’re excited to engage with our partners to provide direct data on student mastery of learning outcomes and welcome your feedback on additional ways that we can support continuous improvement efforts on your campus.

Works Cited

Alstete, J. W. (1995). Benchmarking in Higher Education: Adapting Best Practices To Improve Quality. ERIC Digest. Retrieved May 7, 2012 from http://www.eric.ed.gov/PDFS/ED402800.pdf.

Management Consulting Partners. (2008). Applying Benchmarking to Higher Education, 1(2). Retrieved May 7, 2012 from http://www.mcpartnersllc.com/download/Applying%20Benchmarking.pdf

Brian Epp | Assessment and Analytics Group Manager | Pearson eCollege

4 Apr 2012

Rinse Then Repeat: The Lost Secret to Preventing Plagiarism

Plagiarism.org defines plagiarism as an act of fraud: “It involves both stealing someone else's work and lying about it afterward.” Reading that definition, it is evident why plagiarism is such a serious issue. Acts of plagiarism can lead to suspension, expulsion, and even job termination. These are severe consequences for something that can be committed by accident.

What else can be offered about plagiarism that has not already been said? How much more can instructors and administrators hold students’ feet to the fire of academic mandates that suggest, “Here are the rules; you must play by them lest we hammer thou into the ground”? This was the tone of my graduate school “writing workshop orientation,” a mandatory session that left me with the sense that I could find myself in serious trouble for my writing without even knowing what I had done wrong.

As I progressed from student to higher education instructor (a role I have held since 2004), it became crystal clear that institutions come by their rigid posture against plagiarism honestly. They contend, on one hand, with paper mills and ghostwriters such as Ed Dante (the pseudonymous author of “The Shadow Scholar”) and, on the other, with the department chair who orders faculty to leave students alone when those students are caught in the web of plagiarism and should rightfully be held to the school and department standards of conduct. Perhaps more can be done to actively teach students how not to plagiarize, innocently or otherwise.

At Pearson’s eTeaching Institute, we often hear faculty who take our Web-based courses on designing and teaching online express concerns about preventing cheating. In such cases, we advise a more proactive approach, asking future online instructors to consider how they can encourage honesty in coursework as a way to preempt academic dishonesty. We believe this is a good position to take. In addition, I propose that a shared sense of partnership among students, faculty, and administration is a proactive step in the right direction to prevent plagiarism. After all, if we are going to maintain tight control with rigid anti-plagiarism mandates in place, the least we can do is move a bit closer in the direction of, “here are the rules, you must play by them AND I am going to help you.”

Plagiarism.org goes on to suggest that by giving proper authorship credit, we can avoid plagiarism. However, writing a good paper that avoids plagiarism involves much more than citing. If citing sources were all that is needed, why would plagiarism be such a huge issue, demanding large expenditures of academic energy and resources to prevent, detect, and punish it? Are students receiving enough hands-on resources and training to help them avoid plagiarism, beyond being handed the building number, address, or web site of the writing lab? Perhaps a greater sense of partnership with students is one way to help accomplish the goal.

Having evaluated many papers from undergraduate and graduate students over the past eight years, some replete with word-for-word transcripts from Wikipedia (including links to the plagiarized content listed as the source), I decided to try tactics different from the usual “don’t you dare.” The first task was to focus on that sense of partnership with my students. I decided to view plagiarism prevention as a shared responsibility that included some very positive and attainable steps students could take. After all, if we are to hold them to the standards of our plagiarism-deterrence tactics, the least we can do is show them how not to plagiarize, and not just in a one-time event, a syllabus policy, or a student handbook they may never read in the first place.

Could a more direct and repeated conversation be appropriate, followed by some very non-threatening steps on how to avoid plagiarism? Should we institutionalize approaches that tell our charges we recognize the temptation to take dishonest shortcuts, and then demonstrate that it is possible and relatively easy to avoid plagiarism? After arriving at, “I need to do this without making a part-time job out of it,” I developed my mini-lecture, a Cliffs Notes of sorts, covering a few brief but important steps to avoiding plagiarism.

The first step in my brief tutorial is to make sure students understand what plagiarism is and what its consequences are. In our August 2010 Online Blogucation entry, my colleague Dr. Jennifer Golightly noted that instructors should present clear and understandable statements about “what plagiarism is, how it’s defined, and what the consequences for committing it are.” This is a first and critical step in deterring plagiarism. I also believe that when that message is delivered alongside one of partnership, assistance, and the other measures Jennifer discussed, we will likely keep more students from committing plagiarism. Next, I encourage students to:

• Decide what the argument or premise of the paper will be. This may be assigned, but sometimes not.
• Find time to read journal articles or other sources that support the work. This is a requirement.
• Properly summarize and paraphrase sources. This does not mean changing a word here or there.
• Quote sources sparingly, using proper punctuation; another mandate.
• Deliberately cite sources within the body of the paper. This will give credibility to the work.
• Reference sources with a properly formatted works cited or reference page.
• Rinse, then repeat. Perform these steps throughout the entire paper.

The session takes about fifteen minutes, depending on Q&A, and I wrap things up by reiterating my commitment to their academic success (a commitment they should share) and inviting them to ask me for assistance when needed. Again, it takes much more than the steps above to write a good paper, but it’s a start.

Practices to prevent plagiarism may seem harsh to students who find themselves caught in the snare of the deed. However, they are necessary and should be refined based on our experiences with the problem. Many institutions see the wisdom of scaling punishment to the severity and number of offenses. Some schools employ student tutorials as a proactive measure. Others keep a pile of lopped-off heads in the back of the school; not a first choice in my book.

Where needed, faculty and administrators should ramp up their efforts to be partners in their students’ academic success, presenting regular reminders and brief how-to sessions on avoiding plagiarism. Additionally, we should find creative and cost-effective ways to help students make better decisions, such as instilling a sense of partnership and providing more orientation and training aimed at preventing plagiarism, before our students find themselves in really big trouble. The result could save valuable time for faculty and administrators; then schools can plant a nice flower bed where those heads are kept.

Do you have creative ideas about assisting students with preventing plagiarism? What do you think about an online student discussion forum with assignment endpoints addressing how not to plagiarize? What would be the benefit? Post your comments and suggestions in the space below.

Other Resources:
Best Practices to Promote Academic Integrity in Online Learning
Cheating

Ralph Kennedy, MSW
Academic Trainer & Consultant

14 Dec 2011

They’ve left us, but that doesn’t mean they aren’t still students.

2 Nov 2011

The Buzz on Assessment

I had the pleasure of attending the 2011 Assessment Institute in Indianapolis this week. The conference is the nation’s oldest and largest event focused exclusively on outcomes assessment in higher education. Administrators, faculty, and student affairs professionals convened to discuss techniques and approaches across outcomes assessment areas. This year, the event featured tracks on Capstone Experience, ePortfolios, and Faculty Development, among others.

I’d like to share with you three of the recurring themes I heard, and will take with me, from the keynotes, workshops, and best practice sessions. These points may serve as a marker for some of the noteworthy issues and considerations in the higher education outcomes assessment landscape.

The first two themes are linked in both process and practice, so I will identify them together: 1) faculty engagement, and 2) using results to inform improvement processes. For those of us who have been doing outcomes assessment for any extended period of time, these themes may echo many of the questions and issues, as well as the successes, we have faced.

The engagement of faculty in the assessment process is certainly not a new issue in the practice of assessment. Nevertheless, it is a reality many institutions are still striving, even stretching, to achieve. The shared understanding among practitioners gathered at the event, whether newly arrived at or long held, amounts to a resounding confirmation that faculty engagement in the assessment process is a critical component of successful assessment. In her 2010 paper “Opening Doors to Faculty Involvement in Assessment,” Pat Hutchings wrote:

“As Peter Ewell (2009) points out in another NILOA paper, from its early days in higher education, assessment was “consciously separated from what went on in the classroom,” and especially from grading, as part of an effort to promote “objective” data gathering (p. 19). In response, many campuses felt they had no choice but to employ external tests and instruments that kept assessment distinct from the regular work of faculty as facilitators and judges of student learning. In fact, the real promise of assessment—and the area in which faculty involvement matters first and most—lies precisely in the questions that faculty, both individually and collectively, must ask about their students’ learning in their regular instructional work: what purposes and goals are most important, whether those goals are met, and how to do better. As one faculty member once told me, “assessment is asking whether my students are learning what I am teaching.”

Further, presenters suggested that seeking faculty engagement should be seen not as a one-time achievement but as an ongoing and evolving effort that characterizes a campus assessment strategy. Although the issue is not a new one for assessment, the sentiment among conference participants is that garnering this engagement remains a key dynamic and often a great challenge. Several presenters urged the institutions represented at the conference to engage in cross-institutional dialogue to share strategies for fostering a deeper degree of faculty engagement.

The second recurring theme centers on the value, strategy, and purpose of assessment efforts, asking, “What’s it all for?” Assessment is hard work, and the growing sentiment appears to be a desire to see campus assessment efforts translate into actual impact on student learning, beyond the collection of data and documentation for accreditation and/or certification. This pull for results that impact student learning is a call to move beyond data collection and assessment planning to the informed and strategic improvement of teaching and learning based on the data. To make assessment more useful, we must include within our strategy an intentional approach to leveraging data and documentation to bridge the gaps between our current and improved realities. This process must be ongoing, and it undoubtedly must include faculty.

Finally, the third takeaway comes in the form of a resource. The National Institute for Learning Outcomes Assessment (NILOA) had a strong presence at the 2011 Assessment Institute. Several of the organization’s staff and associates, a notable group of internationally recognized experts on assessment, were keynote presenters. NILOA presenters pointed conference participants to what they called the ‘crown jewel’ of the organization’s efforts: a recently enhanced and robust website featuring a collection of papers, articles, presentations, websites, and survey results compiled in alignment with the organization’s vision of discovering and adopting promising practices in the assessment of college student learning outcomes. Reviewing the website will quickly reveal its valuable contribution to the field of assessment and its current issues, including those I’ve highlighted from the conference. Take a moment to explore this great resource by visiting www.learningoutcomeassessment.org.

It was certainly a rich experience to attend the conference and have the opportunity to share with institutions and hear the collective voice of higher education assessment practitioners.

Rachel Cubas
Academic Trainer & Consultant
Assessment & Analytics Group | Academic Training & Consulting (ATC)

References

Hutchings, P. (2010). Opening Doors to Faculty Involvement in Assessment. National Institute for Learning Outcomes Assessment.

25 Aug 2011

Hallmark #8 – Resources

We are nearing the end of our series on the Interregional Guidelines for the Evaluation of Distance Education, and this week we will consider the 8th Hallmark, which concerns resourcing. These Guidelines can be found here as presented by the Middle States Commission on Higher Education (MSCHE).  

Hallmark #8 reads: The institution provides sufficient resources to support and, if appropriate, expand its online learning offerings.

I think of this Hallmark as essentially saying: “put your money where your mouth is.” In our discussion of the previous Hallmarks, we’ve seen that online education must be integrated into an institution’s mission and operations, and must have strong student and faculty support. Hallmark #8 supports those previously discussed ideas by stipulating that the institution must actually provide the budget and resources to make its online goals happen. We know that resources are tight everywhere, but dedicating resources to a distance education program is an important way to show that the institution values that program; it also funds the training and services that help set the program up for success.

An institution seeking accreditation (for example, by MSCHE) will be asked to include evidence documenting how it is meeting the 9 Hallmarks for its online education program. MSCHE provides two areas of evidence that allow an institution to demonstrate that it is meeting Hallmark #8:

Analysis/Evidence:

  • The institution prepares a multi-year budget for online learning that includes resources for assessment of program demand, marketing, appropriate levels of faculty and staff, faculty and staff development, library and information resources, and technology infrastructure;

  • The institution provides evidence of a multi-year technology plan that addresses its goals for online learning and includes provision for a robust and scalable technical infrastructure.

These seem pretty straightforward: essentially, the institution needs to demonstrate that it has allocated sufficient resources to support its online educational goals. To truly support its online venture, it needs strong plans for marketing, faculty and staff support (including items laid out in this blog earlier), student support (as discussed here), a robust online curriculum (see discussion here), etc. The institution must also show commitment to the technological aspects of online course delivery, including the technical infrastructure and a great LMS, such as Pearson LearningStudio (not so subtle hint!).

Working through these items should help ensure the success of an online program and allow the institution to truly “put its money where its mouth is.” Good luck and happy budgeting!

– Gail E. Krovitz, Ph.D. –

Director of Academic Training & Consulting

Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf

10 Aug 2011

Hallmark #6: Qualified Faculty & Effective Faculty Support

Of the 9 Hallmarks that we’ve been discussing over the past few weeks, this one is perhaps one of the most obvious yet challenging. I am often reminded of a lesson I learned when working at a grocery store in my teenage years. I was asked to stack 20-lb. bags of potatoes on a lower shelf, and so I just started piling them on. Before I knew it, they were sliding and falling off the shelf. My supervisor came over to me to help. He said, “How do you build a house? Begin with a solid foundation.” He then proceeded to stack the bags of potatoes in long rows, side-by-side, packing them so that each additional bag supported the weight of previous bags. Problem solved. (This may seem obvious to us “grown-ups,” but hey, I was a teenager!)

The same concept rings true when building and maintaining an online program — build a solid foundation. And in this case, that solid foundation consists of quality faculty members who are sufficiently trained and supported to do what they need to deliver learning experiences that meet all students’ needs.

Now, for a bit of background, Hallmark #6 reads as follows, from the Middle States Commission on Higher Education’s (MSCHE) Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning):

Faculty responsible for delivering the online learning curricula and evaluating the students’ success in achieving the online learning goals are appropriately qualified and effectively supported.

MSCHE provides six points by which institutions can provide evidence that they are meeting this hallmark. Let’s break these down one at a time.

  • Online learning faculties are carefully selected, appropriately trained, frequently evaluated, and are marked by an acceptable level of turnover

Hopefully, this is already happening in any institution’s on-ground program, and so applying the same principles here should be relatively easy. However, one must also consider the “appropriately trained” part of the statement. (See below, also.) It is not enough simply to know how to teach, nor is it enough to know how to use a Learning Management System. Knowing how to teach online is the key here. There are different methods for engaging students in online courses than there are in on-ground courses. Take the lecture, for example. Many on-ground instructors still stick to the time-honored 45-minute lecture format. However, video recording that same 45-minute lecture and presenting it in a sit-n-get format in the LMS is a quick way to turn students into zombies who would rather do anything other than watch an instructor drone on. Even the most exuberant of instructors loses something in the conversion from live to video. It’s important to acknowledge that we must train our online instructors on not only the appropriate use of video but also the whole host of methods for engaging students in the online world.

  • The institution’s training program for online learning faculty is periodic, incorporates tested good practices in online learning pedagogy, and ensures competency with the range of software products used by the institution

Clearly, this point follows directly from the one above. A good framework to bear in mind when developing training for faculty is Mishra and Koehler’s (2006) “TPACK,” or Technological Pedagogical Content Knowledge. The authors ground their research in Shulman’s (1986) pedagogical content knowledge. Shulman argues that teaching teachers how to teach (pedagogy) should not be separated from the content that they are teaching. To use a simple example, teaching one group of instructors how to lead discussions in a history course is different from teaching another group how to lead discussions in a biology course. Mishra and Koehler add that learning to teach with technology, and specifically online, should not be separated from either the pedagogy or the content. TPACK sits at the center of the convergence of three circles: technology, pedagogy, and content. By covering all three bases, we can meet this second point.

  • Faculty are proficient and effectively supported in using the course management system

If we meet the above two points regarding training, we’ve won half of this battle. The key here is support. It is not enough just to give faculty one training on the LMS and say, “Go forth and conquer!” An effective training program will include ongoing support, not just for technical questions (i.e., a Help Desk) but also for questions around instructional design and best practices. Technology is ever-changing; therefore, knowledgeable support staff who are up to date with new technological tools and systems are required for maximum faculty effectiveness.

  • The office or persons responsible for online learning training programs are clearly identified and have the competencies to accomplish the tasks, including knowledge of the specialized resources and technical support available to support course development and delivery

This point is fairly straightforward, and I interpret this to mean that having a few go-to faculty super users is not enough to be considered a “training program.” Unfortunately, this happens often at smaller schools that do not have the budget to run their own training program. Fortunately, Pearson eCollege has the Academic Training & Consulting team, who can be engaged on an as-needed basis for training as well as the faculty instructional support discussed above.

  • Faculty members engaged in online learning share in the mission and goals of the institution and its programs and are provided the opportunities to contribute to the broader activities of the institution

While this point may sound a little too general to be implemented consistently, it is fairly straightforward: keep faculty in the loop. Too often, institutions with online programs, especially those that use a lot of adjunct instructors, simply put their faculty in front of computers and have them teach. But there is no broader context as to why they should teach for this institution, why they should teach online, what principles of the institution are important within all courses (online or otherwise), etc. Therefore, a structured communication system, be it an email distribution list, a newsletter, or the like, is required, and indeed useful, to make sure that all faculty are part of the institution and serve to meet its mission and goals.

  • Students express satisfaction with the quality of the instruction provided by online learning faculty members.

Regarding this final point, the reader can probably see that it addresses the value of student evaluations of instructors and ensures that the data from those evaluations actually matter. As with Brian McKay Epp’s previous blog post about formative and summative evaluations of student work, it is important to have both formative and summative evaluation of instructors’ ability to teach online. When formative data reveal proficiencies and deficiencies in instruction, the information can be used to tailor training programs that meet individual instructors’ needs.

In sum, Hallmark #6 is a valuable and well-thought-out list of measures that ensures that faculty are ready to be the solid foundation of your online learning program. Pearson eCollege’s Academic Training & Consulting team is ready to help your institution meet this hallmark!

Works Cited

Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved Aug. 4, 2011 from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf

Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A new framework for teacher knowledge. Teachers College Record 108 (6), 1017-1054.

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4-14.

Rob Kadel, Ph.D. | Training & Pedagogy Group, Academic Training & Consulting | Pearson eCollege

3 Aug 2011

Hallmark #5: Evaluation

This is the fifth in a series of posts on the 9 Hallmarks of Quality for Distance Education programs that were developed by the Council of Regional Accrediting Commissions (C-RAC) earlier this year.

The institution evaluates the effectiveness of its online offerings, including the extent to which the online learning goals are achieved, and uses the results of its evaluations to enhance the attainment of the goals (MSCHE, 2011).

As institutions seek to develop a culture of assessment that meets increasingly stringent accreditor requirements, a myth prevails that a pre-defined template exists that elegantly solves this ill-structured problem. The truth is that accreditors defer most of the responsibility to the institution, which must set its own mission (Hallmark #1), program goals, and individual course outlines that provide the learning experience required for students to demonstrate mastery of the curriculum. Accreditors evaluate the extent to which a school has developed an assessment approach that measures curricular and instructional effectiveness and shows how data are used to further the continuous improvement of student learning.

While this may be frustrating to read, there are definitely patterns and best practices that scholars of teaching and learning have developed which synthesize characteristics of successful accountability programs.

First, institutions must be purposeful in their assessment programs, which means there is a plan for what data to collect and how they will be used to improve student learning. A holistic assessment approach includes both formative and summative assessment, within courses and at the program level, so students have the ability to remediate their weaknesses before it’s too late. Programs new to assessment usually begin with evaluation of program-level goals and move into course-level assessment as they mature. Ideally, most assessment can be embedded within the course so that faculty of record can gather the data as part of their ongoing student assessment workflow.

This leads to a second major challenge: perfection can be the enemy of the good, or even of the ability to get better. Our partners often tell us they’re not ready for assessment, and we see academic leaders cycle through numerous models in their heads without ever actually implementing anything. Getting started creates informed use, which yields better questions and action plans going forward.

As we consult on assessment and methods to integrate technology into the outcome management process, we nearly always expose what seem like obvious gaps in curriculum and instruction. This is part of the continuous improvement process and the important thing is to remedy that gap and to then look for the next most critical issue to resolve.

Finally, I’ve often heard assessment experts encourage academic leaders to actually scale back the volume of data they’re collecting. As mentioned earlier, data are meaningless unless you take the time to analyze what you’ve gathered, diagnose gaps, and implement improvement action plans to address them. So you might consider assessing random samples of student artifacts instead of trying to assess every student each term, or you can assess all students against an outcome but only evaluate that outcome every two years, as sketched below.
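As a concrete illustration of the sampling idea, here is a minimal Python sketch that draws a fixed-size random sample of student artifacts per outcome instead of scoring every submission. The sample size, record shape, and outcome labels are assumptions for the example, not a prescribed workflow.

    # A sketch of the scaled-back approach: score a random sample of artifacts per
    # outcome rather than every student every term. The sample size, record shape,
    # and outcome labels are assumptions for the example.
    import random

    def sample_artifacts(artifacts, per_outcome=25, seed=None):
        """Group artifacts by outcome and draw up to `per_outcome` from each group."""
        rng = random.Random(seed)
        by_outcome = {}
        for artifact in artifacts:
            by_outcome.setdefault(artifact["outcome"], []).append(artifact)
        return {
            outcome: rng.sample(group, min(per_outcome, len(group)))
            for outcome, group in by_outcome.items()
        }

    # Usage: each dict stands in for one student submission tied to an outcome.
    artifacts = [{"outcome": f"SLO-{n % 3}", "student": n} for n in range(200)]
    for outcome, picks in sample_artifacts(artifacts, per_outcome=10, seed=1).items():
        print(outcome, len(picks))  # 10 artifacts drawn per outcome

Fixing the seed makes the draw reproducible, which matters if reviewers later need to re-identify which artifacts were scored in a given term.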

Our consultants have developed the following modules to support educators in meeting requirements for Hallmark #5.

  • Creating a Culture of Assessment
  • Writing Quality SLOs
  • Rubric Design
  • Curriculum Mapping (Institution > Program > Course)
  • SLOs and Impact on Course Design (Curriculum mapping within a course)
  • Fostering Faculty Ownership of Campus Assessment Culture
  • Closing the Loop - Ensuring that SLO Data Impacts Curriculum & Instruction

In addition to the purposeful management of student learning, Hallmark #5 also requires institutions to monitor and set goals for both in-course retention and student persistence through a degree program, along with the effectiveness of an institution’s academic and support services (MSCHE, 2011). Again, our consultants can work with you to develop custom reports that track and monitor retention and persistence using student activity and completion data from the LMS. We can also help identify at-risk students to support the requirement to measure the effectiveness of academic and support services, although this component certainly requires additional offline analysis of processes and services at the institution.

Let us know if you have recommendations for any additional content area we should develop or if you’d like more information on our consulting services.

Works Cited

Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf

Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting | Pearson eCollege

27 Jul 2011

Hallmark #4: Curricula

We are continuing the series that explores the new set of outcomes against which online education will be evaluated. These outcomes represent elements of standardization and quality espoused by every regional (and now national) accreditor for implementation in December 2011.

As you may remember from previous posts in the series, we’ve looked at the following three Hallmarks: (1) Mission & Purpose, (2) Planning, and (3) Governance & Academic Oversight. Curricula is the 4th focus in the series of 9 Hallmarks of Quality for the Evaluation of Online Learning.

Hallmark #4: Curricula
“Curricula for the institution’s online learning offerings are coherent, cohesive, and comparable in academic rigor to programs offered in traditional instructional formats.”

From this statement, you may have immediately picked up on the comparability of online programs to their on-ground counterparts. You may also have noticed the reference to academic rigor, a concept which many have regarded as a characteristic of traditional education but a challenge (even an impossibility) for online education (Sloan-C, 2010).

Even though this hallmark addresses the goals of coherence, cohesion and comparability between delivery modalities, you may be surprised to find that it also encompasses aspects of the student experience that you may not have originally gleaned.

For instance, consider that if your online courses or program would ever require an online student to come to campus, be it to take a test in a lab or to complete a special project, these expectations would have to be explicitly disclosed to the student upon matriculation into the online program:

Analysis/Evidence
“Expectations for any required face-to-face, on-ground work (e.g., internships, specialized laboratory work) are stated clearly;”

While it is an effective (and expected) practice to inform students of such expectations ahead of time, this has not always been the case, and forecasting such needs will certainly add administrative complexity to online programs.

Further, reflect on the implication that faculty and staff in online programs will generally be expected to be well-versed in the best practices of online learning, and that such knowledge should be evident in, and reflected by, the curriculum.

Analysis/Evidence
“The curricular goals and course objectives show that the institution or program has knowledge of the best uses of online learning in different disciplines and settings;”

We can begin to envision the initiatives that will need to be launched, and the processes that will need to be evaluated, with regard to developing faculty and staff and infusing that expertise, in an observable way, into the development and deployment of the curriculum.

One may have initially considered the Hallmark of Curricula to be largely a faculty concern. However, there are also notable elements in the Analysis/Evidence that carry significant administrative responsibilities. Consider the following:

Analysis/Evidence
“Curriculum design and the course management system enable active faculty contribution to the learning environment;”

“Scheduling of online learning courses and programs provides students with a dependable pathway to ensure timely completion of degrees;”

“The institution or program has established and enforces a policy on online learning course enrollments to ensure faculty capacity to work appropriately with students;”

How will administrators go about ensuring that the institution’s choice of course management system, and its policies and procedures concerning online course delivery, will allow for “active faculty contribution”? Further, how will program administrators balance the administrative processes around course offerings with planning for and delivering to each online student a dependable pathway to program completion? With the great variance in online class sizes and faculty load, what will it mean for institutions to establish and enforce course enrollment policies that ensure faculty’s ability to address student needs? These, and many others, are the implied administrative responsibilities of the Hallmark of Curricula.

To complete our brief exploration of this fourth of nine Hallmarks of Quality, the following are the remaining points of Analysis/Evidence as provided by the Middle States Commission on Higher Education (MSCHE, 2011):

Analysis/Evidence
“Curricula delivered through online learning are benchmarked against on-ground courses and programs, if provided by the institution, or those provided by traditional institutions;”

“The curriculum is coherent in its content and sequencing of courses and is effectively defined in easily available documents including course syllabi and program descriptions;”

“Course design and delivery supports student-student and faculty-student interaction;”

“Course and program structures provide schedule and support known to be effective in helping online learning students persist and succeed.”

Stay tuned for next week as we continue this series on the 9 Hallmarks of Quality in the Evaluation of Online Programs. We hope you enjoy exploring these along with us!

- Rachel Cubas, M.Sc -
Academic Trainer & Consultant
Assessment & Analytics Group, ATC Team
rachelc@ecollege.com

References:

Middle States Commission on Higher Education (MSCHE). (2011). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Philadelphia: Author.

Sloan-C. (2010). Class Differences: Online Education in the United States, 2010. Babson Park, MA: Babson Survey Research Group.

20 Jul 2011

Hallmark #3 – Governance and Academic Oversight

Third in a series of nine blog posts.

The Hallmarks of Quality #3

Online learning is incorporated into the institution’s systems of governance and academic oversight.

While my colleagues have written about Hallmarks #1 and #2, mission and planning, which are relative “no-brainers,” #3 begins to get more challenging. Many will cringe when reading this Hallmark, as I did. The words governance and academic oversight sound invasive from my perspective, but who among us would not want the institution we teach for to protect the integrity of its curriculum and ensure students are offered a valued education? The challenge with this assurance, however, is in the execution.

Academic governance takes many different forms among colleges, but I firmly believe the strongest and most highly regarded schools are those with full transparency about what occurs within the ivory tower and the classrooms. Instructors and administrators sharing resources, collaborating on essential student learning outcomes, and then allowing the creativity of the instructors to emerge makes us all stronger. The concept of online learning “shared governance” provides for individual and collective voices that matter.

We can retain academic freedom and still preserve quality education. This Hallmark is in place for on-ground classes and absolutely needs to be the same for online courses. Maybe I am naive, but does anyone really believe the standards of quality education should differ between the two modalities of instruction? While the debate rages on over the academic rigor of online classes, it seems to me it all boils down to the quality of the faculty. What good teachers do, they do online and face-to-face. So it is these faculty members, in conjunction with the administration, who need to be the ones providing the academic oversight.

The Middle States Commission on Higher Education (MSCHE) suggests that analysis and evidence of sustaining this Hallmark may be provided by:

  • An institution’s faculty having a designated role in the design and implementation of its online learning offerings;
  • An institution ensuring the rigor of the offerings and the quality of the instruction;
  • Approval of online courses and programs following standard processes used in the college or university;
  • Online learning courses and programs being evaluated on a periodic basis;
  • Contractual relationships and arrangements with consortial partners, if any, being clear and guaranteeing that the institution can exercise appropriate responsibility for the academic quality of all online learning offerings provided under its name.

As Gary A. Olson (2009), provost and vice president for academic affairs at Idaho State University, succinctly summarized:

"Clearly, when it comes to university governance, "shared" is a much more capacious concept than most people suspect. True shared governance attempts to balance maximum participation in decision making with clear accountability. That is a difficult balance to maintain, which may explain why the concept has become so fraught. Genuine shared governance gives voice (but not necessarily ultimate authority) to concerns common to all constituencies as well as to issues unique to specific groups.

The key to genuine shared governance is broad and unending communication. When various groups of people are kept in the loop and understand what developments are occurring within the university, and when they are invited to participate as true partners, the institution prospers. That, after all, is our common goal". (para. 17 & 18)

Let's embrace the opportunity!

Karen R. Owens, Ph.D.
Academic Assessment Consultant

References

Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf

Olson, G. (2009, July 23). Exactly what is "shared governance"? The Chronicle of Higher Education. Retrieved July 18, 2011 from http://chronicle.com/article/Exactly-What-Is-Shared/47065/