Online Blogucation
14 Dec 2011

They’ve left us, but that doesn’t mean they aren’t still students

The National Student Clearinghouse Research Center released a Snapshot Report last week with some interesting new data on student persistence. To obtain a copy of the report, visit http://research.studentclearinghouse.org. According to the Research Center, "students were counted as having persisted if they: 1) remained enrolled in any postsecondary institution 60 days after the end of the term that included October 15, 2010 or 2) completed a degree within 60 days of the end of the term that included October 15, 2010."
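To make that counting rule concrete, here is a minimal sketch of it in Python. The function name, field names, and the example term-end date are hypothetical, chosen only for illustration; the 60-day window and the two qualifying conditions come from the definition above.

```python
from datetime import date, timedelta
from typing import Optional

def persisted(term_end: date,
              enrolled_through: Optional[date],
              degree_date: Optional[date]) -> bool:
    """Clearinghouse-style rule: a student persisted if they remained
    enrolled in any postsecondary institution 60 days after the end of
    the term, or completed a degree within 60 days of the term's end."""
    cutoff = term_end + timedelta(days=60)
    still_enrolled = enrolled_through is not None and enrolled_through >= cutoff
    completed = degree_date is not None and degree_date <= cutoff
    return still_enrolled or completed

# Hypothetical fall term (including October 15, 2010) ending December 17, 2010
term_end = date(2010, 12, 17)
print(persisted(term_end, date(2011, 5, 6), None))   # enrolled the next spring -> True
print(persisted(term_end, None, date(2011, 1, 20)))  # degree within 60 days -> True
print(persisted(term_end, date(2010, 12, 17), None)) # left after the term -> False
```

Note that the rule is indifferent to *which* institution the student is enrolled at after the cutoff, which is exactly what lets the Research Center count transfer students as persisters.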

The Research Center was able to identify students persisting in higher education regardless of whether they remained at a single institution or moved among institutions. Accounting for this student movement, researchers found that overall, 84.7% of students persisted in higher education. Data were further broken down between full- and part-time status, with 92.5% of full-time and 71.2% of part-time students identified as persisting. An examination of persistence rates by type of institution attended revealed that the highest rate (91.4%) was found among students attending private, not-for-profit, 4-year institutions, while the lowest rate (74.9%) was among students attending public, 2-year institutions.

These findings are encouraging, as they show that while some students leave an institution before earning a degree or certificate, many continue their education at another institution. These "leavers" are typically viewed as drop-outs, an undesirable outcome from the institution's perspective. But the data reported by the Research Center show that many of these students are, in fact, persisting; they have simply moved from one institution to another.

Institutions participating as data providers to the National Student Clearinghouse are able to use the data to determine how many of their former students are continuing at other institutions, and can make adjustments to their own reports on persistence and completion. The data can also be useful to states and others interested in better understanding the enrollment patterns of today's college students.

The bottom line for those of us interested in seeing all students succeed is that the picture is not as bleak as our previous, incomplete data on persistence would have us believe. Even more importantly, these findings suggest that students are willing to continue their education even if, for whatever reason, they have left one institution at some point along the way.
Kimberly Thompson

2 Nov 2011

The Buzz on Assessment

I had the pleasure of attending the 2011 Assessment Institute in Indianapolis this week. The conference is the nation’s oldest and largest event focused exclusively on outcomes assessment in higher education. Administrators, faculty, and student affairs professionals convened this week to discuss techniques and approaches across outcomes assessment areas. This year, the event featured tracks on Capstone Experience, ePortfolios, and Faculty Development, among others.

I’d like to share a few of the recurring themes I heard and will take with me from the keynotes, workshops, and best-practice sessions. Specifically, I will share three themes that may serve as markers for some of the noteworthy issues and considerations in the higher education outcomes assessment landscape.

The first two themes are indeed linked in both process and practice, so I will identify both of them at this point. They are: 1) Faculty Engagement and 2) Using Results to Inform Improvement Processes. For those of us who have been doing outcomes assessment for any extended period of time, these themes may echo many of the questions and issues as well as the successes we have faced.

The engagement of faculty in the assessment process is certainly not a new issue in the practice of assessment. Even so, faculty engagement in outcomes assessment is a reality many institutions are still striving, even stretching, to achieve. The collective understanding among practitioners gathered at the event appears to reveal an arrival, or perhaps a standstill in some cases, at a place of resounding confirmation: faculty engagement in the assessment process is a critical component of successful assessment. In her 2010 paper entitled “Opening Doors to Faculty Involvement in Assessment,” Pat Hutchings wrote:

“As Peter Ewell (2009) points out in another NILOA paper, from its early days in higher education, assessment was “consciously separated from what went on in the classroom,” and especially from grading, as part of an effort to promote “objective” data gathering (p. 19). In response, many campuses felt they had no choice but to employ external tests and instruments that kept assessment distinct from the regular work of faculty as facilitators and judges of student learning. In fact, the real promise of assessment—and the area in which faculty involvement matters first and most—lies precisely in the questions that faculty, both individually and collectively, must ask about their students’ learning in their regular instructional work: what purposes and goals are most important, whether those goals are met, and how to do better. As one faculty member once told me, “assessment is asking whether my students are learning what I am teaching.”

Further, the notion was submitted that seeking faculty engagement should not be seen as a one-time achievement but as an ongoing and evolving effort that characterizes a campus assessment strategy. Although the issue is not a new one for assessment, the collective sentiment among conference participants is that garnering this engagement remains a key dynamic and often a great challenge. Several presenters urged the institutions represented at the conference to engage in cross-institutional dialogue to share strategies for fostering a deeper degree of faculty engagement.

The second recurring theme centers on a question of the value, strategy, and purpose of assessment efforts, asking, “What’s it all for?” Assessment is hard work. The growing sentiment appears to be a desire to see campus assessment efforts translate into actual impact on student learning, beyond the collection of data and documentation for accreditation and/or certification. This pull for results that impact student learning is a call to move beyond data collection and assessment planning to the informed, strategic improvement of teaching and learning based on the data. To make assessment more useful, we must include within our strategy an intentional approach to leveraging data and documentation to help bridge the gaps between our current and improved realities. This process must be ongoing. And it undoubtedly must include faculty.

Finally, the third takeaway comes in the form of a resource. The National Institute for Learning Outcomes Assessment (NILOA) had a strong presence at the 2011 Assessment Institute. Several of the organization’s staff and associates, a notable group of internationally recognized experts on assessment, were keynote presenters. NILOA presenters pointed conference participants to what they called the ‘crown jewel’ of the organization’s efforts: a recently enhanced and robust website featuring a collection of papers, articles, presentations, websites, and survey results compiled in alignment with the organization’s vision for discovering and adopting promising practices in the assessment of college student learning outcomes. Reviewing the website will quickly reveal its valuable contribution to the field of assessment and to current issues, including those I’ve highlighted from the conference. Take a moment to explore this great resource by visiting www.learningoutcomeassessment.org.

It was certainly a rich experience to attend the conference and have the opportunity to share with institutions and hear the collective voice of higher education assessment practitioners.

Rachel Cubas
Academic Trainer & Consultant
Assessment & Analytics Group | Academic Training & Consulting (ATC)

References

Hutchings, P. (2010). Opening Doors to Faculty Involvement in Assessment. National Institute for Learning Outcomes Assessment.

13 Jul 2011

Hallmark #2 – Planning

Fasten your seatbelt and hold on to your hat! This week we are going to talk about planning with regard to the Middle States accreditation guidelines. While I say that a bit facetiously, it is actually a little piece of the canvas that is part of a bigger, more exciting work. By standardizing accreditation requirements for higher education online learning programs across regions, those of us firmly planted in online learning can take a huge leap forward in demonstrating (with statistics, research, and data) that what we are doing is not only catering to a growing market’s demands but is also supported by pedagogy and evidence showing that students in fully online programs are learning, competing with, and often exceeding their on-ground counterparts.

There are nine hallmarks in the Middle States accreditation guidelines, and today we look closely at #2, Planning. On a side note, here is some background on this series of blogs. After an introduction to the overall Distance Education Programs: Interregional Guidelines for the Evaluation of Distance Education (Online Learning), each person on our team (the Academic Consulting team at Pearson eCollege) took a hallmark to focus on and fully explain. In the draw, I got #2, Planning.

Now, as I plan for this blog (I deliberately chose the word plan, in case you missed that), I can see how apropos it is that I have the planning topic. I am a planner to the point, some might say, of clinical neurosis. I am the person who, when the seatbelt light goes off as the plane pulls into the gate, gets up and finds my car keys and credit card, so that at the end of the very long walk to my car I can jump in, start the car, and proceed to pay for parking. Downtime is used for reflection and analysis, but it is also a moment or two that can be used to take care of details and save time later on. So, from the planner’s perspective, let’s look at hallmark #2.

With that statement of credibility (I am qualified to talk about planning because I am a neurotic planner in my day-to-day life), let us take a look at how EduKan, a consortium of online campuses for six Kansas community colleges, leads by example when it comes to these accreditation hallmarks. Some institutions will fret and have to hire consultants to comply when these hallmarks become standard, whereas other institutions, such as EduKan, will simply look at the list and say, “We already do that.”

Hallmark #2 reads:
The institution’s plans for developing, sustaining, and, if appropriate, expanding online learning offerings are integrated into its regular planning and evaluation processes (MSCHE Standard 2).

According to the guidelines, analysis and evidence for this hallmark will consider whether:

  • Development and ownership of plans for online learning extend beyond the administrators directly responsible for it and the programs directly using it;
  • Planning documents are explicit about any goals to increase numbers of programs provided through online learning courses and programs and/or numbers of students to be enrolled in them;
  • Plans for online learning are linked effectively to budget and technology planning to ensure adequate support for current and future offerings;
  • Plans for expanding online learning demonstrate the institution’s capacity to assure an appropriate level of quality;
  • The institution and its online learning programs have a track record of conducting needs analysis and of supporting programs.

When I asked EduKan’s director, Mark Sarver, how the consortium addresses the topic of planning, he replied that all aspects of the planning guideline are handled through its Strategic Planning committee. The committee includes representatives from all jobs and roles within the organization: academic deans, advisors, instructors, registrars, other administrators, and more. They devise a three-year strategic plan that is created and agreed upon by all members of the committee. It is all-encompassing, covering goals, budget planning, technology planning, and indicators of success. The stakeholders on the committee then take the plan back to their respective groups and gain approval from those groups. At the end of each three-year cycle, the committee checks the indicators of progress, documents successes, and adjusts or redefines goals for the next three-year plan. Statistics, reporting, and data analysis provide the documentation needed to assure the required level of quality. The process is ongoing, and it includes every role in the EduKan system to gain buy-in from all those with a stake in the success of the online program and the consortium as a whole.

EduKan is not unique in this process. Most institutions have a similar program or committee that examines, develops, implements, and then reviews the overall plan for successfully educating the students who enroll in their courses. If an institution has always been a traditional on-ground campus, that plan will have to expand to include the online goals above. If it already has an online component to its offerings, it will have to be sure it can document that it is addressing the analysis components above. Of the nine hallmarks soon to be part of the accreditation process for online learning programs, number two might be one that you can check off as already in place. Good luck!

-Pamela Kachka, M.A.Ed.-
Academic Consultant

5 May 2010

Outcome Assessment vs. Assignment Grading

Admittedly, this debate will only catch the eyes of those of us who passionately engage in the role that outcome assessment plays in improving curricular and instructional effectiveness. My experience is that in most cases, course assignments predate the integration (or imposition) of outcomes into the course delivery process. As a result there is often loose alignment between assignments and the outcomes that have been associated with a course.

The core question then becomes whether faculty can integrate these two evaluation requirements into a single workflow or if they must be two discrete processes. My colleagues at Texas Christian University’s (TCU) Koehler Center for Teaching Excellence have deeply engaged in this debate with targeted faculty on their campus. They’ve summarized their perspective in a January 2010 newsletter article which is well worth reading. A case is made for maintaining outcome assessment as a unique process because of the aforementioned alignment issue.

For example, a student may turn in an assignment late, which means he or she should receive a lower grade even though the student may have demonstrated mastery of the associated outcomes. Another common situation is that many departments want to include writing-quality criteria in their assignment rubrics even though writing quality may not be a stated outcome for the course.

While the points made in the article are valid, my belief is that ultimately these two processes need to be integrated into a single workflow for faculty. Professors have a limited amount of time to dedicate to the feedback and evaluation steps in the teaching and learning cycle. If we ask them to add outcome assessment on top of an already full workload, the quality of their feedback to students will likely be spread more thinly across a broader range of assessment requirements.
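One way to picture such a single workflow is a rubric in which every criterion counts toward the grade, but only outcome-aligned criteria feed the assessment record. The criteria, point values, and outcome codes below are made up purely for illustration, not drawn from any particular institution's rubric:

```python
# Hypothetical single-workflow rubric: each criterion carries points toward
# the grade and optionally maps to a course learning outcome (CLO). Criteria
# such as lateness or writing quality count toward the grade but map to no
# outcome, so they never distort the evidence of outcome mastery.
rubric = [
    {"criterion": "Thesis addresses the prompt", "points": 40, "outcome": "CLO-1"},
    {"criterion": "Evidence supports claims",    "points": 40, "outcome": "CLO-2"},
    {"criterion": "Writing quality",             "points": 10, "outcome": None},
    {"criterion": "Submitted on time",           "points": 10, "outcome": None},
]

# One student's scores; the assignment was turned in late.
scores = {"Thesis addresses the prompt": 36, "Evidence supports claims": 30,
          "Writing quality": 9, "Submitted on time": 0}

grade = sum(scores[c["criterion"]] for c in rubric)
outcome_evidence = {
    c["outcome"]: scores[c["criterion"]] / c["points"]
    for c in rubric if c["outcome"] is not None
}
print(grade)             # lateness lowers the grade to 75/100
print(outcome_evidence)  # mastery record (CLO-1: 0.9, CLO-2: 0.75) is unaffected
```

A single grading pass thus produces both the grade and the outcome evidence, which is the efficiency argument for integrating the two processes rather than keeping them discrete.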

There are many committed faculty who are willing to go the extra mile but a well-designed course and assessment process can go a long way toward integrating these two components of a course-based evaluation approach. Assignments can be rewritten so that their evaluation criteria more closely align to the stated course learning outcomes. This takes effort too; however, once this alignment has been completed the efficiencies are realized in subsequent terms.

“Rubrics for Grading, Rubrics for Learning Outcomes.” (2010, January). Koehler Center eNewsletter. Retrieved May 4, 2010, from http://www.elearning.tcu.edu/enewsletters/2010/january10.asp#learningoutcomes

Brian McKay Epp | Academic Trainer & Consultant| Pearson eCollege

1 Jul 2009

Momentum Building for Competency-Based Learning

Most of us have heard of the European Union and the establishment of the Euro as a common currency across the continent. Fewer have heard of the Bologna Process, which began in June 1999 with the goal of creating a more standardized higher education system in EU member nations. One initiative has been a tuning project in which academics work to define a common set of learning outcomes by discipline and degree level.

The dialogue continues worldwide about whether a focus on competencies versus assignment grading leads to an improved student learning experience, but most would agree there is a difference. Many students can memorize processes or cram for an exam, but the ability to apply knowledge, skills, and concepts to new situations requires a deeper level of learning, one better suited to competency-based assessment.

A June 4, 2009 blog post on The Chronicle of Higher Education website summarized a recent report, commissioned by the Association of American Medical Colleges and the Howard Hughes Medical Institute, calling for institutions to focus on competencies instead of courses as a way to improve curriculum for pre-med programs and medical schools. The report’s authors convened a group of educators, practitioners, and researchers to define a set of competencies both for entrance into and graduation from medical school. NCATE has already defined similar competencies for educators, and other accreditation bodies are coming on board as well with efforts to agree on a core set of competencies by discipline.

The Lumina Foundation for Education also recently announced a three-state Tuning USA project that seeks to define “the subject-specific knowledge and transferable skills that students in six fields must demonstrate upon completion of a degree program”. This is a bottom-up effort involving faculty, students, and employers. Representatives from Indiana, Minnesota, and Utah will each define student learning outcomes for two disciplines while striving to preserve the academic freedom of individual institutions and faculty to teach to a common set of outcomes in the manner of their own choosing.

Pearson eCollege will continue to monitor this trend and seeks input from our partner institutions for best practices in outcome management and competency based learning.

References

Benelux Bologna Secretariat (n.d.). About the Bologna Process. Retrieved June 12, 2009, from http://www.ond.vlaanderen.be/hogeronderwijs/bologna/about/

Lumina Foundation for Education (2009, April 8). News Release. Retrieved June 12, 2009, from http://www.luminafoundation.org/newsroom/news_releases/2009-04-08.html

Mangan, K. (2009, June 4). 'Competencies,' Not Courses, Should Be Focus of Medical-School Curricula, Report Says. The Chronicle of Higher Education. Retrieved June 12, 2009, from http://chronicle.com/news/article/6588/competencies-not-courses-should-be-focus-of-medical-school-curricula-report-says