Online Blogucation

They’ve left us, but that doesn’t mean they aren’t still students

The National Student Clearinghouse Research Center released a Snapshot Report last week with some interesting new data on student persistence. A copy of the report is available on the Research Center's website. According to the Research Center, "students were counted as having persisted if they: 1) remained enrolled in any postsecondary institution 60 days after the end of the term that included October 15, 2010, or 2) completed a degree within 60 days of the end of the term that included October 15, 2010."

The Research Center was able to identify students persisting in higher education regardless of whether they remained at a single institution or moved among institutions. Accounting for this student movement, researchers found that, overall, 84.7% of students persisted in higher education. Data were further broken down by full- and part-time status, with 92.5% of full-time and 71.2% of part-time students identified as persisting. An examination of persistence rates by type of institution attended revealed that the highest rate (91.4%) was found among students attending private, not-for-profit, 4-year institutions, while the lowest rate (74.9%) was among students attending public, 2-year institutions.

These findings are encouraging: they show that while some students leave an institution before earning a degree or certificate, many continue their education at another institution. These "leavers" are typically viewed as drop-outs, an undesirable outcome from the institution's perspective. But the data reported by the Research Center show that many of these students are, in fact, persisting; they have simply moved from one institution to another.

Institutions participating as data providers to the National Student Clearinghouse are able to use the data to determine how many of their former students are continuing at other institutions and can adjust their own reports on persistence and completion accordingly. The data can also be useful to states and others interested in better understanding the enrollment patterns of today's college students.

The bottom line for those of us interested in seeing all students succeed is that the picture is not as bleak as our previous, incomplete data on persistence would have us believe. Even more importantly, these findings suggest that students are willing to continue their education even if, for whatever reason, they have left one institution at some point along their educational journey.
Kimberly Thompson


Whom will the data serve? Thoughts on Usefulness and Portals for Education

As noted in the article "Salman Khan: The New Andrew Carnegie?":

...knowledge no longer needs to be bound into the paper and cloth of a book but can float free on the wireless waves of the Internet. There’s a lot of junk bobbing in those waves as well — information that is outdated, inaccurate, or flat-out false — so the emergence of online educational materials that are both free of charge and carefully vetted is a momentous development. This phenomenon is all the more significant given the increasing scrutiny directed at for-profit online universities, which have been criticized for burdening students with debt even as they dispense education of questionable usefulness. Websites offering high-quality instruction for free are the Carnegie libraries of the 21st century: portals of opportunity for curious and motivated learners, no matter what their material circumstances (Paul, 2011, para. 6).

I pursue the goal of excelling as an engineer or architect of learning and of being otherwise associated with the proliferation of "portals of opportunity for curious and motivated learners, no matter what their material circumstances" (Paul, 2011, para. 6). In some sense, I am these things already as an Academic Trainer and Consultant with Pearson eCollege. If I had a personal mission statement, it would be worded similarly, and my destiny would be to serve in an industry associated with or embedded within the systems of education.

Yet, that’s not the point of this post!

I found it interesting that the Paul (2011) article quoted above suggests the phenomenon of high-quality, vetted online materials is "all the more significant given the increasing scrutiny directed at for-profit online universities, which have been criticized for burdening students with debt even as they dispense education of questionable usefulness."

Could not many of us argue that public colleges and universities also "dispense education" of "questionable usefulness"? Actually, many might also debate whether education is dispensed or received or shared or…

Wait, that’s not the point of this post either!

So, what is the point you ask?

The point is to consider critically the reality that all colleges and universities - regardless of profit motive or mission statement - are justifiably susceptible to this questioning of usefulness. Knowledge and skills needed for professions and trades evolve quickly, in part because of the globalization of knowledge and the virtual removal of barriers to information through the internet for a large portion of the world's population (though certainly not all of it). Let's question some things...

Could we argue that a nursing or teaching degree in the United States from 1990 is as useful today in the same locale as one from 2010? Does locale matter? How does that impact usefulness?

Does on-the-job, real-world, apprenticeship-style workflow learning add value to the formal education received? If so, how is that measured?

Does a graduate's failure to continue professional or personal development after graduation, whether to become or to remain productive in the workforce as laborer or entrepreneur, necessarily reflect negatively on the value of the educational portals provided by a college or university?

Yes, that’s the point.

While there is much that can be unpackaged from the messages of the selected quote opening this post, the point of this post is to ask you to think critically about what we are measuring when we refer to educational usefulness, how we are measuring it and defining the variables associated with the measures, and ultimately why we are measuring it – whom will the data serve?

Lisa Marie Johnson, Ph.D.
Academic Trainer & Consultant
Pearson eCollege


Paul, A. M. (2011, November 16). Salman Khan: The new Andrew Carnegie? The emergence of free, high-quality online courses could change learning forever. Retrieved from Times Online Magazine, Ideas section.


The Buzz on Assessment

I had the pleasure of attending the 2011 Assessment Institute in Indianapolis this week. The conference is the nation’s oldest and largest event focused exclusively on outcomes assessment in higher education. Administrators, faculty, and student affairs professionals convened to discuss techniques and approaches across outcomes assessment areas. This year, the event featured tracks on Capstone Experience, ePortfolios, and Faculty Development, among others.

I’d like to share a few of the recurring themes I heard and will take with me from the keynotes, workshops, and best-practice sessions. Three themes in particular may serve as markers for some of the noteworthy issues and considerations in the higher education outcomes assessment landscape.

The first two themes are linked in both process and practice, so I will identify them together: 1) Faculty Engagement and 2) Using Results to Inform Improvement Processes. For those of us who have been doing outcomes assessment for any extended period of time, these themes may echo many of the questions and issues, as well as the successes, we have faced.

The engagement of faculty in the assessment process is certainly not a new issue in the practice of assessment. Nevertheless, faculty engagement in outcomes assessment is a reality many institutions are still striving, even stretching, to achieve. The collective understanding among practitioners gathered at the event appears to reveal an arrival (or perhaps, in some cases, a standstill) at a place of resounding confirmation: faculty engagement in the assessment process is a critical component of successful assessment. In her 2010 paper entitled “Opening Doors to Faculty Involvement in Assessment”, Pat Hutchings wrote:

“As Peter Ewell (2009) points out in another NILOA paper, from its early days in higher education, assessment was “consciously separated from what went on in the classroom,” and especially from grading, as part of an effort to promote “objective” data gathering (p. 19). In response, many campuses felt they had no choice but to employ external tests and instruments that kept assessment distinct from the regular work of faculty as facilitators and judges of student learning. In fact, the real promise of assessment—and the area in which faculty involvement matters first and most—lies precisely in the questions that faculty, both individually and collectively, must ask about their students’ learning in their regular instructional work: what purposes and goals are most important, whether those goals are met, and how to do better. As one faculty member once told me, “assessment is asking whether my students are learning what I am teaching.”

Further, presenters suggested that seeking faculty engagement should be seen not as a one-time achievement but as an ongoing and evolving effort that characterizes a campus assessment strategy. Although the issue is not a new one for assessment, the collective sentiment among conference participants is that garnering this engagement remains a key dynamic and often a great challenge. Several presenters urged the institutions represented at the conference to engage in cross-institutional dialogue and share strategies for fostering a deeper degree of faculty engagement.

The second recurring theme centers on a question about the value, strategy, and purpose of assessment efforts: What’s it all for? Assessment is hard work. The growing sentiment appears to be a desire to see campus assessment efforts translate into actual impact on student learning, beyond the collection of data and documentation for accreditation and/or certification. This pull for results that impact student learning is a call to move beyond data collection and assessment planning to the informed, strategic improvement of teaching and learning based on the data. To make assessment more useful, we must include within our strategy an intentional approach to leveraging data and documentation to help bridge the gap between our current and improved realities. This process must be ongoing. And it undoubtedly must include faculty.

Finally, the third takeaway comes in the form of a resource. The National Institute for Learning Outcomes Assessment (NILOA) had a strong presence at the 2011 Assessment Institute. Several of the organization’s staff and associates were keynote presenters, and they include a notable group of internationally recognized experts on assessment. NILOA presenters pointed conference participants to what they called the ‘crown jewel’ of the organization’s efforts: a recently enhanced and robust website featuring a collection of papers, articles, presentations, websites, and survey results compiled in alignment with the organization’s vision for discovering and adopting promising practices in the assessment of college student learning outcomes. Reviewing the website will quickly reveal its valuable contribution to the field of assessment and to current issues, including those I’ve highlighted from the conference. Take a moment to explore this great resource on the NILOA website.

It was certainly a rich experience to attend the conference and have the opportunity to share with institutions and hear the collective voice of higher education assessment practitioners.

Rachel Cubas
Academic Trainer & Consultant
Assessment & Analytics Group | Academic Training & Consulting (ATC)


Hutchings, P. (2010). Opening Doors to Faculty Involvement in Assessment. National Institute for Learning Outcomes Assessment.


Collaborate and customize!

As we worked with our Educational Partners on implementing their outcome assessment plans, from the macro to the micro level and across distinctive academic cultures, a common theme emerged: the focus was on organizing and collecting evidence, but there was no formal plan for action steps.

Our Assessment & Analytics team worked closely with several of our Educational Partners to customize templates, devise online faculty discussion forums, and offer other technology-enhanced solutions to serve as catalysts for improving curriculum and instruction. As discussion progressed, it became evident that there was no “one way” to accommodate the unique requirements of all colleges and universities, but we were providing methods that most could use and customize.

The next step for us grew into Assessment Consulting Modules that could be designed around the needs of our Educational Partners. These included a “backwards” planning method to help ensure that the multitude of assessment data institutions collect would actually help answer their ultimate questions and lead to evidence-driven action plans.

From beginning to end the Assessment Consulting Modules are designed to lead participants through best practices of student learning outcome (SLO) assessment. The series begins by exploring why we engage in assessment and by defining a roadmap to creating a culture of evidence on campuses. Participants will then have opportunities to develop and connect SLOs and rubrics that apply directly to their unique curriculum. Our end goal is to provide specific strategy and design suggestions that translate into meaningful and sustainable assessment plans.

Following is a brief description of a few of the modules:

Curriculum Mapping
This module looks at the relational aspect of student learning outcomes within an academic institution. Intentional and explicit alignment of each outcome from the discrete course level to the increasingly broader, program, department, campus or institutional level is examined. The mapping concept is designed to provide opportunities and evidence of student learning at various stages of the curriculum.

Rubric Design
Measuring and assessing students’ demonstration of learning through the use of rubrics is the focus of this module. It presents how to design an assessment rubric using explicit criteria statements and how to identify examples of student performance at varying mastery levels for each outcome. Comprehensive, clear rubric exemplars are included.

SLOs and Impact on Course Design
Quality course SLOs are the foundation for assessing student learning. Quality assessment of student performance requires those SLOs to be purposefully aligned to the learning activities, assessment activities and schedule of the course. In this module participants use a modified card-sort method to analyze the relationship of the course’s design to the SLOs and inform design changes to optimize student learning.

Fostering Faculty Ownership of Campus Assessment Culture
Assessing students’ mastery of learning outcomes falls primarily within the scope of faculty responsibility, so it is critical that faculty be integral stakeholders in the development of campus assessment plans. Faculty engagement is further fostered by focusing on improving the student learning experience. This module provides tangible actions for academic leaders working to integrate faculty into the development of a campus assessment culture.

We are no longer living in academic silos; we must use web services and other technology-enhanced tools (along with colleagues) to link information much as we would snap together Lego blocks. The design can be simple or very complex, and the creativity is unlimited if we begin by understanding the connections within assessment and then keep linking until we have designed an application that fits real-time teaching and learning. Collaborate and customize!

Karen R. Owens, Ph.D.
Higher Education Assessment Consultant
Assessment & Analytics Group
Pearson eCollege



Are our students learning what we’re teaching?

Assessment is not a four-letter word, but among many higher education faculty it might as well be. The current “show me” tide in assessment has alienated faculty. The approach has often followed a top-down model, and it isn’t working.

Let’s listen and learn… 

I know my students. I know my subject matter. I can tell you which students “get it” and which ones “don’t”. I am in the classroom.

Here is what good teachers do. We start with intended student learning outcomes that allow us, as instructors, to design our curriculum with a focus on guiding student learning and not just on delivering course content.

Critical thinking skills are essential in all disciplines of higher education, yet how often do students enter our courses without bringing the tools they have acquired in their cumulative learning? Making this linkage for students requires that our teaching be not only systematic but behaviorally systemic. We push students to apply their knowledge and skills throughout all parts of their lives. The trend in higher education is no longer about “seat time” or “activity minutes” but rather about student demonstration of learning, and we get it!

So now you ask us, “How will we know if the students learned what we had hoped? How will they know?”

The progression of gathering information from course assignments, discussion threads, and exams extends to the improvement of subsequent learning; this is how we facilitate learning. Formative assessment allows learning to be a process of improvement: it encourages students to build on previous learning and to transfer that learning into new situations. Summative assessment, on the other hand, evaluates an end product or process. In Levels of Assessment: From the Student to the Institution, Miller and Leskes (2005) explain:

“While the holistic assignment of grades (an A, B or F) is a way to evaluate student work, such grades represent averaged estimates of overall quality and communicate little to students about their strengths, weaknesses, or ways to improve. A better way to aid learning is through analytical assessment, which can be as simple as written comments on student papers or as structured as the use of a detailed rubric for an assignment; such analysis can reveal precisely which concepts a student finds challenging.”

Using the student information we collect (assess) to inform our curriculum design means improved student learning within and across courses, and as good instructors this is what we do!

So, is this about better teaching or better learning? You be the judge. But we will tell you it is not about the extra work we associate with an imposed ‘culture of evidence’ called assessment! It is about promoting collaborative work among all stakeholders to benefit our students!

Karen R. Owens, Ph.D.
Higher Education Assessment Consultant
Pearson eCollege

Miller, R., & Leskes, A. (2005). Levels of Assessment: From the Student to the Institution. A Greater Expectations Publication. Association of American Colleges and Universities (AAC&U).


Outcome Assessment vs. Assignment Grading

Admittedly, this debate will only catch the eyes of those of us who are passionate about the role outcome assessment plays in improving curricular and instructional effectiveness. My experience is that, in most cases, course assignments predate the integration (or imposition) of outcomes into the course delivery process. As a result, there is often only loose alignment between assignments and the outcomes associated with a course.

The core question then becomes whether faculty can integrate these two evaluation requirements into a single workflow or whether they must remain two discrete processes. My colleagues at Texas Christian University’s (TCU) Koehler Center for Teaching Excellence have deeply engaged in this debate with targeted faculty on their campus. They summarized their perspective in a January 2010 newsletter article that is well worth reading. It makes a case for maintaining outcome assessment as a distinct process because of the aforementioned alignment issue.

For example, a student may turn in an assignment late, which means s/he should receive a lower grade even though the student may have demonstrated mastery of the associated outcomes. Another common situation is that many departments want to include writing-quality criteria in their assignment rubrics even though writing quality may not be a stated outcome for the course.

While the points made in the article are valid, my belief is that ultimately these two processes need to be integrated into a single workflow for faculty. Professors have a limited amount of time to dedicate to the feedback and evaluation steps in the teaching and learning cycle. If we ask them to add outcome assessment on top of an already full workload, their feedback to students will likely be spread more thinly across a broader range of assessment requirements.

There are many committed faculty who are willing to go the extra mile, but a well-designed course and assessment process can go a long way toward integrating these two components of a course-based evaluation approach. Assignments can be rewritten so that their evaluation criteria align more closely with the stated course learning outcomes. This takes effort too; however, once the alignment has been completed, the efficiencies are realized in subsequent terms.

Rubrics for Grading, Rubrics for Learning Outcomes. (2010, January). Koehler Center eNewsletter.

Brian McKay Epp | Academic Trainer & Consultant| Pearson eCollege


2009 Conference Reflections

Over the past year I’ve attended academic conferences in the U.S., Mexico, Spain, and Bahrain. Here are a few key takeaways I can offer from my perspective as a higher education assessment consultant.

Academics worldwide are debating the scholarship of teaching and learning quite intensely, due largely to the disruptive change brought by the online for-profits, the ubiquitous acceptance of social networking, and the reality of user-created content. An article in last week’s The Chronicle of Higher Education nicely summarized the online for-profit sector’s impact in challenging all colleges and universities to do a better job not only of creating and tracking student learning outcomes but also of using the data collected to refine curriculum and instruction, with an eye toward improving the student learning experience. Most online programs are able to track all activity in a course, including page visits, class discussions, assignment uploads, exams, and grades. They are also able to standardize learning outcomes across all sections of a course to ensure comparability of data. This is the point where traditional academics will raise the academic freedom argument; however, I’ve seen traditional faculty agree on a common set of outcomes, and even common assessment rubrics, even though the assignments they develop to assess student progress may differ by instructor.

I also just returned from the SACS-Commission on Colleges Annual Meeting in Atlanta. I noticed that many universities were talking about course-level assessment of student learning outcomes this year, which was new. This is an area I’ve been focusing on for the past 18 months, so it was nice to see the academy starting to recognize the importance of getting more granular in the assessment of student learning. Previously, nearly everyone was satisfied with program-level assessment. Program assessment is still important, but it should be triangulated with course-level assessment data along with indirect measures such as NSSE, CCSSE, or Noel-Levitz. Many institutions also participate in either the Voluntary System of Accountability (VSA) or the University and College Accountability Network.

The rise of social networking and user-created content is another salient takeaway this year. If Web 2.0 or educational gaming appeared in a session title, you could count on a packed room. This was the case worldwide. These technologies are moving beyond the early adopter stage and into the mainstream. It is important for digital immigrants (those born before 1995) to recognize that digital natives are used to processing multiple channels at once and having just-in-time access to information. There are even theories circulating that the structure of a digital native’s brain is different. This means we must adapt our method of teaching to be more that of a facilitator than of a lecturer who disseminates knowledge. During a Web 2.0 presentation in Guadalajara, Mexico last week, I challenged participants to start using at least one new Web 2.0 application first in their personal lives and then to try to integrate the application into their teaching in the spring semester. I’d be happy to share my presentation with anyone who’s interested; just email me.

It’s truly an exciting time in higher education. The next decade is going to bring about dramatic changes at colleges and universities. I look forward to participating in dialogue with many of you as we do our best to make education more accessible and effective for both learners and employers.

Brian McKay Epp
Academic Trainer and Consultant


Momentum Building for Competency Based Learning

Most of us have heard of the European Union and the establishment of the Euro as a common currency across the continent. Fewer have heard of the Bologna Process, which began in June 1999 with the goal of creating a more standardized higher education system in EU member nations. One initiative has been a tuning project in which academics work to define a common set of learning outcomes by discipline and degree level.

The dialogue continues worldwide about whether a focus on competencies, as opposed to assignment grading, leads to an improved student learning experience, but most would agree there is a difference. Many students are able to memorize processes or cram for an exam, but the ability to apply knowledge, skills, and concepts to new situations requires a deeper level of learning, one better suited to competency-based assessment.

A June 4, 2009 blog post on The Chronicle of Higher Education website summarized a recent report, commissioned by the Association of American Medical Colleges and the Howard Hughes Medical Institute, calling for institutions to focus on competencies instead of courses as a way to improve curricula for pre-med programs and medical schools. The report’s authors convened a group of educators, practitioners, and researchers to define a set of competencies for both entrance into and graduation from medical school. NCATE has already defined similar competencies for educators, and other accreditation bodies are coming on board as well with efforts to agree on a core set of competencies by discipline.

The Lumina Foundation for Education also recently announced a three-state Tuning USA project that seeks to define “the subject-specific knowledge and transferable skills that students in six fields must demonstrate upon completion of a degree program”. This is a bottom-up effort involving faculty, students, and employers. Representatives from Indiana, Minnesota, and Utah will each define student learning outcomes for two disciplines while striving to preserve individual institutions’ and faculty members’ academic freedom to teach to a common set of outcomes in the manner of their own choosing.

Pearson eCollege will continue to monitor this trend and seeks input from our partner institutions for best practices in outcome management and competency based learning.


Benelux Bologna Secretariat. (n.d.). About the Bologna Process. Retrieved June 12, 2009.

Lumina Foundation for Education. (2009, April 8). News release. Retrieved June 12, 2009.

Mangan, K. (2009, June 4). 'Competencies,' not courses, should be focus of medical-school curricula, report says. The Chronicle of Higher Education. Retrieved June 12, 2009.