Online Blogucation

Do Web 2.0 tools really work?

In my work, I help higher education instructors hone their online teaching skills. Not a class goes by without someone posing the question: do Web 2.0 tools really work? Their hesitation is warranted: they are bombarded daily with proclamations of Web 2.0 as education's magic bullet. Is it? Or is it just another educational fad that will fade as fast as it arrived? There is actually a dearth of rigorous research supporting such emphatic rhetoric. But I thought there might be some, and I set out to find it.

Before I tell you about the results of my literature search, let me tell you how I embarked on it. The variety of Web 2.0 tools and the number of educational variables are enormous, so expecting to cover it all in a single blog post would have been preposterous. I had to be selective. I wanted true comparisons; that is, studies that controlled variables and in which at least two groups received comparable instruction. In other words, I excluded studies where one group used Web 2.0 tools while the other received no supplementary instruction at all, as well as studies with no pre-treatment measurement. Further, I selected quantitative studies because I was interested in measures of student achievement; studies that focused on students' engagement and perception, and qualitative studies, were set aside. Studies in which the timing of the treatment (but not the treatment itself) varied were also excluded. Finally, I focused on higher education and typical academic core coursework published during the last three years. Let me tell you, though, that these constraints almost put me on the verge of ending this blog right here for lack of studies, so I refrained from any further limitations.

The first study I found was Malhiwsky's (2010) Ph.D. dissertation. It compared the achievement of community college students in the US taking ten-week online Spanish classes. All students had to complete assignments related to authentic practices in language learning, but one group did so by producing podcasts and videos while the other group did not use Web 2.0 tools. The two groups showed no significant difference in pre-test scores, but post-test scores for the Web 2.0 group were significantly higher than those of the non-Web 2.0 group. The author also administered the Classroom Community Survey, which revealed a higher level of classroom community in the Web 2.0 courses. Interestingly, though, both groups expressed the same level of self-reported learning; that is, both had a similar perception of the extent of their learning.

Another study, conducted by O'Bannon et al. (2011, and personal communication by email, 11/28/2012), compared pre-service teachers' achievement when visually enhanced podcasts were used instead of lectures in a semester-long mandatory technology course at a US university. All students used a learning management system to submit assignments, participate in discussions, take quizzes, and access course resources. They also read textbook chapters, watched the instructor's demonstrations, participated in hands-on practice, and developed a project. Half of the students attended lectures, taking notes on the slideshow handouts, while the other half received the podcast treatment. Again, pre-test scores were not significantly different, but post-test scores were significantly higher for the podcast group than for the lecture group. Students tended to like the podcasts and felt they were reasonably effective, but they disagreed that podcasts should replace lectures.

Su et al. (2010) conducted a quasi-experiment during an introductory computer class in Taiwan. The control class used a Web-based discussion board that did not include the article under discussion. The experimental class used a Web 2.0 collaborative annotation system that included the article and let students add markups to it; the tool also offered annotation searching and could host discussion forums. Pre-test scores (yes, you guessed it) showed no statistically significant differences between the classes. During the fifteen-week course, students took three low-stakes tests plus a midterm and a final. The first test showed no statistically significant difference between the classes, which the authors attributed to the experimental group's lack of experience with the Web 2.0 tool, but on the other two low-stakes tests the experimental group scored statistically higher than the control group. On the midterm and final exams the two groups showed no statistically significant differences, which the authors argued was due to more "catching up" by the students in the control group.

In my quest, I did find a few other studies (with emphasis on "a few"). Along the way, I read many more and concluded that while I still do not believe in magic bullets, Web 2.0 tools do a great deal when properly implemented. And this is not said lightly, because they do present challenges. Yet I have yet to find an article describing a negative effect, and in the overwhelming majority of cases the effect was positive. Sometimes more training was required than initially thought, both for students to become familiar with the tool and for instructors to learn how to teach with it. I also found that, as effective as Web 2.0 tools might be, students see them not as replacements but as additions. Thus, they do not necessarily feel that they learn more when using Web 2.0 tools, and may merely express a preference for them, although test scores may vouch for an actual learning effect. This last point may speak to a smooth integration of Web 2.0 tools and instruction: students using them may be learning more easily, without much catching up to do for high-stakes exams. From my readings, I also formed the impression that the most salient value of Web 2.0 tools in learning is their collaborative nature: students feel the responsibility of being members of a learning community that is to produce an authentic product for authentic audiences, and to do so in a 21st-century kind of way.

Yes, there is one caveat: all the studies I reviewed were limited in duration and may have suffered from the novelty effect, i.e. the tendency for individuals to perform better at first when new technology is involved, out of interest in the technology itself rather than the content it delivers. But until we have longitudinal studies, and until a new contestant enters this arena, we have to go with what we have, and this juror's verdict is in: Web 2.0 tools do work.

Laura Moin, Ph.D.

Academic Trainer & Consultant, Teaching & Learning Group


Malhiwsky, D. R. (2010). Student Achievement Using Web 2.0 Technologies: A Mixed Methods Study. Open Access Theses and Dissertations from the College of Education and Human Services. Paper 58.

O'Bannon, B. W., Lubke, J. K., Beard, J. L. & Britt, V. G. (2011). Using podcasts to replace lecture: Effects on student achievement. Computers & Education, 57, 1885-1892.

Su, A. Y. S., Yang, S. J. H., Hwang, W. Y., & Zhang, J. (2010). A Web 2.0-based collaborative annotation system for enhancing knowledge sharing in collaborative learning environments. Computers & Education, 55, 752-766.


Let’s Talk About: “What’s Going Well?”

I saw this blog recently in the Chronicle of Higher Education and want to share it with you. It’s short, so rather than trying to summarize, I’ve copied it in its entirety.

What’s Going Well?
March 21, 2012
By Natalie Houston

My training and experience as both a teacher of literature and as a personal productivity coach have shown me time and time again the value of asking simple questions. A good question doesn’t have to be long or complicated. A good question shouldn’t be an argument misleadingly packaged as a query. A good question often opens up other questions.

So here’s today’s question: what’s going well for you right now?

I like this question for several reasons:

Most people don’t spend enough time thinking or talking about what’s going well. At a deep neurological level, our brains are designed to pay more attention to potential danger than to neutral or beneficial things. Learning to pay more attention to the good stuff, even just with simple journaling exercises or breathwork, can help create new, more positive neural pathways.

Most people find it easier to focus on or complain about what’s not going well. I’ve written about this before, in relation to the social scripts that academics often engage in. (Have you heard anyone say, “oh, I didn’t get enough done over spring break” lately?) Rewriting those scripts has the power to shift your energy and that of people around you.

It’s also the case that our intellectual training tends to be organized around critique and competition. It’s much more challenging to sustain a conversation about what you liked and agree with in a text than about what you disagree with (try it with your next graduate seminar and you’ll see what I mean). There’s nothing wrong with intellectual critique – but it’s good to experience appreciation and celebration too, of yourself and others.

We can learn from what’s going well. By exploring what’s going well, you can discover core values and habits that you can extend from one area of your life to another. Do you prefer to be alone or with others? What do you find motivating? What helps you be persistent? Whether it’s writing, exercising, or cleaning the garage that you want to improve, you can apply strategies and ideas from some other area in which you feel more successful.

If we take this article to heart, and think about how we can apply this to our own work in an academic setting, what might be some questions we can pose to our students? I can think of a few examples.

Let’s imagine the beginning of the class period (for face-to-face) or a discussion item in an online course immediately following a lengthy reading assignment. We typically ask students if there was anything they found confusing or didn’t understand in the assignment. What if we turned that around and instead asked our students to name one thing they really understood well, and to give us a summary of their understanding of that one thing? This serves a similar purpose, in that we would still be getting information about what our students learned from the assignment. It also provides a nice review and can help students who may not have understood the item.

I liked Natalie’s suggestion that journaling could “help create new, more positive neural pathways.” I wonder what the result might be if we asked students to keep a journal in which they must identify things that are going well in the course, with a focus on how they personally are doing well. Perhaps by asking our students to focus on their own feelings about themselves as learners, and by targeting what’s working and going well, students may come to see themselves in a more positive light, and this might improve their confidence. It might also help students better understand the important role they must play in their learning and thus take more responsibility for it.

I’m sure you can think of many other ways to use this approach with your students. Please add your ideas or experiences with using this approach with your students (or coworkers). Tell us what is going well.

Kimberly Thompson
Assessment Consultant
Academic Training & Consulting
Pearson eCollege


Philosophy of Teaching Twitter Challenge!

This post could have been titled “What’s Your Teaching Philosophy in 110 Characters or Less?” because we’re asking you to participate in a challenge related to developing and succinctly crafting a version of your philosophy of teaching!

The Challenge*

Please review this post and the examples provided below about writing a brief teaching philosophy. Then we challenge you, our readers, to try it for yourselves! We would like to receive your submissions via our Twitter account, using a hashtag and a mention of our Twitter name in your post. So, how do you do it? When posting your 110-character philosophy of teaching to Twitter, please include the following so we can follow your responses: @atcecollege #teachphilosophy
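As an aside, here is a small sketch of where the 110-character figure plausibly comes from, assuming the 140-character tweet limit of the era (my assumption, not something stated in this post):

```python
# The required mention and hashtag, plus one separating space, use up the
# rest of a 140-character tweet, leaving 110 characters for the philosophy.
TWEET_LIMIT = 140  # assumed historical Twitter limit
tag = "@atcecollege #teachphilosophy"

remaining = TWEET_LIMIT - len(tag) - 1  # -1 for the space before the tag
print(remaining)  # 110
```

In other words, the challenge's budget appears to be exactly what is left over after the tracking tag is appended.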

What is a Philosophy of Teaching? Why Should I Write One?

Though many formal teaching philosophy statements run two or more pages, having even a brief framework of your philosophy can be beneficial. According to Chapnick (2009), “creating a philosophy of teaching and learning statement is ultimately both personally and professionally rewarding, and is therefore well worth the effort” (p. 4). Defining our philosophy of teaching helps to provide a framework for our practice as educators.

Do you believe timeliness and access are important, as Stevens III (2009) does in this example of his principles? “The principles I follow are simple: be accessible to students and treat them with respect. Accessibility means being available not just during class and office hours, but at any reasonable time. I encourage them to call me at home, and I promise them a response to email messages within 24 hours” (p. 11). If so, your philosophy would feature timeliness and access as important to you, and in your practice you would work to uphold these principles.

What the philosophy includes might reflect a diverse set of information and depends on the audience. The Teaching Center (2007) offers these as guiding questions: (1) Why do you teach? (2) What do you teach? (3) How do you teach? and (4) How do you measure your effectiveness? Let’s apply that framework here in our challenge!

Can I See an Example?

Of course! Following the model described above, here are some examples:

Inspiring humanity social science and education engaging and interactive
authentic experience designs @atcecollege #teachphilosophy

Learning experiencing sharing knowing doing frequent engagement
anywhere anytime @atcecollege #teachphilosophy

Lisa Marie Johnson, Ph.D.
Academic Trainer & Consultant
Pearson eCollege


  • Do you want to follow the tweets associated with @atcecollege or the tag #teachphilosophy? You can search without a twitter account by going to the Twitter Search page:
  • Hashtags on Twitter allow for “tagging” a post (tweet), which makes it easier to search for on Twitter. When you include a Twitter name preceded by the at-symbol (@), it is a mention of that account, and your post shows up in the list of tweets that refer to that account.
  • If you do not have a Twitter account, but are on Facebook, you could instead post to our ATC eCollege Facebook account in response to the comment about this post:


Chapnick, A. (2009). How to write a philosophy of teaching and learning statement (pp. 4-5). Faculty Focus Special Report - Philosophy of Teaching Statements: Examples and Tips on How to Write a Teaching Philosophy Statement. Magna Publications. Available from

Stevens III, R. S. (2009). Education as becoming: A philosophy of teaching (p. 11). Faculty Focus Special Report - Philosophy of Teaching Statements: Examples and Tips on How to Write a Teaching Philosophy Statement. Magna Publications. Available from

The Teaching Center (2007). Writing a teaching philosophy statement. Available from the Washington University in St. Louis:


They’ve left us, but that doesn’t mean they aren’t still students

The National Student Clearinghouse Research Center released a Snapshot Report last week with some interesting new data on student persistence. To obtain a copy of the report, visit their website. According to the Research Center, "students were counted as having persisted if they: 1) remained enrolled in any postsecondary institution 60 days after the end of the term that included October 15, 2010, or 2) completed a degree within 60 days of the end of the term that included October 15, 2010."

The Research Center was able to identify students persisting in higher education regardless of whether they remained at a single institution or moved among institutions. Accounting for this movement, researchers found that overall, 84.7% of students persisted in higher education. The data were further broken down by full- and part-time status, with 92.5% of full-time and 71.2% of part-time students identified as persisting. An examination of persistence rates by type of institution revealed that the highest rate (91.4%) was among students attending private, not-for-profit, 4-year institutions, while the lowest (74.9%) was among students attending public, 2-year institutions.
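As a back-of-the-envelope check (my own arithmetic, not a figure from the report), the three headline rates hang together because the overall rate is a weighted average of the full-time and part-time rates, which lets us infer the full-time share of the cohort:

```python
# The overall persistence rate is a weighted average of the full-time and
# part-time rates, so the three figures together imply the full-time share
# of the student population.
full_time_rate = 0.925   # full-time persistence (92.5%)
part_time_rate = 0.712   # part-time persistence (71.2%)
overall_rate = 0.847     # overall persistence (84.7%)

# overall = s * full_time + (1 - s) * part_time  =>  solve for s
share = (overall_rate - part_time_rate) / (full_time_rate - part_time_rate)
print(f"Implied full-time share of students: {share:.1%}")
```

By this arithmetic, roughly 63% of the cohort would have been enrolled full-time; the report's actual enrollment mix is not quoted here, so treat this purely as an illustration of how the three figures relate.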

These findings are encouraging: they show that while some students leave an institution before earning a degree or certificate, many continue their education at another institution. These "leavers" are typically viewed as dropouts, an undesirable outcome from the institution's perspective. But thanks to the data reported by the Research Center, we can see that many of these students are, in fact, persisting, having simply moved from one institution to another.

Institutions participating as data providers to the National Student Clearinghouse can use the data to determine how many of their former students are continuing at other institutions, and can adjust their own reports on persistence and completion accordingly. The data can also be useful to states and others interested in better understanding the enrollment patterns of today's college students.

The bottom line for those of us interested in seeing all students succeed is that the picture is not as bleak as our previous incomplete data on persistence would have us believe. And even more importantly, these findings suggest that students seem willing to continue their education even if, for whatever reasons, they have left one institution at some point during their education journey.
Kimberly Thompson


Whom will the data serve? Thoughts on Usefulness and Portals for Education

As noted in the article Salman Khan: The New Andrew Carnegie? -

...knowledge no longer needs to be bound into the paper and cloth of a book but can float free on the wireless waves of the Internet. There’s a lot of junk bobbing in those waves as well — information that is outdated, inaccurate, or flat-out false — so the emergence of online educational materials that are both free of charge and carefully vetted is a momentous development. This phenomenon is all the more significant given the increasing scrutiny directed at for-profit online universities, which have been criticized for burdening students with debt even as they dispense education of questionable usefulness. Websites offering high-quality instruction for free are the Carnegie libraries of the 21st century: portals of opportunity for curious and motivated learners, no matter what their material circumstances (Paul, 2011, para. 6).

I pursue the goal of excelling as an engineer or architect of learning, and of being otherwise associated with the proliferation of "portals of opportunity for curious and motivated learners, no matter what their material circumstances" (Paul, 2011, para. 6). In some sense, I am these things already as an Academic Trainer and Consultant with Pearson eCollege. If I had a personal mission statement, it would be worded similarly, and my destiny would be to serve in an industry associated with or embedded within the systems of education.

Yet, that’s not the point of this post!

I found it interesting that the Paul (2011) article quoted above suggests the phenomenon of high-quality, vetted online materials is "all the more significant given the increasing scrutiny directed at for-profit online universities, which have been criticized for burdening students with debt even as they dispense education of questionable usefulness."

Could not many of us argue that public colleges and universities also "dispense education" of "questionable usefulness"? Actually, many might also debate whether education is dispensed or received or shared or…

Wait, that’s not the point of this post either!

So, what is the point you ask?

The point is to consider critically the reality that all colleges and universities - regardless of profit motive or mission statement - are justifiably susceptible to this questioning of usefulness. The knowledge and skills needed for professions and trades evolve quickly, due in part to the globalization of knowledge and the virtual removal of barriers to information through the internet for a large portion of the world's population (but certainly not all of it!). Let's question some things...

Could we argue that a nursing or teaching degree in the United States from 1990 is as useful today in the same locale as one from 2010? Does locale matter? How does that impact usefulness?

Does on-the-job, real-world, apprenticeship-style workflow learning add value to the formal education received? If yes, how is that measured?

Does a graduate's failure to continue professional or personal development after graduation, in order to become or remain productive in the workforce as a laborer or entrepreneur, necessarily reflect negatively on the value of the educational portals provided by a college or university?

Yes, that’s the point.

While there is much that can be unpacked from the messages of the quote opening this post, the point here is to ask you to think critically about what we are measuring when we refer to educational usefulness, how we are measuring it and defining the variables associated with the measures, and ultimately why we are measuring it: whom will the data serve?

Lisa Marie Johnson, Ph.D.
Academic Trainer & Consultant
Pearson eCollege


Paul, A. M. (2011, November 16). Salman Khan: The new Andrew Carnegie? The emergence of free, high-quality online courses could change learning forever. Retrieved from Time Online Magazine, Ideas section (link opens new page):


The Buzz on Assessment

I had the pleasure of attending the 2011 Assessment Institute in Indianapolis this week. The conference is the nation’s oldest and largest event focused exclusively on outcomes assessment in higher education. Administrators, faculty, and student affairs professionals convened to discuss techniques and approaches across outcomes assessment areas. This year, the event featured tracks on Capstone Experience, ePortfolios, and Faculty Development, among others.

I’d like to share a few of the recurring themes I heard and will take with me from the keynotes, workshops, and best-practice sessions. Specifically, I will share three themes and considerations, which may serve as markers for some of the noteworthy issues in the higher education outcomes assessment landscape.

The first two themes are indeed linked in both process and practice, so I will identify both of them at this point. They are: 1) Faculty Engagement and 2) Using Results to Inform Improvement Processes. For those of us who have been doing outcomes assessment for any extended period of time, these themes may echo many of the questions and issues as well as the successes we have faced.

The engagement of faculty in the assessment process is certainly not a new issue in the practice of assessment. Even so, faculty engagement in outcomes assessment is a reality many institutions are still striving, and even stretching, to achieve. The shared understanding among practitioners gathered at the event appears to reveal an arrival, or in some cases a standstill, at a place of resounding confirmation: faculty engagement in the assessment process is a critical component of successful assessment. In her 2010 paper entitled “Opening Doors to Faculty Involvement in Assessment,” Pat Hutchings wrote:

“As Peter Ewell (2009) points out in another NILOA paper, from its early days in higher education, assessment was “consciously separated from what went on in the classroom,” and especially from grading, as part of an effort to promote “objective” data gathering (p. 19). In response, many campuses felt they had no choice but to employ external tests and instruments that kept assessment distinct from the regular work of faculty as facilitators and judges of student learning. In fact, the real promise of assessment—and the area in which faculty involvement matters first and most—lies precisely in the questions that faculty, both individually and collectively, must ask about their students’ learning in their regular instructional work: what purposes and goals are most important, whether those goals are met, and how to do better. As one faculty member once told me, “assessment is asking whether my students are learning what I am teaching.”

Further, the notion was put forward that seeking faculty engagement should not be seen as a one-time achievement but as an ongoing and evolving effort that characterizes a campus assessment strategy. Although the issue is not a new one for assessment, the shared sentiment among conference participants is that garnering this engagement remains a key dynamic and often a great challenge. Several presenters urged the institutions represented at the conference to engage in cross-institutional dialogue and share strategies for fostering a deeper degree of faculty engagement.

The second recurring theme centers on a question of the value, strategy, and purpose of assessment efforts: what's it all for? Assessment is hard work, and the growing sentiment appears to be a desire to see campus assessment efforts translate into actual impact on student learning, beyond the collection of data and documentation for accreditation and/or certification. This pull for results that impact student learning is a call to move beyond data collection and assessment planning to the informed, strategic improvement of teaching and learning based on the data. To make assessment more useful, we must build into our strategy an intentional approach to leveraging data and documentation to bridge the gaps between our current and improved realities. This process must be ongoing. And it undoubtedly must include faculty.

Finally, the third takeaway comes in the form of a resource. The National Institute for Learning Outcomes Assessment (NILOA) had a strong presence at the 2011 Assessment Institute. Several of the organization’s staff and associates were keynote presenters, and they include a notable group of internationally recognized experts on assessment. NILOA presenters pointed conference participants to what they called the ‘crown jewel’ of the organization’s efforts: a recently enhanced and robust website featuring a collection of papers, articles, presentations, websites, and survey results compiled in alignment with the organization’s vision for discovering and adopting promising practices in the assessment of college student learning outcomes. Reviewing the website will quickly reveal its valuable contribution to the field of assessment and to current issues, including those I’ve highlighted from the conference. Take a moment to explore this great resource by visiting the NILOA website.

It was certainly a rich experience to attend the conference and have the opportunity to share with institutions and hear the collective voice of higher education assessment practitioners.

Rachel Cubas
Academic Trainer & Consultant
Assessment & Analytics Group | Academic Training & Consulting (ATC)


Hutchings, P. (2010). Opening Doors to Faculty Involvement in Assessment. National Institute for Learning Outcomes Assessment.


Hallmark #2 – Planning

Fasten your seatbelt and hold on to your hat! This week we are going to talk about planning with regard to the Middle States accreditation plan. While I say that a bit facetiously, it is actually one small piece of a bigger, more exciting canvas. By standardizing accreditation requirements nationwide for higher education online learning programs, those of us firmly planted in online learning can take a huge leap forward in demonstrating (with statistics, research, and data) that what we are doing is not only catering to a growing market’s demands but doing so because the pedagogy and the statistics show that students in fully online programs are learning, competing with, and often exceeding their counterparts.

There are nine hallmarks in the Middle States accreditation plan, and today we look closely at #2, Planning. As a side note, here is some background on this series of blogs: after an introduction to the overall Distance Education Programs--Interregional Guidelines for the Evaluation of Distance Education (Online Learning), each person on our team (the Academic Consulting team at Pearson eCollege) took a hallmark to focus on and fully explain. In the draw, I got #2, Planning.

Now, as I plan for this blog (I deliberately chose the word plan, in case you missed that), I can see how apropos it is that I have the planning topic. I am a planner to the point of clinical neurosis, some might say. I am the person who, when the seatbelt light goes off as the airplane pulls into the gate, gets up and finds my car keys and credit card, so that when I get off the plane and reach the end of the very long walk to my car, I can jump in, start the car, and proceed to pay for parking. Downtime is for reflection and analysis, but it is also a moment or two that can be used to take care of details and save time later on. So, from the planner’s perspective, let’s look at Hallmark #2.

With that statement of credibility (I am qualified to talk about planning because I am a neurotic planner in my day-to-day life), let us take a look at how EduKan, a consortium of online campuses for six Kansas community colleges, leads by example when it comes to these accreditation hallmarks. Some institutions will fret and have to hire consultants to comply when this becomes standard, whereas others, such as EduKan, will simply look at the list and say: “we already do that.”

Hallmark #2 reads:
The institution’s plans for developing, sustaining, and, if appropriate, expanding online learning offerings are integrated into its regular planning and evaluation processes (MSCHE Standard 2).

From the guidelines, analysis and evidence for this hallmark will review whether:

  • Development and ownership of plans for online learning extend beyond the administrators directly responsible for it and the programs directly using it;
  • Planning documents are explicit about any goals to increase numbers of programs provided through online learning courses and programs and/or numbers of students to be enrolled in them;
  • Plans for online learning are linked effectively to budget and technology planning to ensure adequate support for current and future offerings;
  • Plans for expanding online learning demonstrate the institution’s capacity to assure an appropriate level of quality;
  • The institution and its online learning programs have a track record of conducting needs analysis and of supporting programs.

When I asked how EduKan’s director, Mark Sarver, addresses the topic of planning, he replied that all aspects of the planning guideline are handled through their Strategic Planning committee. The committee includes representatives from all jobs and roles within the organization; the group includes, but is not limited to, academic deans, advisors, instructors, registrars, and other administrators. They devise a three-year strategic plan that is created and agreed upon by all members of the committee. It is all-encompassing, covering goals, budget planning, technology planning, and indicators of success. The stakeholders on the committee then take the plan back to their respective groups and gain approval from them. When the committee meets every three years, it checks the indicators of progress, documents successes, and adjusts or redefines goals for the next three-year plan. Statistics, reporting, and data analysis provide the documentation needed to assure the required level of quality. The process is ongoing, and it includes every role in the EduKan system in order to gain buy-in from all those with a stake in the success of the online program and the consortium as a whole.

EduKan is not unique in this process. Most institutions have a similar program or committee that examines, develops, implements, and then reviews their overall plan for successfully educating the students who attend the institution and enroll in its courses. Institutions that have always been traditional on-ground campuses will have to expand that plan to include the online-learning goals above. Those that already have an online component to their offerings will have to be sure they can document that they are addressing the analysis components above. Of the nine hallmarks soon to be part of the accreditation process for online learning programs, number two might be one that you can check off as already being in place. Good luck!

-Pamela Kachka, M.A.Ed.-
Academic Consultant


Are our students learning what we’re teaching?

Assessment is not a four-letter word, but among many higher education faculty it might as well be. The current "show me" tide in assessment has alienated faculty. The approach has often followed a top-down model, and it isn't working.

Let’s listen and learn… 

I know my students. I know my subject matter. I can tell you which students “get it” and which ones “don’t”. I am in the classroom.

Here is what good teachers do. We start with intended student learning outcomes that allow us, as instructors, to design our curriculum with a focus on guiding student learning and not just on course content delivery.

Critical thinking skills are essential in every discipline of higher education, yet how often do students enter our courses without bringing along the tools they have acquired through their cumulative learning? Making this linkage for students requires that our teaching be not only systematic but also systemic. We push students to apply their knowledge and skills throughout all parts of their lives. The trend in higher education is no longer about "seat time" or "activity minutes" but rather about student demonstration of learning, and we get it!

So now you ask us, "How will we know if the students learned what we had hoped? How will they know?"

Gathering information from course assignments, discussion threads, and exams, and then using it to improve subsequent learning, is how we facilitate learning. Formative assessment treats learning as a process of improvement: it encourages students to build on previous learning and to transfer that learning into new situations. Summative assessment, on the other hand, evaluates an end product or process. In Levels of Assessment: From the Student to the Institution, Miller and Leskes (2005) explain:

“While the holistic assignment of grades (an A, B or F) is a way to evaluate student work, such grades represent averaged estimates of overall quality and communicate little to students about their strengths, weaknesses, or ways to improve. A better way to aid learning is through analytical assessment, which can be as simple as written comments on student papers or as structured as the use of a detailed rubric for an assignment; such analysis can reveal precisely which concepts a student finds challenging.”

Using the student information we collect (assess) to inform our curriculum design means improved student learning within and across courses, and as good instructors this is what we do!

So, is this about better teaching or better learning? You be the judge. But we will tell you it is not about the extra work we perceive in the imposed 'culture of evidence' called assessment! It is about promoting collaborative work among all stakeholders to benefit our students!

Karen R. Owens, Ph.D.
Higher Education Assessment Consultant
Pearson eCollege

Miller, R., & Leskes, A. (2005). Levels of Assessment: From the Student to the Institution. A Greater Expectations Publication. Association of American Colleges and Universities (AAC&U). Retrieved July 20, 2010.


Outcome Assessment vs. Assignment Grading

Admittedly, this debate will only catch the eyes of those of us who passionately engage in the role that outcome assessment plays in improving curricular and instructional effectiveness. My experience is that in most cases, course assignments predate the integration (or imposition) of outcomes into the course delivery process. As a result there is often loose alignment between assignments and the outcomes that have been associated with a course.

The core question then becomes whether faculty can integrate these two evaluation requirements into a single workflow or whether they must remain two discrete processes. My colleagues at Texas Christian University's (TCU) Koehler Center for Teaching Excellence have deeply engaged in this debate with targeted faculty on their campus. They summarized their perspective in a January 2010 newsletter article, which is well worth reading. The article makes a case for maintaining outcome assessment as a distinct process because of the aforementioned alignment issue.

For example, a student may turn in an assignment late, which means he or she should receive a lower grade even though the work may have demonstrated mastery of the associated outcomes. Another common situation is that many departments want to include writing-quality criteria in their assignment rubrics even though writing quality may not be a stated outcome for the course.

While the points made in the article are valid, my belief is that these two processes ultimately need to be integrated into a single workflow for faculty. Professors have only limited time to dedicate to the feedback and evaluation steps of the teaching and learning cycle. If we ask them to add outcome assessment on top of an already full workload, their feedback to students will likely be spread more thinly across a broader range of assessment requirements.

There are many committed faculty who are willing to go the extra mile, but a well-designed course and assessment process can go a long way toward integrating these two components of a course-based evaluation approach. Assignments can be rewritten so that their evaluation criteria align more closely with the stated course learning outcomes. This takes effort too; however, once the alignment has been completed, the efficiencies are realized in subsequent terms.

“Rubrics for Grading, Rubrics for Learning Outcomes.” (2010, January). Koehler Center eNewsletter. Retrieved May 4, 2010.

Brian McKay Epp | Academic Trainer & Consultant| Pearson eCollege


Momentum Building for Competency-Based Learning

Most of us have heard of the European Union and the establishment of the euro as a common currency across much of the continent. Fewer have heard of the Bologna Process, which began in June 1999 with the goal of creating a more standardized higher education system across participating European nations. One initiative has been a tuning project in which academics work to define a common set of learning outcomes by discipline and degree level.

The dialogue continues worldwide about whether a focus on competencies, rather than assignment grading, leads to an improved student learning experience, but most would agree there is a difference. Many students are able to memorize processes or cram for an exam, but the ability to apply knowledge, skills, and concepts to new situations requires a deeper level of learning, one that is better suited to competency-based assessment.

A June 4, 2009 blog post on The Chronicle of Higher Education website summarized a recent report, commissioned by the Association of American Medical Colleges and the Howard Hughes Medical Institute, calling for institutions to focus on competencies instead of courses as a way to improve the curricula of pre-med programs and medical schools. The report's sponsors convened a group of educators, practitioners, and researchers to define a set of competencies both for entrance into and graduation from medical school. NCATE has already defined similar competencies for educators, and other accreditation bodies are coming on board as well with efforts to agree on a core set of competencies by discipline.

The Lumina Foundation for Education also recently announced a three-state Tuning USA project that seeks to define “the subject-specific knowledge and transferable skills that students in six fields must demonstrate upon completion of a degree program.” This is a bottom-up effort involving faculty, students, and employers. Representatives from Indiana, Minnesota, and Utah will each define student learning outcomes for two disciplines while preserving the academic freedom of individual institutions and faculty to teach toward a common set of outcomes in the manner of their own choosing.

Pearson eCollege will continue to monitor this trend and seeks input from our partner institutions for best practices in outcome management and competency based learning.


Benelux Bologna Secretariat. (n.d.). About the Bologna Process. Retrieved June 12, 2009.

Lumina Foundation for Education. (2009, April 8). News release. Retrieved June 12, 2009.

Mangan, K. (2009, June 4). ‘Competencies,’ Not Courses, Should Be Focus of Medical-School Curricula, Report Says. The Chronicle of Higher Education. Retrieved June 12, 2009.