Online Blogucation

Hallmark #5: Evaluation

This is the fifth in a series of posts on the 9 Hallmarks of Quality for Distance Education programs that were developed by the Council of Regional Accrediting Commissions (C-RAC) earlier this year.

The institution evaluates the effectiveness of its online offerings, including the extent to which the online learning goals are achieved, and uses the results of its evaluations to enhance the attainment of the goals (MSCHE, 2011).

As institutions seek to develop a culture of assessment that meets increasingly stringent accreditor requirements, a myth prevails that a pre-defined template exists to elegantly solve this ill-structured problem. The truth is that accreditors defer most of the responsibility to the institution, which must set its own mission (Hallmark #1), program goals, and individual course outlines that provide the learning experience students need to demonstrate mastery of the curriculum. Accreditors then evaluate the extent to which a school has developed an assessment approach that measures curricular and instructional effectiveness and shows how data are used to drive the continuous improvement of student learning.

While this may be frustrating to read, there are definitely patterns and best practices that scholars of teaching and learning have developed which synthesize characteristics of successful accountability programs.

First, institutions must be purposeful in their assessment programs, which means there is a plan for what data to collect and how they will be used to improve student learning. A holistic assessment approach includes both formative and summative assessment, within courses and at the program level, so students have the opportunity to remediate their weaknesses before it's too late. Programs new to assessment usually begin by evaluating program-level goals and move into course-level assessment as they mature. Ideally, most assessment can be embedded within the course so the faculty of record can gather the data as part of their ongoing student assessment workflow.

This leads to a second major challenge: perfection can be the enemy of the good, or even of getting better. Our partners often tell us they're not ready for assessment, and we see academic leaders cycle through numerous models in their heads without ever actually implementing anything. Getting started creates informed use, which yields better questions and action plans going forward.

As we consult on assessment and methods to integrate technology into the outcome management process, we nearly always expose what seem like obvious gaps in curriculum and instruction. This is part of the continuous improvement process and the important thing is to remedy that gap and to then look for the next most critical issue to resolve.

Finally, I've often heard assessment experts encourage academic leaders to actually scale back the volume of data they're collecting. As mentioned earlier, data are meaningless unless you take the time to analyze what you've gathered, diagnose gaps, and implement improvement action plans to address them. So you might consider assessing random samples of student artifacts instead of trying to assess every student each term, or you can assess all students against an outcome but evaluate that outcome only every two years.
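The sampling suggestion above can be sketched in a few lines of code. This is only an illustration; the artifact names, sample size, and seed are assumptions for the example, not part of any accreditor requirement or assessment system.

```python
import random

def sample_artifacts(artifacts, sample_size, seed=None):
    """Draw a simple random sample of student artifacts to score,
    instead of scoring every submission every term."""
    rng = random.Random(seed)  # a fixed seed keeps the draw reproducible for audits
    if sample_size >= len(artifacts):
        return list(artifacts)  # small cohorts: just score everyone
    return rng.sample(artifacts, sample_size)

# Illustrative example: score 25 essays out of a 200-essay term
submissions = [f"essay_{i:03d}" for i in range(1, 201)]
scored = sample_artifacts(submissions, 25, seed=2011)
print(len(scored))  # 25
```

Reusing the same seed in a given cycle lets a program document exactly which artifacts were scored, which helps when an accreditor asks how the sample was drawn.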

Our consultants have developed the following modules to support educators in meeting requirements for Hallmark #5.

  • Creating a Culture of Assessment
  • Writing Quality SLOs
  • Rubric Design
  • Curriculum Mapping (Institution > Program > Course)
  • SLOs and Impact on Course Design (Curriculum mapping within a course)
  • Fostering Faculty Ownership of Campus Assessment Culture
  • Closing the Loop - Ensuring that SLO Data Impacts Curriculum & Instruction

In addition to the purposeful management of student learning, Hallmark #5 also requires institutions to set goals for, and monitor, both in-course retention and student persistence through a degree program, along with the effectiveness of the institution's academic and support services (MSCHE, 2011). Again, our consultants can work with you to develop custom reports that track retention and persistence using student activity and completion data from the LMS. We can also help identify at-risk students to support the requirement to measure the effectiveness of academic and support services, although this component certainly requires additional offline analysis of processes and services at the institution.
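As a rough illustration of the kind of report logic involved, the sketch below flags students whose LMS activity and course completion both fall below a threshold. The field names and cutoffs are hypothetical assumptions for the example, not the schema of any actual LMS export or eCollege report, and any real thresholds would need local calibration.

```python
# Hypothetical sketch: flag potentially at-risk students from an LMS activity export.
# The fields "logins_last_week" and "completion_rate" and the default
# thresholds are illustrative assumptions only.

def flag_at_risk(records, min_logins=3, min_completion=0.5):
    """Return IDs of students below both the activity and completion thresholds."""
    return [
        r["student_id"]
        for r in records
        if r["logins_last_week"] < min_logins and r["completion_rate"] < min_completion
    ]

roster = [
    {"student_id": "s1", "logins_last_week": 5, "completion_rate": 0.9},
    {"student_id": "s2", "logins_last_week": 1, "completion_rate": 0.2},
    {"student_id": "s3", "logins_last_week": 0, "completion_rate": 0.8},
]
print(flag_at_risk(roster))  # ['s2']
```

A report like this only surfaces candidates; as the post notes, measuring the effectiveness of academic and support services still requires offline analysis and human follow-up.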

Let us know if you have recommendations for any additional content area we should develop or if you’d like more information on our consulting services.

Works Cited

Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from

Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting | Pearson eCollege


Hallmark #3 – Governance and Academic Oversight

Third in a series of nine blogs.

The Hallmarks of Quality # 3

Online learning is incorporated into the institution’s systems of governance and academic oversight.

While my colleagues have written about the "no-brainers" of Hallmarks #1 and #2, mission and planning, #3 begins to get more challenging. Many will cringe when reading this Hallmark, as I did. The words governance and academic oversight sound too invasive from my perspective, but who among us would not want to ensure the institution we teach for protects the integrity of its curriculum and offers students a valued education? The challenge with this assurance, however, is in the execution.

Academic governance takes many different forms among colleges, but I firmly believe the strongest and most highly regarded schools are those with full transparency into what occurs within the ivory tower and the classrooms. Instructors and administrators sharing resources, collaborating on essential student learning outcomes, and then allowing the creativity of the instructors to emerge makes us all stronger. The concept of online learning "shared governance" provides for individual and collective voices that matter.

We can retain academic freedom and still preserve quality education. This Hallmark is in place for on-ground classes and absolutely needs to be the same for online courses. Maybe I am naive, but does anyone really believe the standards of quality education should differ between the two modalities of instruction? While the debate rages on over the academic rigor of online classes, it seems to me it all boils down to the quality of the faculty. What good teachers do, they do online and face to face. So it is these faculty members, in conjunction with the administration, who need to be the ones monitoring academic oversight.

The Middle States Commission on Higher Education (MSCHE) suggests the analysis and evidence for sustaining this Hallmark may be provided by:

  • An institution’s faculty having a designated role in the design and implementation of its online learning offerings;
  • An institution ensuring the rigor of the offerings and the quality of the instruction;
  • Approval of online courses and programs following standard processes used in the college or university;
  • Online learning courses and programs being evaluated on a periodic basis;
  • Contractual relationships and arrangements with consortial partners, if any, being clear and guaranteeing that the institution can exercise appropriate responsibility for the academic quality of all online learning offerings provided under its name.

As Gary A. Olson (2009), provost and vice president for academic affairs at Idaho State University, succinctly summarized:

"Clearly, when it comes to university governance, 'shared' is a much more capacious concept than most people suspect. True shared governance attempts to balance maximum participation in decision making with clear accountability. That is a difficult balance to maintain, which may explain why the concept has become so fraught. Genuine shared governance gives voice (but not necessarily ultimate authority) to concerns common to all constituencies as well as to issues unique to specific groups.

The key to genuine shared governance is broad and unending communication. When various groups of people are kept in the loop and understand what developments are occurring within the university, and when they are invited to participate as true partners, the institution prospers. That, after all, is our common goal." (paras. 17-18)

Let's embrace the opportunity!

Karen R. Owens, Ph.D.
Academic Assessment Consultant


Middle States Commission on Higher Education (MSCHE).  (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from

Olson, G. (2009, July 23). Exactly what is 'shared governance'? The Chronicle of Higher Education. Retrieved July 18, 2011 from


Hallmark #2 – Planning

Fasten your seatbelt and hold on to your hat! This week we are going to talk about planning in regard to the Middle States accreditation plan. While I say that a bit facetiously, it is actually a little piece of a canvas that is part of a bigger, more exciting work. By standardizing accreditation requirements nationwide for higher education online learning programs, those of us firmly planted in online learning can take a huge leap forward in demonstrating (with statistics, research, and data) that what we are doing is not only catering to a growing market's demands, but doing so because the pedagogy and the data show that our students in fully online programs are learning, competing with, and often exceeding their counterparts.

There are 9 hallmarks in the Middle States accreditation plan, and today we look closely at #2, Planning. On a side note, I will give you some background on this series of blogs. After an introduction to the overall Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning), each person on our team (the Academic Consulting team at Pearson eCollege) took a hallmark to focus on and fully explain. In the draw, I got #2, Planning.

Now, as I plan for this blog (I deliberately chose the word plan, in case you missed that), I can see how apropos it is that I have the planning topic. I am a planner to the point of clinical neurosis, some might say. I am the person who, when the seatbelt light goes off as the airplane pulls into the gate, gets up and finds my car keys and my credit card so that when I get off the plane and reach the end of the very long walk to my car, I can jump in, start the car, and proceed to pay for parking. Downtime is used for reflection and analysis, but it is also a moment or two that can be used to take care of details and save time later on. So, from the planner's perspective, let's look at Hallmark #2.

With that statement of credibility (I am qualified to talk about planning because I am a neurotic planner in my day-to-day life), let us take a look at how EduKan, the consortium of online campuses for six Kansas community colleges, leads by example when it comes to these accreditation hallmarks. Some institutions will fret and have to hire consultants to comply when this becomes standard, whereas other institutions, such as EduKan, will simply look at the list and say: "we already do that."

Hallmark #2 reads:
The institution’s plans for developing, sustaining, and, if appropriate, expanding online learning offerings are integrated into its regular planning and evaluation processes (MSCHE Standard 2).

From the guidelines, analysis and evidence of this hallmark will review:

  • Development and ownership of plans for online learning extend beyond the administrators directly responsible for it and the programs directly using it;
  • Planning documents are explicit about any goals to increase numbers of programs provided through online learning courses and programs and/or numbers of students to be enrolled in them;
  • Plans for online learning are linked effectively to budget and technology planning to ensure adequate support for current and future offerings;
  • Plans for expanding online learning demonstrate the institution’s capacity to assure an appropriate level of quality;
  • The institution and its online learning programs have a track record of conducting needs analysis and of supporting programs.

When I asked EduKan's director, Mark Sarver, how the organization addresses planning, he replied that all aspects of the planning guideline are handled through its Strategic Planning committee. The committee includes representatives from all jobs and roles within the organization, including but not limited to academic deans, advisors, instructors, registrars, and other administrators. They devise a three-year strategic plan that is created and agreed upon by all members of the committee. It is all-encompassing, covering goals, budget planning, technology planning, and indicators of success. The stakeholders on the committee then take the plan back to their respective groups and gain approval from those groups. At the end of each three-year cycle, the committee checks the indicators of progress, documents successes, and adjusts or redefines goals for the next three-year plan. Statistics, reporting, and data analysis provide the documentation needed to assure the required level of quality. The process is ongoing, and it includes every role in the EduKan system to gain buy-in from all those with a stake in the success of the online program and the consortium as a whole.

EduKan is not unique in this process. Most institutions have a similar program or committee that examines, develops, implements, and then reviews their overall plan for successfully educating the students who attend the institution and enroll in its courses. If a campus has always been traditionally on-ground, that plan will have to expand to include the online goals above. If it already has an online component to its offerings, it will have to be sure it can document that it is addressing the analysis components above. Of the 9 hallmarks soon to be part of the accreditation process for online learning programs, number two might be one that you can check off as already being in place. Good luck!

-Pamela Kachka, M.A.Ed.-
Academic Consultant


Hallmark #1

As Jeff Borden mentioned last Wednesday, this week marks the first blog in a nine-week series in which the ATC Team will highlight each of the 9 Hallmarks for Quality Online Education. If you missed it last week, the regional and national accreditors have agreed on a set of outcomes they will use to evaluate online institutions. So let's take a look at the first hallmark:

 Hallmark 1

Online learning is appropriate to the institution's mission and purposes.

As I sat down to think about what I would write, I found myself stumped. What could I write about? This one's a "no-brainer". We've all been taught that, no matter the industry, all organizations, corporations and non-profits alike, MUST have a mission. But as I continued to read the analysis/evidence section, I realized this is about more than simply having a mission; it's about making sure online learning fits into the overall institutional mission.

In other words, quality online programs aren’t just a whim.  They aren’t implemented because every other college has online courses.  Quality programs are not quick money-making ventures designed to support the REAL programs.  Quality programs require extensive planning where the leadership answers questions like:

  •  How will online courses integrate with the current offerings?
  • How will online courses impact the student experience? 
  • Do we want online courses to attract new students to our programs or will we design them to support current student needs?
  • What will be the look and feel of our online environment?  How does this fit into our current environment?

Most of us have at least heard about online programs where these questions likely were not considered prior to implementation - programs that offer a certain online course once every two years, and students just have to wait. Or we've heard about the institution known for its liberal arts education that suddenly offers an online MBA program for executives. Neither example assumes a bad program, but Hallmark #1 provides the guidance to help ensure that online programs are properly incorporated into the big picture.

In some parts of this country, Chick-fil-A is a fast-food tradition. Its most popular menu option is a chicken breast deep-fried in a pressure cooker and served a variety of ways: in a salad, as a sandwich, etc. Chick-fil-A is also known for its advertising campaigns in which cows advocate that we all "Eat Mor Chikin". If you aren't familiar with the restaurant and its ad campaign, visit the Chick-fil-A Cow Campaign. This campaign has a national footprint and a 20-year history. As a Chick-fil-A fan, I would be extremely concerned if the corporation suddenly decided to sell hamburgers. Such a move would cause me to question the leadership. I'd wonder whether Chick-fil-A can even cook a burger. I would be concerned about the cow campaign. What about the name of the restaurant? But the Chick-fil-A mission is to "Be America's Best Quick-Service Restaurant". So if they did decide to sell beef, they'd have to do extensive planning to address concerns like mine, but they could certainly make a case for it. The point being, the accreditors are not concerned that institutions with online programs have a mission. They are concerned that the program fits in, that it has a place in the big picture. Even when it's a reach, like beef at Chick-fil-A, that's OK, as long as the planning work gets done and everyone can explain how all of the pieces fit together!

Kimberly Harwell

Reporting Analyst and Consultant


National Survey on Program Level Assessment Practices

A year ago I blogged a summary of the National Institute for Learning Outcomes Assessment's (NILOA) survey of chief academic officers and their perceptions of the status of learning outcome assessment at the institutional level. Today NILOA released the results of a follow-up survey in a report titled "Down and In: Assessment Practices at the Program Level".

There are several salient conclusions in the data they collected. First, disciplines with specialized program accreditation (like education, nursing, and business) had more mature assessment practices than those covered only by institutional accreditation. In fact, at institutions with some specially accredited programs, even departments with no program-level accreditation tended to show more developed assessment methods. This suggests a campus-wide benefit compared to institutions that rely solely on regional accreditation.

Not surprisingly, a second major finding is that resources are scarce, with less than 20% of specialized programs having a full-time staff person assigned to assessment activities. Most campuses rely on volunteer committees or part-time resources, with course release being another option in some instances. To deal with resource constraints, creative solutions included modest stipends to support faculty in developing course-embedded assessments, or common department-wide capstone assignments with corresponding rubrics that could be deployed across all students in a program.

Interestingly, while I had the impression that portfolios were the most common example of direct performance measurement at the program level, they actually ranked seventh in the list of most frequently used approaches. The leaders, in rank order, are capstones, rubrics, performance assessment, final projects, local tests, and external exams.

One final point I’d like to highlight was that NILOA called for itself and others to produce more case studies that highlight successful assessment practices. We’ve heard the same thing from our partners at Pearson eCollege and are currently working on developing several examples highlighting best practices in a variety of institution types that we can share. We hope to have these published before the end of the year.

Works Cited
Ewell, P., Paulson, K., & Kinzie, J. (2011). Down and In: Assessment Practices at the Program Level. National Institute for Learning Outcomes Assessment. Retrieved from

Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting| Pearson eCollege


"One-Minute" Papers in Online Classes

As a formative assessment, the "one-minute paper" has been widely used by faculty. It can help instructors identify early the definitions, concepts, or theories that students do not understand. Cross and Angelo (1988) popularized this technique as one of a wide variety of quick "classroom assessment techniques" (CATs) designed to provide instructors with anonymous feedback on what students are learning in class. Usually at the end of a class session or the beginning of the next, students are asked to write a one-minute paper in response to questions such as:

• What was the most important concept you learned in class today?
• What was the ‘muddiest’ or most confusing concept covered in today’s class?
• What do you still have questions about?

While this technique has long been used successfully by on-ground instructors to immediately adjust course curriculum and clarify points for students, it has not commonly found its way to the online environment. I believe the reason is that much of our online teaching is asynchronous, and we have not been sure the technique would be as valuable or the process even feasible. It certainly would be difficult for the student to remain anonymous as initially designed. However, my belief is that with a slightly different workflow we can use this proven technique to great benefit for ourselves as instructors and for our online students.

One common way we structure our online courses is through modules or units. To integrate the "one-minute paper," instructors can develop a small one- to five-minute quiz instead of a paper at the end of each section or module. You can use specific questions (fill in the blank, matching, or multiple choice) to see if students correctly understand surface-level concepts, and short-answer questions to dive into deeper learning. The quiz may be set to a time limit of one to five minutes. The emphasis needs to be on immediate student reflection on learning. Try to use no more than three to five questions. This technique could certainly allow, and should encourage, students to briefly review their notes before proceeding to the unit or section quiz. You may also choose to place the quizzes before a new unit, or multiple times within a module, depending on your discipline and pedagogy.

It is important to explain to students that this is just one way for instructors to help ensure the learning opportunities provided to students are sufficiently meeting their needs. One- to five-minute knowledge checks on the concepts also provide a glimpse of what may appear on future course exams or required research papers and projects, which can reduce student anxiety. As an incentive for students to provide explicit and serious responses, I would suggest some form of integration into your course grading schema.

In my online classes I have received very positive student feedback on the process, and it has allowed me to regroup and provide multimodal learning opportunities. When we are face to face, we can often look at the class and quickly see which students have no idea what we are trying to convey. At times it is even apparent that no students grasp the concept. Online, this ability to perceive your students' depth of learning is often not discovered until we issue summative assessments. Allowing formative assessment techniques to enhance and capture those "teachable moments" leads us all to greater real-time student success.

Angelo, T. A., & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco: Jossey-Bass.

Karen R. Owens, Ph.D. / Academic Assessment Consultant / Pearson eCollege


Closing the Assessment Loop

Because the accountability drums have been beating for well over ten years, most institutions now collect data on student performance toward mastery of learning outcomes. The question today is whether these data are actually being used to drive improvements in curriculum and instruction. The process of analyzing the collected data, diagnosing gaps that need attention, and relating the results back to student performance is referred to as closing the loop.

At Pearson eCollege we've been working with institutions for over three years on technology-enhanced outcome management and how it can help educators shift from a teaching paradigm to a learning paradigm. We're tackling issues like which metrics provide the best data for academic leaders as they work to improve student mastery of outcomes, and how to document the discussions that take place to support the assessment of student learning.

Clearly there isn't a single right answer, but it's important that campus leaders participate vigorously in the debate on these issues. A common problem we find (in fact, we've struggled with it ourselves) is getting paralyzed by trying to achieve perfection before initiating a new process, which only delays the iterative work of continuous improvement.

The Pearson eCollege Assessment and Analytics Consultants will host a workshop on how to "close the loop" at the Pearson Cite conference next week in Denver. We'd love to hear your comments or thoughts about additional key questions we should be considering, and we invite you to join us if you're attending the conference.

Brian Epp | Assessment & Analytics Group Manager - Academic Training & Consulting | Pearson eCollege


Choice Architects

When you see a pizza on TV, then get hungry and order one, do you order toppings based on your own, unfailing desires? Or is it possible that since the commercial had extra cheese, pepperoni, and mushrooms, you just went with that?

I recently drove to Nebraska for an eLearning conference, and since that gave me about 12 uninterrupted hours in the car, I got a book on CD. Nudge, by Thaler and Sunstein (2008), is an interesting argument about choice. While I liked some of the tangential ideas in the book a lot, the authors' main premise is that the notion that "each of us thinks and chooses unfailingly well" is a myth.

(The tangential idea I liked most: most people assume they are "better than average" at most things important to them. Most people feel they are "better than average" drivers, "better than average" parents, and "better than average" instructors...)

Thaler and Sunstein instead believe that the choices we make are based on one of two things: quick-reacting instinct, or deliberate reflection.  As such, the authors suggest that the way in which choices are offered makes a monumental difference in the choice that is actually picked.

For example, when a group of people was asked if they consider themselves "happy," they gave a fairly high number (on a scale of 1-10) on average. If this question was followed up with "Are you dating anyone?" or "Are you intimate with your spouse regularly?", the answers were quick and consistent with the first answer.

However, when the order of the two questions was reversed, things changed. When people first think about dating or intimacy with their partner, most tend to feel that (objectively speaking) they are not getting enough dates or intimacy at all. So when this question is followed with "Are you happy?", the average numbers fell drastically.

In other words, the way that we architect choices determines how those options will actually be chosen.

I am reminded of an instructor in my undergraduate program who crafted a final exam where every answer was 'B'. It was the cruelest test I have ever taken. Why? Because the psychology of the test begins to overshadow the content! Whether I knew the content or not didn't matter after a while. Surely no professor would use the same distractor as the answer for all 100 questions, right? So the choices I made were not based on unbiased reasoning through the questions. Instead, they were based on conspiratorial fear and mistrust.

That may be an extreme example, but it makes me wonder how many students succeed or fail in our online classes because we aren't very good choice architects. How often do we assume that directions aren't required because students "get" online learning better than we do? How often do students go unnoticed (and therefore drop out) in discussions because we don't pay attention, week after week, to which threads we respond to? How often do we create software solutions that don't work for every student, thereby creating an unlevel playing field?

As instructors, we are choice architects to be sure. As ONLINE instructors, the number of choices we set up for our students grows almost exponentially. Do we use blogs, wikis, journals, or discussions? Why? Do we ask students to participate in groups? If so, do they self-select or do we assign them? Why? Do we use YouTube to present? Do we use Quora to poll? Do we use test banks to quiz? Why?

Every choice we make impacts every choice they make. So, what choices have you given your students lately?

Good luck and good teaching...

Jeff D Borden, M.A.
Sr Dir of Teaching & Learning


Collaborate and customize!

As we worked with our Educational Partners on implementing their outcome assessment plans, from the macro to the micro level and across distinctive academic cultures, a common theme emerged: the focus was on organizing and collecting evidence, but there was no formal plan for action steps.

Our Assessment & Analytics team worked closely with several of our Educational Partners to customize templates, devise online faculty discussion forums, and offer other technology-enhanced solutions to catalyze improvements in curriculum and instruction. As discussions progressed, it became evident that there was no one way to accommodate the unique requirements of every college and university, but we were providing methods that most could use and customize.

The next step for us grew into Assessment Consulting Modules that could be designed around the needs of our Educational Partners. These included a "backwards" planning method to help ensure that the multitude of assessment data institutions collect would actually help answer their ultimate questions and lead to evidence-driven action plans.

From beginning to end the Assessment Consulting Modules are designed to lead participants through best practices of student learning outcome (SLO) assessment. The series begins by exploring why we engage in assessment and by defining a roadmap to creating a culture of evidence on campuses. Participants will then have opportunities to develop and connect SLOs and rubrics that apply directly to their unique curriculum. Our end goal is to provide specific strategy and design suggestions that translate into meaningful and sustainable assessment plans.

Following is a brief description of a few of the modules:

Curriculum Mapping
This module looks at the relational aspect of student learning outcomes within an academic institution. Intentional and explicit alignment of each outcome from the discrete course level to the increasingly broader, program, department, campus or institutional level is examined. The mapping concept is designed to provide opportunities and evidence of student learning at various stages of the curriculum.

Rubric Design
Measuring and assessing students’ demonstration of learning through rubrics is the focus of this module. It presents the design of an assessment rubric using explicit criteria statements and the identification of examples of student performance at varying mastery levels for each outcome. Comprehensive, clear rubric exemplars are included.

SLOs and Impact on Course Design
Quality course SLOs are the foundation for assessing student learning. Quality assessment of student performance requires that those SLOs be purposefully aligned to the learning activities, assessment activities, and schedule of the course. In this module, participants use a modified card-sort method to analyze the relationship of the course’s design to its SLOs and to inform design changes that optimize student learning.

Fostering Faculty Ownership of Campus Assessment Culture
Assessing students’ mastery of learning outcomes falls primarily within the scope of faculty responsibility, so it is critical that faculty be integral stakeholders in the development of campus assessment plans. Faculty engagement is further fostered by focusing on improving the student learning experience. This module provides tangible actions for academic leaders working to integrate faculty into the development of a campus assessment culture.

We are no longer living in academic silos; we must use web services and other technology-enhanced tools (along with our colleagues) to link information, much like assembling Lego blocks. The design can be simple or very complex, and the creativity is unlimited if we begin by understanding how the pieces of assessment connect and then keep linking them until we have designed an application that fits real-time teaching and learning. Collaborate and customize!

Karen R. Owens, Ph.D.
Higher Education Assessment Consultant
Assessment & Analytics Group
Pearson eCollege



A Choice Culture

I finally saw “The King’s Speech”.

The cajoling of family and friends who sang the praises of the film was a large factor in getting me into the seat; I’m usually more of a Tron: Legacy, Remember the Titans, or LOTR kind of guy. But the ‘Speech’ was good; quite good, actually. The pace of the film, and honestly of the acting, gracefully pulled me into 1930s England and into the enormity of the time.

However, it was the premise of another film that put me in the mindset to appreciate the ‘Speech’ more than I would have otherwise. A trailer for “The Adjustment Bureau” played just before the feature presentation. This film essentially asks if your path in life is fate, or free will. Do we direct our lives, or is our life directed? Are the choices that we make actually our own? Relating back to “The King’s Speech”, I found it interesting to witness the underlying theme of choice (or not) for a royal family throughout the film. Both these movies made me realize that choice seems to pervade our culture in a subtle, subversive way; it’s not something we think about on a daily basis, yet do constantly.

In investigating the topic of choice, I found that some feel that in our American, capitalist culture we’re quite infatuated with choice. Sheena Iyengar gave a great presentation at TED where she asserts that in the US we seem to think we’ve reached the pinnacle of choice. Further, she states that in America we make three assumptions about choice, almost without thinking: if a choice affects you, you should be the one to make it; the more choices you have, the better choice you will make; and lastly, you must never say “no” to choice. And all this choice is a good thing, right? After all, if happiness is a choice, then don’t more choices bring more happiness? Iyengar offers that uber-choice doesn’t fit in all contexts, all situations. In fact, for many, too much choice begins to feel like “suffocation by meaningless minutia” (12:40).

Is this true of choice in higher education as well? Psychologist Barry Schwartz would argue so. Back in 2004, Schwartz wrote a very good article for The Chronicle entitled “The Tyranny of Choice” that not only summarizes many of the findings and insights from his book The Paradox of Choice, but also walks through the effects that multi-choice education may be having on students, teachers, institutions, and ultimately our culture. In answering his own question, “What are the implications of an abundance of choice for higher education today?”, Schwartz asserts that “... the world of the modern college student is so laden with choice, much of it extremely consequential, that for many, it has become overwhelming”.

What about in your life? How many choices do you think you made today before choosing to read this blog? Perhaps 100 or 200 if it’s early in the day. Maybe 500 decisions by lunchtime? In a paper published by Concordia University, Saint Paul, Danielson is quoted as stating that “a teacher makes over 3,000 nontrivial decisions daily”. And these are just the decisions made in the classroom, let alone over an entire day. Do you think about choice in your classroom? Do you leverage it? Regulate it? How much choice should we give students in: a) choosing the composition of their degree, b) their assignments, c) the delivery of their work, and d) when and how they interact with the class? Are these choices ours to give?

There’s no doubt that choice is essential to our education, business, and life. I think that even the movies would agree. In fact, one of my favorites, The Matrix Trilogy, is crafted entirely around the idea that choice is what makes us uniquely human. I love how the end scene of the final film frames the role of choice in our humanity. The power we have as people is in our choice: the power to love, to learn, to create.

When we choose to give choice, we must choose carefully. Perhaps choice should not come without limits. Schwartz ends his article with the poetic phrase “Freedom within limits, choice within constraints, is indeed liberating.” I tend to agree: there can be too much of a good thing.

How do you view choice in our culture? In our education system? Is too much choice really an issue?

Luke Cable
Assessment Consultant
Academic Training & Consulting
Pearson eCollege