Online Blogucation
26 Sep 2012

Flipping the MOOC?

140,000 students in a single course?  C'mon...there's no way!  Or is there?  A LOT of people have taken notice of MOOCs (Massive Open Online Courses) in the past few months.  And when I say people, I mean highly positioned, well-respected, very powerful people in the education sector.  Presidents, CEOs, provosts, and others at places like Harvard, MIT, Stanford, Carnegie Mellon, and many more have at least publicly inquired about MOOCs, if not actually started programs to invest in their own.

A respected colleague of mine who talks almost exclusively to C-level educators put it simply but effectively, "...the genie is out of the bottle."

Of course, this is exciting.  Anything that furthers the discussion around eLearning is wonderful from my perspective.  The ridiculous, antiquated, fallacious arguments about leaving teaching and learning alone are growing tiresome.  So anything that promotes the use of technology to enhance and augment learning is a powerful thing.  But with that in mind, and as an "early adopter" much of the time, my next statement might surprise you.

We need to blow up the MOOC.

No, not blow up as in destroy.  Blow up as in, let's get to v2 as fast as humanly possible because v1 is NOT a good poster child for online education.  Why?  Simple.  Today's MOOC takes many of the worst elements of teaching, instruction, assessment, etc., and simply presents them over the Internet.  For instance:

Lectures - A big name in both the MOOC world and his discipline (aka, the smartest guy in ANY room) was describing the process he used to create his MOOC.  He said, "I was shocked when I started researching ways to disseminate information to find that lecturing is actually a really bad way to present information.  I have been lecturing for over 40 years and didn't know that..."  And yet, this great scholar and innovator did exactly that in his MOOC.  He simply recorded himself lecturing, put it on YouTube, and tied it to his MOOC.  Eric Mazur talks about a fascinating study at Harvard in which students had their brains continually monitored for a week.  EVERY single student showed similar brain patterns during class (lecture) time: their brain waves were almost completely flat.  That's right - no activity.  The only other time in the week their brains were that inactive?  When watching TV.  Even when sleeping, the human brain is more active than during a lecture.  And yet the lecture is still the predominant means of "teaching" students today.  So, if MOOCs are to "change the world" for the better...we have to figure out how to incorporate much better ways of teaching and learning through them.

Learning - What is learning, really?  Isn't it the acquisition of information and then the assimilation of that information?  If we agree that it is, at its core, those two things, then I would bet we could also agree on which of the two is harder.  Disseminating information is easy.  It can be done through a book, a lecture, etc.  The HARD part is actually making sense of it in a contextual, meaningful, connected way.  Yet for decades (if not centuries) educators have performed the easy part while leaving the hard part to students.  (Actually, to students who are alone, at home, with only a book...)  The flipped classroom, which is a remixed way of talking about what educational psychologists have known for decades, is finally starting to shine a light on the notion that the hard conversations should take place in class, while the dissemination activities happen at home.  MOOCs, as they exist today, do not even approach this.

Assessment - We can create objective tests that are automatically graded and start to identify what a student does or does not understand.  In fact, a few MOOCs in the past month have finally started to do just that.  (This is why the very first MOOCs were not taken seriously - they really had little to no meaningful assessment.)  However, even with such heavy reliance on standardized assessments in our universities today, most professors still agree that much of how we know whether our students "get it" is through interaction, conversation, dialogue, and transference of ideas.  This can happen in discussions (before, during, and after class), as well as through ideas presented in papers, etc.  However, the only real way to even approach this in a MOOC is through peer review and peer assessment.  And that is a tough one for a lot of people.  For example, I recently took a Udacity MOOC on statistics.  I had the opportunity to join a discussion group, which I found purely by happenstance, with others from the class.  It was a study group of sorts.  However, after asynchronous discussions with about 10 peers, I soon realized that I was likely the most knowledgeable person in our group when it came to statistics.  (My mother and father are giggling right now...)  In other words, nobody had anything of value to bring to the table.  Social learning is indeed a powerful thing, but without what Vygotsky would call the "More Knowledgeable Other" in the group, it starts to break down quickly.  MOOCs could rely solely on high-stakes, standardized, auto-graded tests, but again, that would simply perpetuate a bad practice from face-to-face teaching in the online realm.

There are other examples, but I think you get my point.  The MOOC as it exists today, with millions of dollars being poured into figuring out how, when, and where to use it, needs a quick overhaul.  I am hopeful that it will happen sooner rather than later as (hopefully) it hasn't become an "institution" to anyone yet.  Hopefully nobody is so tied to the notion of something that didn't really even exist until less than a year ago that they can't retool, reconfigure, and rethink the MOOC.  Because a MOOC has tremendous possibility.  Delivering global education at scale with ties to real-world competencies...that could be a game changer.  So let's make sure we get it right.  Let's flip the MOOC.

Good luck and good teaching.

6 Sep 2012

Building a Community of Success

It is probably safe to say that all of us have taken a brick-and-mortar class that stood out above all others and exemplified a positive learning environment. The class likely stands out to us because we felt that we belonged to a learning community, shared similar goals and interests with our peers, and felt supported in our learning goals. Research suggests that this carries over to the online environment as well, with greater student satisfaction when students have a sense of community (Ouzts, 2006; Phillips & Peters, 1999; Rovai, 2002; Swan, 2002; Woods, 2002). This feeling keeps students motivated to go to class and to share their interests with their peers.

Do you remember the student who went to class and only engaged in discussions when required? Online, required discussion is itself a form of assessment that allows an instructor to gauge comprehension and guide each student through the learning experience. Whereas traditional classroom discussions can easily be dominated by vocal students, allowing quieter students to stay disconnected, the online environment can require all students to engage through threaded discussions. Technology enables easy measurement of involvement that would not be practical in the traditional classroom, meaning all students are engaged with an equal opportunity to share their voice and can be assessed for understanding earlier in the learning process. This is extremely important because learning and development are social, collaborative experiences from which all students should benefit.

"Learning would be exceedingly laborious, not to mention hazardous, if people had to rely solely on the effects of their own actions to inform them what to do."
-Albert Bandura, Social Learning Theory, 1977

So, while the benefits of interactivity and fostering learning communities are clear, getting there can be a challenge in an online environment. The first week of class is critical for establishing a sense of connection within the course. During this week, students should be encouraged to engage with their peers and professors via threaded discussions and other social tools, sharing their work and life experiences to make connections. Some of the first assignments for the course should promote interaction when possible. This engagement should be encouraged throughout the duration of the course so students have a desire to succeed each and every week and share their successes with their learning community. Leverage reporting to identify students who are not engaging so you can reach out to them, and to identify strong classes on which to model your programs (see the sketch below). The data help you identify anomalies sooner and react to them appropriately.
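To make that concrete, here is a minimal, purely illustrative sketch of the kind of report I have in mind. The data, names, and threshold are all hypothetical and not tied to any particular LMS; a real report would pull post counts from your platform's analytics.

from statistics import mean

# Hypothetical posts-per-student counts for week one, as exported from
# an LMS discussion report. All names and numbers are invented.
week1_posts = {"alice": 7, "ben": 5, "carla": 0, "dmitri": 6, "erin": 1}

def flag_disengaged(posts, fraction=0.5):
    """Return students posting less than `fraction` of the class average."""
    avg = mean(posts.values())
    return sorted(s for s, n in posts.items() if n < fraction * avg)

print(flag_disengaged(week1_posts))  # ['carla', 'erin'] -> reach out early

However you compute it, the point is the same: the earlier the outliers surface, the earlier you can pull those students back into the community.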

If students gain a sense of community early and see that it is going to play a major role within the course, then they will most likely embrace it and become part of a learning community that takes greater ownership of its collective learning experience. Leverage the tools to identify those courses not meeting standards and bring them up to par. If students begin to feel this theme from course to course, then you will have a much stronger, more persistent student community.

References:

Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.

Drouin, M. (2008, Fall). The relationship between students’ perceived sense of community and satisfaction, achievement, and retention in an online course. Quarterly Review of Distance Education, 9(3), 267-284.

Ouzts, K. (2006). Sense of community in online courses. The Quarterly Review of Distance Education, 7(3), 285-296.

Phillips, M. R., & Peters, M. J. (1999). Targeting rural students with distance learning courses: A comparative study of determinant attributes and satisfaction levels. Journal of Education for Business, 74(6), 351-356.

Woods, R. H. (2002). How much communication is enough in online courses? Exploring the relationship between frequency of instructor-initiated personal email and learners’ perceptions of and participation in online learning. International Journal of Instructional Media, 29(4), 377-394.

Deb Corso-Larson | Assessment and Analytics Group | Pearson eCollege

3 Jul 2012

The Data Commitment

Over the last decade, higher education has become fascinated with data. The data we collect can provide insight into student achievement, faculty effectiveness, and many other topics. We have sophisticated business intelligence tools and technology that give us more data than we could have imagined a decade ago. But why do we collect data in the first place? While there may be many perspectives on the nuances, I think we can all agree that the purpose is to improve. But if we collect all this data and fail to act upon it, then where is the value for those involved? We must focus on completing the assessment cycle from goal to actionable results, which can be leveraged to drive change that benefits our students and positively impacts their learning experience.

Hatfield (2009) describes this common breakdown in the process:

Many of the benefits of engaging in assessment are the results of focused discussion about student achievement of the program’s learning outcomes. Yet it is not uncommon for data to be collected only to be ignored thereafter. It is not until the data has been analyzed, discussed, and used as a basis for further program improvement that assessment has taken place (p. 6).

There is a process that must occur beyond the collection of data, and it is often forgotten. This may mean focusing data collection efforts rather than spreading resources thin to gather more data than you know what to do with. Choose one or two short-term goals that you know can be successfully completed and will have an impact on your students and stakeholders. Provide immediate value for the efforts of all involved and gain their buy-in for future, long-term assessment goals.

As you approach data collection, remember your goal: driving change and improvement. The tools available to us are wonderful and make collection so much easier and more robust. The data will give us insight that we couldn’t have achieved previously. However, it is up to us to use this data to drive improvements, as data doesn’t act upon itself!

References:
Hatfield, S. (2009). Assessing Your Program-Level Assessment Plan. The IDEA Center, IDEA Paper, 45. Retrieved from http://www.theideacenter.org/sites/default/files/IDEA_Paper_45.pdf

Deb Corso-Larson | Assessment and Analytics Group | Pearson eCollege

27 Jun 2012

What we can learn from peer-to-peer learning…

Recently, I read an article in the Chronicle of Higher Education about "P2PU," a rather unfortunate-sounding abbreviation for Peer 2 Peer University. (Here's a link to the article.) But despite the abbreviation, there is a lot we can learn in online education from what the founders of P2PU are trying to accomplish.

The long and the short of peer-to-peer learning is that students can collaborate in such a way that each brings his or her expertise to bear in solving a problem or answering a question. For example, one person who studies sociology might team up with another person who studies mathematics, and between them, they have the requisite knowledge to teach themselves statistics for the social sciences.

At P2PU, and largely in the open-course environment, the idea is to have this kind of learning taking place without the confines of a traditional institution of higher education. I understand where they're coming from. Professors from traditional universities are branching out into MOOCs -- "massive open online courses." (There's a great article on these in Wired Magazine from a few months ago. You can find it here.) The professors who run such courses -- at schools such as MIT, Stanford, Princeton, and Michigan -- are struggling within their own institutions to determine how to provide some sort of useful credential for those who pass their courses, which in and of itself requires some bona fide way to assess all the students who participate in the course.

P2PU is attempting to solve the credentialing problem by offering badges for the completion of a course, much like the badges many of us earn when reaching a new achievement in an online game. But don't take that as an indication that the work required for such a badge is trivial; far from it, there is a lot of learning to be done to achieve badges such as "Critical Thinking" or "Javascript Expert." The challenge faced by P2PU is whether anyone will recognize those badges as legitimate indicators of knowledge and skills.

For those of us who do teach at traditional institutions of higher education, there is a lot to learn here. Social learning is becoming canon in online education, and just within the last 12 to 18 months. It's the foundation of peer-to-peer learning, and open universities like P2PU recognize that. MOOCs also recognize the value of a facilitating professor. And employers -- the vast majority of them, still -- recognize the value of a college degree or certificate.

I don't see that any of these strategies or perspectives is right all by itself. But, taken together, we have the ingredients for a successful online program. If you have your own ideas on this topic, please comment -- I'd appreciate some peer-to-peer learning with this blog post!

9 May 2012

Inter-Institutional Collaboration Project

At the Pearson Cite conference several weeks ago I met with representatives from several colleges who were interested in piloting a Pearson CourseConnect Analytics Edition (CoCo AE) course. The Analytics Edition versions are currently intended for LearningStudio customers who are also using the Learning Outcome Manager (LOM) tool to manage and track student progress toward mastery on learning outcomes.

CoCo AE courses come with student learning outcomes pre-mapped to presentation content and assignments. This mapping also includes assessment rubrics, which have both content and assignment-type criteria. Pearson’s course authors wrote performance-level descriptors for the assignment-type rubrics, which can be modified if desired. The course design team decided it would be best to allow individual instructors to define their own performance-level descriptors for content-type criteria (see the rubric example below from the American Government course).

Sample CoCo Analytics Edition American Government Outcome Rubric

Part one of the collaboration will be to bring teaching faculty together from participating institutions to work on group authorship of the content-type rubrics. We’ll create criteria banks by outcome that all faculty can choose from or adapt for their own instances of a course (see the sketch below). Colleges will then run the selected CoCo AE course in their Winter/Spring 2013 terms. For the second collaboration component, we intend to work on a data-sharing project that will allow peer institutions to see anonymized, aggregated data on student performance against commonly taught outcomes. Our hope is to present this project at Pearson Cite 2013 in Chicago.
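To illustrate the idea (and only to illustrate it; the structure below is a hypothetical sketch, not the actual CoCo AE or LOM data model), a criteria bank can be thought of as a mapping from each outcome to a pool of rubric criteria that faculty select or adapt for their own course instances:

# Hypothetical criteria bank: outcomes map to candidate rubric criteria.
# All outcome and criterion text here is invented for illustration.
criteria_bank = {
    "Explain separation of powers": [
        "Identifies the three branches and their core functions",
        "Analyzes checks and balances using current examples",
    ],
    "Evaluate federalism debates": [
        "Distinguishes enumerated from reserved powers",
    ],
}

def build_course_rubric(bank, selections):
    """Assemble one course's rubric from (outcome -> criterion indexes)."""
    return {outcome: [bank[outcome][i] for i in picks]
            for outcome, picks in selections.items()}

# One instructor's instance keeps both criteria for the first outcome.
my_rubric = build_course_rubric(criteria_bank,
                                {"Explain separation of powers": [0, 1]})

The key design point is that every course instance draws from the same shared bank, which is what makes later cross-institutional comparison meaningful.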

Another possibility for this type of collaboration is the ability to provide institutions with a new option for the inter-institutional comparability requirements that often accompany professional program accreditor reaffirmations. Historically, the most common compliance method has been a standardized assessment measure along with student surveys like CCSSE or NSSE.

Imagine a future where you could define peer groups and then compare your students’ performance toward mastery on commonly defined learning outcomes against that of your peer institutions, and even against the very best institutions outside your peer group. My hope is that part of that future involves more inter-institutional collaboration among faculty and content providers like Pearson to create stronger, more effective curriculum that can prove its effectiveness.

While comparison on its own is interesting, an extension would be to consider external benchmarking, where an institution could identify peers whose students tend to outperform those of the home institution. Jeffrey Alstete of the George Washington University Graduate School of Education and Human Development explains that the benchmarking strategy can be used to improve teaching and learning by studying processes and practices at institutions that excel and then adapting their methods to the context of the home institution (1995).
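As a rough sketch of what that comparison might look like computationally (all institutions and mastery rates below are invented for illustration, not real data from any system):

# For each commonly defined outcome, find the peers whose mastery rate
# exceeds the home institution's. Hypothetical data and names throughout.
mastery = {  # fraction of students reaching mastery, by institution
    "Home College": {"critical_thinking": 0.62, "written_comm": 0.71},
    "Peer A":       {"critical_thinking": 0.74, "written_comm": 0.69},
    "Peer B":       {"critical_thinking": 0.58, "written_comm": 0.80},
}

def outperforming_peers(data, home="Home College"):
    """For each outcome, list peers whose mastery rate exceeds the home's."""
    return {outcome: [inst for inst, rates in data.items()
                      if inst != home and rates[outcome] > home_rate]
            for outcome, home_rate in data[home].items()}

print(outperforming_peers(mastery))
# {'critical_thinking': ['Peer A'], 'written_comm': ['Peer B']}

The output tells you where to go study someone else's practices, which is the whole point of the benchmarking strategy Alstete describes.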

A key value of benchmarking is that all institutions involved in the study expect something in exchange for participating in the project, even those institutions recognized as best in class (Management Consulting Partners, 2008). This is not a passive endeavor for any participant, so if you’re interested in benchmarking, it’s important to obtain support from senior leadership and to recognize that the effort will require a significant time investment.

Benchmarking is yet another strategy available to add to the assessment toolkit for higher education. We’re excited to engage with our partners to provide direct data on student mastery of learning outcomes and welcome your feedback on additional ways that we can support continuous improvement efforts on your campus.

Works Cited

Alstete, J. W. (1995). Benchmarking in Higher Education: Adapting Best Practices To Improve Quality. ERIC Digest. Retrieved May 7, 2012 from http://www.eric.ed.gov/PDFS/ED402800.pdf.

Management Consulting Partners. (2008). Applying Benchmarking to Higher Education. 1 (2). Retrieved May 7, 2012 from http://www.mcpartnersllc.com/download/Applying%20Benchmarking.pdf

Brian Epp | Assessment and Analytics Group Manager | Pearson eCollege

12 Apr 2012

Live from Pearson Cite 2012!

This is Rob Kadel, your on-the-scene reporter, coming to you live from the site of Cite! This is the Pearson Cite 2012 Conference, being held at the J.W. Marriott Grande Lakes in Orlando, April 10-13. Distinguished lecturers and speakers, presenters from some 65 Pearson Education partners, 500 attendees, and 200 Pearson employees have gathered together for four days of discussions and collaborations on online learning. And we’re loving it.

On Tuesday afternoon, Cite opened with a special, fun treat – an iBand composed of several Pearson employees (yours truly included) playing a medley of songs all from our iPhone and iPad instruments. Silly, yes, but we enjoyed getting the crowd revved up for the conference.

The highlight that afternoon, of course, was an excellent keynote presentation by Dr. Mark Milliron, an author and educational technology consultant currently working with Western Governors University. Dr. Milliron discussed technology as a solution for increasing college enrollments and matriculation, especially among those living in low-income households who need education to break out of the cycle of poverty. But he also challenged us to go further in our thinking, to recognize that simply fitting new technology into an old mold of education may not be the most effective way to deliver learning. We need new ideas about the actual structure of the educational experience to take advantage of technological tools and reach the students who need education the most.

With concurrent sessions focused on everything from mobile learning to assessment and analytics, there was no shortage of discussion around the trends in online higher education. Students want information not only when they need it, but also where they need it. And institutions are getting into a groove, recognizing the potential for data not only to describe their current students but to prescribe new directions for future cohorts. Dr. Marilee Bresciani’s keynote address on Wednesday took such discussions further to show us how outcomes-based assessment can help identify where true creativity and critical thinking are taking place.

On Thursday morning, Dr. John Medina treated us to a keynote presentation entitled Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School. Dr. Medina presented us with neurological research on how our brains actually process information as we learn and what the critical points are in instruction to ensure that students learn.

Dr. Medina Wows the Crowd at Pearson Cite 2012

Overall, it’s been a great conference and a great experience. I’m already looking forward to Pearson Cite 2013 in Chicago! (Look for additional information here in the coming months.) I hope to see you there!

--
Rob Kadel, Ph.D.
Academic Training & Consulting Manager
Pearson

29 Feb 2012

Actionable data for improving student learning and inter-institutional comparability – Can we have both?

An article titled Searching for the Holy Grail of learning outcomes from Inside Higher Ed (IHE) caught my attention last week. The article discusses the elusive quest for data that illustrate the value add provided by a student’s progression through a degree program at a particular institution.

Because the Collegiate Learning Assessment (CLA) purports to provide this value-added score, it is fast becoming the market standard, and the author of the article goes on to cite a number of reasons why this coalescence should concern us.

First, here’s some background in case you’re not familiar with the CLA. The largest market for higher ed accountability is undergraduate general education, where the focus is on things like a student’s ability to think critically, to read and write effectively, or to solve problems. As I summarized back in 2009, “we now have public accountability campaigns including the Voluntary System of Accountability (VSA), the University and College Accountability Network (U-CAN), and Transparency by Design which serve the public, private not for profit, and private for profit institution types respectively” (McKay Epp, 2009b).

Because the focus of the IHE article was on public institutions, the VSA is the accountability campaign that was highlighted. As background for those not familiar with the VSA, it

allows participating schools to choose among three assessment instruments that are administered to students with the goal being to indicate student proficiency in the areas of reading, writing, and critical thinking. One of these tests, The Collegiate Learning Assessment (CLA), actually tests a sampling of entering freshmen and graduating seniors and correlates student scores to incoming student SAT or ACT scores in an attempt to show the value add provided by an institution over the course of a degree program. (McKay Epp, 2009a).

One of the most interesting critiques of the CLA in the article is its high correlation with the SAT. Olds states that “the amount of variance in student learning outcomes after controlling for SAT scores is incredibly small” (2012). The author goes on to say that “Most institutions’ value-added will simply be in the expected range and indistinguishable from each other. Hence, why bother with the CLA” (Olds, 2012).
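A quick synthetic illustration of why that critique bites (the numbers below are simulated, not real CLA or SAT data): if an outcome score is largely determined by incoming SAT, adding anything else to the model barely moves the variance explained.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
sat = rng.normal(1050, 150, n)            # simulated incoming SAT scores
value_add = rng.normal(0, 5, n)           # simulated institutional effect
score = 0.9 * sat + value_add + rng.normal(0, 30, n)  # simulated outcome

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print(r_squared(sat, score))                                # ~0.95, SAT alone
print(r_squared(np.column_stack([sat, value_add]), score))  # barely higher

When SAT alone already explains roughly 95 percent of the variance, the residual room for distinguishing institutions is tiny, which is exactly Olds's point.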

While the author lists several alternatives to the CLA (which are worth reading), what I found most interesting was the discussion about the struggle institutions face to find data that create actionable insights for improving student learning. For nearly four years I’ve been working with institutions to help them implement Pearson’s Learning Outcome Manager (LOM), a technology-enhanced outcome management system.

LOM does an excellent job of providing actionable data to faculty and administrators on student performance against established learning outcomes for online and blended courses and programs. Because outcomes are associated with graded course assignments, it helps ensure that evaluators are seeing students’ best effort, and when done well it minimizes additional workload for faculty. The challenge is that LOM-generated data are so targeted to individual professors or to a particular course that the results can’t easily be used for inter-institutional comparability.

While I believe a majority of educators would agree that the most important reason to work in assessment is the desire to improve student learning, I also recognize that the demand for data on inter-institutional comparability will not go away. This article provides some interesting alternatives to standardized assessments such as the CLA which I think could work in tandem with data generated from systems like Pearson’s LOM to provide a win-win for the assessment community.

Works Cited

McKay Epp, B. (2009a). Implementing a Technology Enhanced Outcome Management Strategy on Campus that Produces Substantive Improvements in Curriculum, Instruction, and Student Learning. EDULEARN09, Barcelona. Retrieved from http://library.iated.org/view/MCKAYEPP2009IMP

McKay Epp, B. (2009b). Improving Student Learning: Thoughts and Reflections. Educator's Voice, 10 (3). Retrieved from http://www.pearsonecollege.com/Newsletter/EducatorsVoice/EducatorsVoice-Vol10Iss3.learn

Olds, K. (2012). Searching for the Holy Grail of learning outcomes. Inside Higher Ed. Retrieved February 20, 2012 from http://www.insidehighered.com/blogs/globalhighered/searching-holy-grail-learning-outcomes

Brian Epp | Supervisor, Assessment and Analytics Group | Pearson eCollege

11 Jan 2012

Try a Syllabus Quiz

I’ve been teaching online or in a hybrid format for about 12 years now. I’ve been teaching in the classroom for even longer. But when I teach online there’s always something that seems quite trivial that I actually miss. I like to refer to this as one-of-those-go-over-the-syllabus-days, and it’s usually the first day of class.

Of course, students find this pointless and boring, generally, but I’ve found that they do tend to pay attention. I don’t read the syllabus word-for-word, but I at least point out each section, what it means to them, and what they are required to know or do.

There’s a real purpose for this. By my way of thinking, a syllabus is like a contract. It’s a contract I make with students that says, “This is what I’m going to provide to you [knowledge and guidance] and in return, this is what you’re going to provide to me [effort, study, work products, etc.].” By doing this in class (in an on-ground course), I can ask if anyone has any questions. I can look for nodding heads or confusion on faces, and I can address any issues in class. And if I really want to be a stickler, I can have students sign a page stating that they have read and understand the syllabus.

Online, I don’t have that same kind of forum. I do use, for example, a Virtual Office. This is just a discussion forum that I’ve renamed, and I ask students to post any questions about the course there. They can then learn from each other’s questions without sending me the same question via email 20 times. (Although, if it’s a question about a grade or other individual work in the course, then I ask them to email me.) So, if students have questions about the syllabus, they can ask them there.

That isn’t good enough, I’ve found. I still have students come to me saying that they didn’t realize such-and-such and that it would affect so-and-so. So, I also like to include a syllabus quiz. I write up 10 or 15 questions based on the information provided in the syllabus. They are specific questions, e.g., “Which of the following are our course objectives?” and I throw some irrelevant answer choices in there to make sure that they’ve read the course objectives before they can answer that question correctly. And, if I have assignments of different point values, I add a matching question: “Match the assignment on the right to its respective point value on the left.” That way, students will realize that, yes, the final research paper does make up one-third of their grade, and therefore, it is important.

In effect, this is like asking the students to sign off that they have read and understand the syllabus. Heck, I could use a syllabus quiz in my on-ground courses, too. In online courses, it’s particularly important.

In Pearson Learning Studio, I can also enable the Path Builder tool (found under Course Admin --> Enable/Disable Tools), and then use the Course Scheduler link to access Path Builder. I can use Path Builder to “gate” the rest of the course content around the syllabus quiz. If students don’t pass the syllabus quiz, they can’t even move forward to the first Unit. No Christmas-treeing that quiz!
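Conceptually, the gate is just a conditional-release rule. Here’s a tiny sketch of that logic in generic terms (this is not the Path Builder API, and the passing threshold is made up for illustration):

PASSING_SCORE = 80  # hypothetical passing threshold, in percent

def unlocked_units(syllabus_quiz_score, units):
    """Release course units only after the syllabus quiz is passed."""
    if syllabus_quiz_score is None or syllabus_quiz_score < PASSING_SCORE:
        return []        # nothing opens until the quiz is passed
    return list(units)   # once passed, the whole course is available

print(unlocked_units(65, ["Unit 1", "Unit 2"]))  # []
print(unlocked_units(90, ["Unit 1", "Unit 2"]))  # ['Unit 1', 'Unit 2']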

Consider using a syllabus quiz in your own course. It might just make the difference between a smooth-running course and a bunch of Hey-I-didn’t-know-that! excuses later.

-------------------

Rob Kadel, Ph.D.
Supervisor, Academic Training & Consulting
Pearson eCollege

7 Dec 2011

Up In Lights

I just got back from delivering a keynote address in Berlin at Online Educa.  It was an amazing experience.  Not only was the conference packed with over 2,000 people, but the city of Berlin was quite breathtaking this time of year.  Everywhere you look in Berlin there is some kind of Christmas decoration, tradition, or ornamentation.  People gather at the Christmas markets to drink Gluehwein (a spiced, mulled wine that smells delicious) and sales abound in the shopping areas.

So as I was walking through one of the markets with some friends, I thought back to the decorating of my own tree just a few weeks ago, which led to thoughts of…instructional design!  (Seriously, I need a break.)  With a four-year-old, Christmas came early this year, and we had our tree up on Thanksgiving Day!

But the lights on the tree, specifically, were quite an ordeal.  Actually, they still are.  See, last year we bought a new tree.  We took our daughter down to the “St. Nick’s” Christmas store (no joke) and asked for a guided tour of the new trees.  While the trees look amazingly real, they ALL – 100% – had a major flaw.  It was impossible to buy a tree without pre-strung lights!  And not just pre-strung, but all white lights.  Ugh.

Of course, I get why they do it.  Most people hate lighting the tree.  It’s time consuming, you end up missing spots, and the only thing worse than getting the lights on is taking them off.  But I knew then what proved true this year: pre-lit trees are not what they appear to be.  See, this year, EXACTLY what I asked the sales-elf about last year happened:

ME: “What happens if a light goes out?”
ELF: “That hardly ever happens!”
ME: “Okay, but what if it does?”
ELF: “Well, the lights aren’t connected like they used to be.  If one goes out, it doesn’t affect the others, it just goes out.  You can replace it or leave it, but the rest of the lights will shine.”
ME: “Riiiiiiiight….”

You can probably see where I’m going with this.  This year, just as I suspected, we got the tree up, plugged it in, and yep, you guessed it – the entire middle of the tree was black.  So I got to spend about an hour finding, unplugging, and re-plugging new lights into the old sockets, hoping each one would light the strand back up.  (I never got more than 4 in a row to light up with any new bulb…)

Alright, enough about my holiday nightmare.  So what does this have to do with Instructional Design?  Well, as I stood there checking bulb after bulb, I realized that some schools are taking this approach to their online courses.  The premise is simple:  Most instructors don’t have any education around teaching.  Instructional designers know how to design quality courses.  So, create a course with a group of designers and let a dozen different faculty teach it.  Done and done!

But, of course, the analogy then starts to take over.  What if you allow instructors to change the course?  Some of those new courses will be awesome – amazing even!  Others will be like a darkened bulb, bringing down the outcomes average for the department.  What if it’s a blinking-strand kind of course?  In other words, what if it has all kinds of whiz-bang media and social interaction?  The answer there is that most faculty would need a boatload of instruction just to teach it.  (This is why most standardized courses don’t have cool stuff…they just have text, pictures, and some videos.  It’s easier to deliver, even though it’s not nearly as engaging for students.)  This straightforward approach to designing mass clusters of courses would be the equivalent of an all-white tree.  Guess what?  I don’t WANT an all-white tree.  That’s why last year I spent about 3 hours going through and changing out 4 out of 5 bulbs to a color.  I want color.  I LIKE color.

Ok, you say – so let’s not use instructional designers.  Let’s let faculty design all of their own courses!  Guess what you get then?  You’ll get some lights perched perfectly on the limbs.  They will be unobtrusive, casting a healthy glow from the inside of the tree, almost as if the tree itself is on fire.  But you’ll also get…well, you’ll get the Griswold tree too.  You’ll get lights that look as if they were flung on the tree by a four-year-old with a slingshot, appearing as if they may fall off at any minute.  You will get some bulbs that are significantly dimmer than others.  You’ll get 5 reds in a row.  You’ll get classes that have nothing but text and no interaction with the professor except for an occasional rant and the final, posted grades at the end of term.

See, I’ve said it before and I’ll say it again.  There HAS to be a better way.  There has to be a healthy mix of instructional design, subject matter expertise, and personal touches that allow a class to be unique, engaging, and a quality experience in terms of assessment.  The school that figures out how to truly mix sound pedagogy with effective delivery and authentic assessment in a media rich, social environment will rule the world.

But until then, we’ll have to take it one light at a time.  We’ll have to create the best possible bulb selection for our trees or try to create at least tri-color trees that are uniform.  But one day…it will be different.

Oh, by the way, when I landed in Germany my daughter got on the phone.  She just HAD to tell me something.

ME: “Hey Peanut!”
ADDIE: “Hi Daddy.”
ME: “What’s going on, sweetheart?”
ADDIE: “The middle of the tree is dark again Dad…”
ME: Guttural moaning...

Happy holidays, and may your light shine brightly in whatever educational environment you teach.  Good luck and good teaching.

Dr. Jeff D Borden
Sr Director of Teaching & Learning

2 Nov 2011

The Buzz on Assessment

I had the pleasure of attending the 2011 Assessment Institute in Indianapolis this week. The conference is the nation’s oldest and largest event focused exclusively on outcomes assessment in higher education. Administrators, faculty, and student affairs professionals convened to discuss techniques and approaches across outcomes assessment areas. This year, the event featured tracks on Capstone Experience, ePortfolios, and Faculty Development, among others.

I’d like to share a few of the recurring themes I heard and will take with me from the keynotes, workshops, and best-practice sessions. Specifically, I will share three themes and considerations, which may serve as a marker for some of the noteworthy issues in the higher education outcomes assessment landscape.

The first two themes are linked in both process and practice, so I will identify them together: 1) faculty engagement and 2) using results to inform improvement processes. For those of us who have been doing outcomes assessment for any extended period of time, these themes may echo many of the questions and issues, as well as the successes, we have faced.

The engagement of faculty in the assessment process is certainly not a new issue in the practice of assessment. Notwithstanding, faculty engagement in the process of outcomes assessment is a reality many institutions are still striving, even stretching, to achieve. The shared understanding among practitioners gathered at the event appears to be a resounding confirmation that faculty engagement in the assessment process is a critical component of successful assessment. In her 2010 paper entitled “Opening Doors to Faculty Involvement in Assessment,” Pat Hutchings wrote:

“As Peter Ewell (2009) points out in another NILOA paper, from its early days in higher education, assessment was “consciously separated from what went on in the classroom,” and especially from grading, as part of an effort to promote “objective” data gathering (p. 19). In response, many campuses felt they had no choice but to employ external tests and instruments that kept assessment distinct from the regular work of faculty as facilitators and judges of student learning. In fact, the real promise of assessment—and the area in which faculty involvement matters first and most—lies precisely in the questions that faculty, both individually and collectively, must ask about their students’ learning in their regular instructional work: what purposes and goals are most important, whether those goals are met, and how to do better. As one faculty member once told me, “assessment is asking whether my students are learning what I am teaching.”

Further, the notion was submitted that seeking faculty engagement should not be seen as a one-time achievement but as an ongoing and evolving effort that characterizes a campus assessment strategy. Inasmuch as the issue is not a new one for assessment, the shared sentiment among conference participants is that garnering this engagement remains a key dynamic and, often, a great challenge. Several presenters encouraged the institutions represented at the conference to engage in cross-institutional dialogue to share strategies on how to foster a deeper degree of faculty engagement.

The second recurring theme centers on a question of the value, strategy, and purpose of assessment efforts: what’s it all for? Assessment is hard work. And the growing sentiment appears to be a desire to see campus assessment efforts translate into actual impact on student learning, beyond the collection of data and documentation for accreditation and/or certification. This pull for results that impact student learning is a call to move beyond data collection and assessment planning to the informed and strategic improvement of teaching and learning based on the data. To make assessment more useful, we must include within our strategy an intentional approach to leveraging data and documentation to help bridge the gap between our current and improved realities. This process must be ongoing. And it undoubtedly must include faculty.

Finally, the third takeaway comes in the form of a resource. The National Institute for Learning Outcomes Assessment (NILOA) had a strong presence at the 2011 Assessment Institute. Several of the organization’s staff and associates were keynote presenters, and they include a notable group of internationally recognized experts on assessment. NILOA presenters pointed conference participants to what they called the ‘crown jewel’ of the organization’s efforts: a recently enhanced and robust website featuring a collection of papers, articles, presentations, websites, and survey results compiled in alignment with the organization’s vision for discovering and adopting promising practices in the assessment of college student learning outcomes. Reviewing the website will quickly reveal its valuable contribution to the field of assessment and to current issues, including those I’ve highlighted from the conference. Take a moment to explore this great resource by visiting www.learningoutcomeassessment.org.

It was certainly a rich experience to attend the conference and have the opportunity to share with institutions and hear the collective voice of higher education assessment practitioners.

Rachel Cubas
Academic Trainer & Consultant
Assessment & Analytics Group | Academic Training & Consulting (ATC)

References

Hutchings, P. (2010). Opening doors to faculty involvement in assessment. National Institute for Learning Outcomes Assessment.