Online Blogucation
26Sep/12

Flipping The MOOC?

140,000 students in a single course?  C'mon...there's no way!  Or is there?  A LOT of people have taken notice of MOOCs (Massive Open Online Courses) in the past few months.  And when I say people, I mean highly positioned, well-respected, very powerful people in the education sector.  People like Presidents, CEOs, Provosts, etc., of places like Harvard, MIT, Stanford, Carnegie Mellon, and many more have at least publicly inquired about MOOCs, if not actually started programs of their own.

A respected colleague of mine who talks almost exclusively to C-level educators put it simply but effectively, "...the genie is out of the bottle."

Of course, this is exciting.  Anything to further the discussion around eLearning is wonderful from my perspective.  The ridiculous, antiquated, fallacious arguments about leaving teaching and learning alone are growing tiresome.  So anything that promotes the use of technology to enhance and augment learning is a powerful thing.  But with that in mind, and as an "early-adopter" much of the time, my next statement might surprise you.

We need to blow up the MOOC.

No, not blow up as in destroy.  Blow up as in, let's get to v2 as fast as humanly possible because v1 is NOT a good poster child for online education.  Why?  Simple.  Today's MOOC takes many of the worst elements of teaching, instruction, assessment, etc., and simply presents them over the Internet.  For instance:

Lectures - A big name in both the MOOC world and his discipline (aka, the smartest guy in ANY room) was describing the process he used to create his MOOC.  He said, "I was shocked when I started researching ways to disseminate information to find that lecturing is actually a really bad way to present information.  I have been lecturing for over 40 years and didn't know that..."  And yet, this great scholar and innovator did exactly that in his MOOC.  He simply recorded himself lecturing, put it on YouTube, and tied it to his MOOC.  Eric Mazur talks about a fantastic study he did at Harvard where students had their brains continually monitored for a week.  EVERY single student had similar brain patterns with regard to class (lecture) time.  Their brain waves were almost completely flat.  That's right - no activity.  The only other time in the week their brains were that inactive?  When watching TV.  Even when sleeping, the human brain is more active than during a lecture.  And yet the lecture is still the predominant means of "teaching" students today.  So, if MOOCs are to "change the world" for the better...we have to figure out how to incorporate much better ways of teaching and learning through them.

Learning - What is learning, really?  Isn't it the acquisition of information and then the assimilation of that information?  If we agree that it is, at its core, those two things, then I would bet we could also agree which of the two is harder.  Acquisition is the easy part - information can be disseminated through a book, a lecture, etc.  The HARD part is actually making sense of it in a contextual, meaningful, connected way.  Yet for decades (if not centuries) educators have performed the easy part, while leaving the hard part to students.  (Actually, to students who are alone, at home, with only a book...)  The flipped classroom, which is a remixed way of talking about what educational psychologists have known for decades, is finally starting to shine a light on the notion that the hard conversations should take place in class, while the dissemination activities happen at home.  MOOCs, as they exist today, do not even approach this.

Assessment - We can create objective tests that are automatically graded and start to identify what a student does or does not understand.  In fact, a few MOOCs in the past month have finally started to do just that.  (This is why the very first MOOCs were not taken seriously - they really had little to no meaningful assessment.)  However, even with such heavy reliance on standardized assessments in our universities today, most professors still agree that much of the way we know whether our students "get it" is through interaction, conversation, dialogue, and transference of ideas.  This can happen in discussions (before, during, and after class), as well as through ideas presented in papers, etc.  However, the only real way to even approach this in a MOOC is through peer review and peer assessment.  And that is a tough one for a lot of people.  For example, I recently took a Udacity MOOC on statistics.  I had the opportunity to join a discussion group with others from the class, which I found purely by happenstance.  It was a study group of sorts.  However, after asynchronous discussions with about 10 peers, I soon realized that I was likely the most knowledgeable person in our group when it came to statistics.  (My mother and father are giggling right now...)  In other words, nobody else had much of value to bring to the table.  Social learning is indeed a powerful thing, but without what Vygotsky would call the "More Knowledgeable Other" in the group, it starts to break down quickly.  MOOCs could rely solely on high stakes, standardized, auto-graded tests, but again, that would simply perpetuate a bad practice from face to face teaching in the online realm.

There are others here, but I think you get my point.  The MOOC as it exists today, with millions of dollars being poured into figuring out how, when, and where to use it, needs a quick overhaul.  I am hopeful that this will happen sooner rather than later, while the MOOC hasn't yet become an "institution" to anyone.  Hopefully nobody is so tied to the notion of something that didn't really even exist until less than a year ago that they can't retool, reconfigure, and rethink the MOOC.  Because the MOOC has tremendous possibility.  Delivering global education at scale with ties to real-world competencies...that could be a game changer.  So let's make sure we get it right.  Let's flip the MOOC.

Good luck and good teaching.

23May/12

Proactive plagiarism prevention

Many of us don’t think about academic dishonesty until we are confronted with it. But why not be proactive and try to prevent academic dishonesty in the first place? Here are several proactive ways to prevent plagiarism or cheating, starting with planning your course and then considering important communication to have with students.

Course Planning
To help reduce plagiarism or cheating, here are several avenues to think about when planning your course:

  • Plan for multiple small-stakes assignments, instead of one larger assignment worth a significant portion of the grade. This reduces the incentive to cheat and also makes it logistically harder for students to purchase papers or have someone else do the work for them.
  • Scaffold assignments so that multiple pieces build on one another throughout the semester. This helps you see the assignment develop and learn the student’s voice, so you are better able to spot a final product that doesn’t fit with the previous work.
  • Create new assignments each term. This reduces the chance that work can be resubmitted term after term.
  • Create unique assignments that students are less likely to be able to find directly on the internet. For example, Google your assignments: if you can find resources that directly address the topic, then your students can as well.
  • Finally, I won’t discuss exams in detail here, but this article is a good place to start for more information on how to prevent cheating in online exams.

Clearly communicate expectations to students
Another important way to set the stage is to proactively communicate your expectations to students. Here are some specific areas to focus on:

  • Make sure you have clearly written policies – AND penalties for what happens if those policies are not followed – in your course syllabus, and repeat them in other places in the class (such as announcements, introductory areas for the assignments or exams, etc.).
  • Educate your students on what plagiarism (or cheating) is, and what behaviors are or are not OK in your class. Some students don’t know when it’s OK to work with other students and when it’s not, and there may even be differences between their classes on this point. They may not know how or when to cite sources, or why it’s not OK to cut and paste from the internet. You could have an introductory discussion around plagiarism or academic integrity, or refer students to the many relevant resources online. For example, a fun game to check out is the “Goblin Threat” plagiarism game by Lycoming College.
  • Discuss your institution’s academic honesty policy with students. Here’s an interesting finding: “Students cheat. But they cheat less often at schools with an honor code and a peer culture that condemns dishonesty” (McCabe and Trevino). Other important aspects of this finding include the institution clearly communicating that academic integrity is a top institutional priority, and also students having a role in the judicial processes evaluating alleged infringements. You alone can’t change the institutional culture to make these things happen, but you can make sure to discuss any existing policy with your students and let them know that you expect it to be upheld. You could also do an assignment where students “sign” an academic integrity contract with you at the beginning of class.

Of course there is no guarantee that these efforts will prevent all attempts at academic dishonesty. However, they should help reduce the frequency. So try to work these items in the next time you revise your class, and post a comment on how it goes (or other thoughts on this topic)!

– Gail E. Krovitz, Ph.D. –
Director of Academic Training & Consulting

Sources referenced:

Krovitz G. 2007. Ways to prevent cheating in online exams. Educator’s Voice 8(6). Accessed online at http://www.ecollege.com/Newsletter/EducatorsVoice/EducatorsVoice-Vol8Iss6.learn

Lycoming College. Goblin Threat Plagiarism Game. http://www.lycoming.edu/library/instruction/tutorials/plagiarismGame.aspx?goback=.gde_52119_member_106954972

McCabe D. and L.K. Trevino. 2002. Honesty and honor codes. Academe January-February. Accessed online at: http://www.aaup.org/AAUP/pubsres/academe/2002/JF/Feat/mcca.htm

9May/12

Inter-Institutional Collaboration Project

At the Pearson Cite conference several weeks ago I met with representatives from several colleges who were interested in piloting a Pearson CourseConnect Analytics Edition (CoCo AE) course. The Analytics Edition versions are currently intended for LearningStudio customers who are also using the Learning Outcome Manager (LOM) tool to manage and track student progress toward mastery on learning outcomes.

CoCo AE courses come with student learning outcomes pre-mapped to presentation content and assignments. This mapping also includes assessment rubrics, which have both content and assignment type criteria. Pearson’s course authors wrote performance level descriptors for the assignment type rubrics, which can be modified if desired. The course design team decided it would be best to allow individual instructors to define their own performance level descriptors for the content type criteria (see the rubric example below from the American Government course).

Sample CoCo Analytics Edition American Government Outcome Rubric

Part one of the collaboration will be to get teaching faculty together from participating institutions to work on group authorship of the content type rubrics. We’ll create criteria banks by outcome that all faculty can choose from or adapt for their own instances of a course. Colleges will then run the selected CoCo AE course in their Winter/Spring 2013 terms. For the second collaboration component we intend to work on a data sharing project that will allow peer institutions to see anonymized data on aggregated student performance against commonly taught outcomes. Our hope is to present this project at Pearson Cite 2013 in Chicago.
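To make the second component concrete, here is a minimal sketch of how anonymized, aggregated outcome performance might be pooled for peer comparison. The institution names, scores, and 0-4 rubric scale are invented for illustration and do not reflect Pearson’s actual implementation:

```python
from statistics import mean

# Hypothetical per-institution rubric scores (0-4 scale) for a
# commonly taught outcome, keyed by real institution name.
raw = {
    "Alpha College":   {"critical_thinking": [3.1, 2.8, 3.4]},
    "Beta University": {"critical_thinking": [2.5, 2.9, 2.7]},
}

# Anonymize: replace names with opaque labels, then aggregate by outcome
# so peers see only pooled averages, never students or named campuses.
shared = {}
for i, (institution, outcomes) in enumerate(sorted(raw.items()), start=1):
    shared[f"Institution {i}"] = {
        outcome: round(mean(scores), 2) for outcome, scores in outcomes.items()
    }

print(shared)
# {'Institution 1': {'critical_thinking': 3.1},
#  'Institution 2': {'critical_thinking': 2.7}}
```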

Another possibility for this type of collaboration is the ability to provide institutions with a new option for meeting the inter-institutional comparability requirements that often accompany professional program accreditor reaffirmations. Historically, the most common compliance method has been a standardized assessment measure along with student surveys like the CCSSE or NSSE.

Imagine a future where you could define peer groups and then compare your students’ performance toward mastery of commonly defined learning outcomes against that of your peer institutions, and even against the very best institutions outside your peer group. My hope is that part of that future involves more inter-institutional collaboration among faculty and content providers like Pearson to create stronger, more effective curriculum that can prove its effectiveness.

While comparison on its own is interesting, an extension would be to consider external benchmarking where an institution could identify peer(s) whose students tend to outperform those of the home institution. Jeffrey Alstete from the George Washington University Graduate School of Education and Human Development explains that the benchmarking strategy can be used to improve teaching and learning by studying processes and practices at institutions that excel and then adapting their methods to the context of the home institution (1995).

A key value of benchmarking is that all institutions involved in the study expect something in exchange for participating in the project, even those institutions recognized as best in class (Management Consulting Partners, 2008). This is not a passive endeavor for any participant, so if you’re interested in benchmarking, it’s important to obtain support from senior leadership and to recognize that this effort will require a significant time investment.

Benchmarking is yet another strategy available to add to the assessment toolkit for higher education. We’re excited to engage with our partners to provide direct data on student mastery of learning outcomes and welcome your feedback on additional ways that we can support continuous improvement efforts on your campus.

Works Cited

Alstete, J. W. (1995). Benchmarking in Higher Education: Adapting Best Practices To Improve Quality. ERIC Digest. Retrieved May 7, 2012 from http://www.eric.ed.gov/PDFS/ED402800.pdf.

Management Consulting Partners. (2008). Applying Benchmarking to Higher Education. 1 (2). Retrieved May 7, 2012 from http://www.mcpartnersllc.com/download/Applying%20Benchmarking.pdf

Brian Epp | Assessment and Analytics Group Manager | Pearson eCollege

29Feb/12

Actionable data for improving student learning and inter-institutional comparability – Can we have both?

An article titled Searching for the Holy Grail of learning outcomes from Inside Higher Ed (IHE) caught my attention last week. The article discusses the elusive quest for data that illustrate the value add provided by a student’s progression through a degree program at a particular institution.

Because the Collegiate Learning Assessment (CLA) purports to provide this value-added score, it is fast becoming the market standard, and the author of the article goes on to cite a number of reasons why this coalescence should concern us.

First, here’s some background in case you’re not familiar with the CLA. The largest market for higher ed accountability is undergraduate general education, where the focus is on things like a student’s ability to think critically, to read and write effectively, or to solve problems. As I summarized back in 2009, “we now have public accountability campaigns including the Voluntary System of Accountability (VSA), the University and College Accountability Network (U-CAN), and Transparency by Design which serve the public, private not for profit, and private for profit institution types respectively” (McKay Epp, 2009b).

Because the focus of the IHE article was on public institutions, the VSA is the accountability campaign that was highlighted. As background for those not familiar with the VSA, it

allows participating schools to choose among three assessment instruments that are administered to students with the goal being to indicate student proficiency in the areas of reading, writing, and critical thinking. One of these tests, The Collegiate Learning Assessment (CLA), actually tests a sampling of entering freshmen and graduating seniors and correlates student scores to incoming student SAT or ACT scores in an attempt to show the value add provided by an institution over the course of a degree program. (McKay Epp, 2009a).

One of the most interesting critiques of the CLA in the article is the high correlation between it and the SAT. Olds states that “the amount of variance in student learning outcomes after controlling for SAT scores is incredibly small” (2012). The author goes on to say that “Most institutions’ value-added will simply be in the expected range and indistinguishable from each other. Hence, why bother with the CLA” (Olds, 2012).
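To see why such a high correlation undercuts the value-added story, here is a minimal sketch of the residual-style reasoning behind a value-added score, using invented numbers and a simple least-squares fit rather than the CLA’s actual methodology:

```python
import numpy as np

# Invented institution-level averages: incoming SAT scores and
# senior-year scores for six hypothetical institutions.
sat    = np.array([1050, 1120, 1200, 1280, 1350, 1430])
senior = np.array([1080, 1170, 1230, 1310, 1360, 1470])

# Predict senior scores from SAT alone with a least-squares line.
slope, intercept = np.polyfit(sat, senior, 1)
predicted = slope * sat + intercept

# "Value added" as the residual: what is left after controlling for SAT.
# If SAT explains almost all the variance, these residuals are tiny and
# noisy, and institutions look statistically indistinguishable.
value_added = senior - predicted
print(value_added.round(1))
```

That is the heart of the critique: when the controlling variable does nearly all the predictive work, the leftover “value added” has little room to distinguish one institution from another.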

While the author lists several alternatives to the CLA (which are worth reading), what I found most interesting was the discussion about the struggle institutions face in finding data that create actionable insights for improving student learning. For nearly four years I’ve been working with institutions to help them implement Pearson’s Learning Outcome Manager (LOM), a technology-enhanced outcome management system.

LOM does an excellent job of providing actionable data to faculty and administrators on student performance against established learning outcomes for online and blended courses and programs. Because outcomes are associated with graded course assignments, it helps ensure that evaluators are seeing students’ best effort, and when done well it minimizes additional workload for faculty. The challenge is that LOM-generated data is so targeted to individual professors or to a particular course that its results can’t easily be used for inter-institutional comparability.
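As a toy illustration of what “actionable” means here (invented data and mastery threshold; not LOM’s actual data model), outcome-tagged grades let an instructor see at a glance who needs help on which outcome:

```python
# Hypothetical mastery threshold on a 0-4 rubric scale.
MASTERY = 3

# Each graded assignment entry is tagged with the outcome it assesses.
gradebook = [
    ("Ana", "written_communication", 4),
    ("Ana", "critical_thinking",     2),
    ("Ben", "written_communication", 3),
    ("Ben", "critical_thinking",     3),
]

# Roll the grades up by student and flag outcomes still below mastery.
needs_help = {}
for student, outcome, score in gradebook:
    if score < MASTERY:
        needs_help.setdefault(student, []).append(outcome)

print(needs_help)  # {'Ana': ['critical_thinking']}
```

It is exactly this course-level granularity that makes the data useful to an individual instructor and, at the same time, hard to compare across institutions.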

While I believe a majority of educators would agree that the most important reason to work in assessment is the desire to improve student learning, I also recognize that the demand for data on inter-institutional comparability will not go away. This article provides some interesting alternatives to standardized assessments such as the CLA which I think could work in tandem with data generated from systems like Pearson’s LOM to provide a win-win for the assessment community.

Works Cited

McKay Epp, B. (2009a). Implementing a Technology Enhanced Outcome Management Strategy on Campus that Produces Substantive Improvements in Curriculum, Instruction, and Student Learning. EDULEARN09, Barcelona. Retrieved from http://library.iated.org/view/MCKAYEPP2009IMP

McKay Epp, B. (2009b). Improving Student Learning: Thoughts and Reflections. Educator's Voice, 10 (3). Retrieved from http://www.pearsonecollege.com/Newsletter/EducatorsVoice/EducatorsVoice-Vol10Iss3.learn

Olds, K. (2012). Searching for the Holy Grail of learning outcomes. Inside Higher Ed. Retrieved February 20, 2012 from http://www.insidehighered.com/blogs/globalhighered/searching-holy-grail-learning-outcomes

Brian Epp | Supervisor, Assessment and Analytics Group | Pearson eCollege

14Feb/12

Do As I Say, Not As I Do

Conference attendees sleeping

These people still clapped at the end of the session...

How low is your conference bar set these days?  What does it take to make your experience "worth it" anymore?  Is it 1 good keynote and 1 good session?  Is that enough?  Maybe it's a solid pre-conference workshop and two good sessions.  Or is it even less?

I go to 30-40 conferences (led by educators) each year.  Typically I present a keynote address, a few workshops, or possibly a pre-conference session, but I certainly have plenty of time to see and hear a lot of other presenters.  This also means that I end up eating lunch or an occasional dinner with dozens, if not hundreds of strangers.  So, I've been doing some research around the gambling that takes place at conferences. No, not dice in the back of the kitchen or inviting strangers back to a hotel room...(Those are the tech conferences.)  I'm talking about the conference session roulette that everyone takes part in.  Come on lucky session #4...daddy needs a new educational game!

Some conference attendees "double down" on their bets.  Good move.  I watch as more and more often, session participants sit in the back of the room.  They give the presenter(s) about 3 minutes to "hook" them.  If there is no "hook" then out the door and off to another session they go!  Two-for-one sessions - nice!  And most conference presenters are making the gamble a hard one to win, too.  It seems that the "catchy title" is the order of the day, regardless of whether or not the session will actually provide value.  Sprinkle in Web 2.0, or YouTube, or Serious Games and you've got a session title that will make people do a double take!  Come on Serious Games for YouTube and Facebook via Web 2.0 in the Classroom...Daddy needs a new assessment idea!

In my extremely unscientific research, here is what I looked for.  Great sessions (regardless of the identified mode) were the quarry - keynote addresses, workshops, pre-conference, poster, and panel sessions were all fair game.  I looked for a few simple indicators to determine a successful presentation.

  1. Great content - this is usually determined by the "buzz" after the session and often corresponds with the number of questioners who stick around to talk.  (My personal research seems to indicate that 3 people will stick around regardless of how good or bad a session is.)  This also includes "buzzing" conversations that follow the session to lunch.
  2. Great presenters - these are definitely harder to find, but my indicator here was pretty simple.  Who, or better, how many (in the audience) were paying attention to the presentation?
  3. Great interaction - this one is tough for me.  A lot of conferences are demanding audience "participation" these days.  My problem?  Often the audience members are not subject matter experts; they are simply professors who enjoy sharing their opinions (which is why we're professors, right?) or, worse, they simply want to play devil's advocate throughout the session.  So, in both of those situations, other audience members come away feeling like the session was useless.  However, when interaction with multiple audience members takes place regularly (not simply because an audience member forced a question in), it should be noted.

So, after months of tallying on my iPad or iPhone - I love you, Evernote - I have some informal numbers.  This is from 22 conferences and 103 sessions, and includes a lot of conference-goers...I have no idea how many.  I should also mention that if I didn't attend a presentation, but simply heard about it after the fact, it was not included here.  (I wonder sometimes if those conversations are legitimate...it's like the guy in high school who was always trying to convince you the swimsuit models showed up to every party JUST after you left...)  Anyway, here you go:

  • 92/103 sessions had poor content, which means 11 sessions had great content.
  • 99/103 sessions had poor presenters, which means 4 sessions had great presenters.
  • 99/103 sessions had no audience interaction, which means 4 sessions had great interaction.
  • 2 sessions had both a great presenter AND great content (although no interaction).

For those of you scoring at home, that does not even begin to approach an 'F'.  Even in aggregate, at most 17 of the 103 sessions (11 + 4 + 4, minus the 2 counted twice) had anything great about them - roughly 16%.  The rest were...well, quite frankly, they were pretty bad.

Conference attendees paying attention to everything but the speaker

At least I got all of my email answered during this session

Let me give you one fresh example from a conference I attended in December.  There were 75-100 people in the lecture-style, tiered room.  I was in the very back, at the top, looking down on the presenters and audience members.  (I was preparing for my own session, which was next, in that same room.)  Let me describe for you the middle row of about 25 people.

  • 3 were visibly asleep
  • 4 were checking email on their laptops
  • 6 were checking sports sites - mostly fantasy football on their laptops
  • 10 were using their phones (texting for help perhaps?)
  • 1 was writing on a notepad
  • 2 were passing notes back and forth to each other

It doesn't seem to matter what the topic is, what kind of conference it is, or who the speaker / audience members are - these sessions don't seem to be very helpful.  When I attended my own discipline's Communication conference last year, with people who explain to college students how to effectively communicate a message, there was no difference.  When I went to a K-12 conference with teachers who certainly need more energy and enthusiasm to reach younger people, it was no different.  When I went to international conferences, it was no different.  (In fact, it was often worse, as many of those conferences are made up of "conference papers" - essentially a person sitting in front of the audience reading a research paper out loud...seriously.)

OK...so, enough of the agonizing landscape.  You get it.  In fact, many of you are probably starting to develop a twitch as I've reminded you of things you would prefer to forget.  But here is my big question.

Why is it a surprise that education is having such trouble reaching students?

Apparently, we (educators) have a difficult time communicating with each other.  How can we possibly expect to communicate effectively with students who are one, two, and sometimes three generations younger than we are?  Why don't we apply what we know works?  Why don't we use what we know to be helpful?

Tell, Show, Do, Review, and Ask in a multi-modal, multi-nodal way and we'll reach a LOT of people.  Why don't we do that?  Use ethos, pathos, logos, and mythos (if you're dying to think about it old-school) and we'll reach a LOT of people.  Why don't we do that?  Incorporate serious games, focus shifts, multimedia, and interactive strategies and we'll reach a LOT of people.  Why don't we do that?

I truly believe that we are our own enemy here.  I KNOW that there are some really creative, innovative, strategic instructors out there who are doing great things...but when they get to a conference to share it, they get very uptight.  The idea of presenting to peers is quite intimidating for many, so those ideas never really get a chance to shine.

Then, there are the conference submission boards who miss out on great stuff.  They don't seem to read or review survey results from previous conferences, which would let them give preference to people who earn super positive comments and have thereby illustrated that they have great content, are great presenters, or include interaction effectively.  I watched a professor at Online Educa Berlin present a fantastic workshop on rubrics.  She was poised, dynamic, and her content was top notch.  When I told her that she should give that session at some conferences back in the USA, she explained that she had tried over a dozen times and never got accepted.  Something about the presentation just wasn't "sexy" enough for the committees, even though I watched her knock it out of the park in Germany.

So let me finish with this.  Let's change the way conference presentations currently run.  Let's all take a pact.  When we're given the opportunity to share our clever, creative, innovative, effective, or useful ideas from our classes with our colleagues...let's not blow off preparing until the plane ride.  Let's not forget what goes into a good presentation - effective nonverbals, logical reasoning, and passionate verbals.  Let's include some of the "cool" factor when we can, to illustrate the concept.  Let's not forget the power of story.  Let's agree to NEVER, EVER, under ANY circumstances READ our notes or (worse) READ our PowerPoint to the audience again!

We can do this.  It's not like we don't know how audiences respond most effectively.  We know that the lecture is one of the poorest ways we can communicate if we want our audience to retain, comprehend, and be engaged.  We KNOW what it takes.  So, let's just change it.  Yes, that simply, let's change our conference behavior.  Let us never again imply that what we say and what we do are not supposed to be joined at the hip.

Good luck and good teaching...and good conference-going!

(BTW - did anyone notice the ironic metaphor for education here?  Boring lectures, audience members not paying attention, little audience interaction, etc?  Hmmm...I guess that's another blog.)

11Jan/12

Try a Syllabus Quiz

I’ve been teaching online or in a hybrid format for about 12 years now. I’ve been teaching in the classroom for even longer. But when I teach online there’s always something that seems quite trivial that I actually miss. I like to refer to this as one-of-those-go-over-the-syllabus-days, and it’s usually the first day of class.

Of course, students find this pointless and boring, generally, but I’ve found that they do tend to pay attention. I don’t read the syllabus word-for-word, but I at least point out each section, what it means to them, and what they are required to know or do.

There’s a real purpose for this. By my way of thinking, a syllabus is like a contract. It’s a contract I make with students that says, “This is what I’m going to provide to you [knowledge and guidance] and in return, this is what you’re going to provide to me [effort, study, work products, etc.].” By doing this in class (in an on-ground course), I can ask if anyone has any questions. I can look for nodding heads or confusion on faces, and I can address any issues in class. And if I really want to be a stickler, I can have students sign a page stating that they have read and understand the syllabus.

Online, I don’t have that same kind of forum. I do use, for example, a Virtual Office. This is just a discussion forum that I’ve renamed, and I ask students to post any questions about the course there. They can then learn from each other’s questions without sending me the same question via email 20 times. (Although, if it’s a question about a grade or other individual work in the course, then I ask them to email me.) So, if students have questions about the syllabus, they can ask them there.

That isn’t good enough, I’ve found. I still have students come to me saying that they didn’t realize such-and-such and that it would affect so-and-so. So, I also like to include a syllabus quiz. I write up 10 or 15 questions based on the information provided in the syllabus. They are specific questions, e.g., “Which of the following are our course objectives?” and I throw some irrelevant answer choices in there to make sure that they’ve read the course objectives before they can answer that question correctly. And, if I have assignments of different point values, I add a matching question: “Match the assignment on the right to its respective point value on the left.” That way, students will realize that, yes, the final research paper does make up one-third of their grade, and therefore, it is important.
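If it helps to see one laid out, here is a minimal sketch of what such an item bank might look like (a hypothetical format, not tied to any particular LMS; the questions echo the examples above):

```python
# Hypothetical syllabus quiz items: the distractors force students to
# actually read the syllabus before they can answer correctly.
syllabus_quiz = [
    {
        "question": "What share of your grade is the final research paper?",
        "choices": {"A": "10%", "B": "25%", "C": "one-third"},
        "answer": "C",
    },
    {
        "question": "Where should general course questions be posted?",
        "choices": {"A": "Email to the instructor", "B": "The Virtual Office forum"},
        "answer": "B",
    },
]

def score(responses):
    """Return the fraction of quiz items answered correctly."""
    correct = sum(responses.get(i) == item["answer"]
                  for i, item in enumerate(syllabus_quiz))
    return correct / len(syllabus_quiz)

print(score({0: "C", 1: "B"}))  # 1.0
```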

In effect, this is like asking the students to sign off that they have read and understand the syllabus. Heck, I could use a syllabus quiz in my on-ground courses, too. In online courses, it’s particularly important.

In Pearson Learning Studio, I can also enable the Path Builder tool (found under Course Admin --> Enable/Disable Tools), and then use the Course Scheduler link to access Path Builder. I can use Path Builder to “gate” the rest of the course content around the syllabus quiz. If students don’t pass the syllabus quiz, they can’t even move forward to the first Unit. No Christmas-treeing that quiz!

Consider using a syllabus quiz in your own course. It might just make the difference between a smooth-running course and a bunch of Hey-I-didn’t-know-that! excuses later.

-------------------

Rob Kadel, Ph.D.
Supervisor, Academic Training & Consulting
Pearson eCollege

14Dec/11

They’ve left us, but that doesn’t mean they aren’t still students.

7Dec/11

Up In Lights

I just got back from delivering a keynote address in Berlin at Online Educa.  It was an amazing experience.  Not only was the conference packed with over 2,000 people, but the city of Berlin was quite breathtaking this time of year.  Everywhere you look in Berlin there is some kind of Christmas decoration, tradition, or ornamentation.  People gather at the Christmas markets to drink Gluehwein (a spiced, mulled wine that smells delicious) and sales abound in the shopping areas.

So as I was walking through one of the markets with some friends, I thought back to decorating my own tree just a few weeks ago, which led to thoughts of…instructional design!  (Seriously, I need a break.)  With a four-year-old, Christmas came early this year and we had our tree up on Thanksgiving Day!

But the lights on the tree, specifically, were quite an ordeal.  Actually, they still are.  See, last year we bought a new tree.  We took our daughter down to “St Nick’s” Christmas store (no joke) and asked for a guided tour of the new trees.  While the trees look amazingly real, they ALL – 100% – had a major flaw.  It was impossible to buy a tree without pre-strung lights!  And not just pre-strung, but all white lights.  Ugh.

Of course, I get why they do it.  Most people hate lighting the tree.  It’s time consuming, you end up missing spots, and the only thing worse than getting the lights on is taking them off.  But I knew then what proved to be true this year.  Pre-lit trees are not what they appear to be.  See, this year, EXACTLY what I asked the sales-elf about last year happened:

ME: “What happens if a light goes out?”
ELF: “That hardly ever happens!”
ME: “Okay, but what if it does?”
ELF: “Well, the lights aren’t connected like they used to be.  If one goes out, it doesn’t affect the others, it just goes out.  You can replace it or leave it, but the rest of the lights will shine.”
ME: “Riiiiiiiight….”

You can probably see where I’m going with this.  This year, just as I suspected, we got the tree up, plugged it in, and yep, you guessed it – the entire middle of the tree was black.  So, I got to spend about an hour finding, unplugging, and re-plugging new lights into the old sockets, hoping each one would light the strand back up.  (I never got more than 4 in a row to light up with any new bulb…)

Alright, enough about my holiday nightmare.  So what does this have to do with Instructional Design?  Well, as I stood there checking bulb after bulb, I realized that some schools are taking this approach to their online courses.  The premise is simple:  Most instructors don’t have any education around teaching.  Instructional designers know how to design quality courses.  So, create a course with a group of designers and let a dozen different faculty teach it.  Done and done!

But, of course, the analogy then starts to take over.  What if you allow instructors to change the course?  Some of those new courses will be awesome – amazing even!  Others will be like a darkened bulb bringing down the outcomes average for the department.  What if it’s a blinking-strand kind of course?  In other words, what if it has all kinds of whiz-bang media and social interaction?  The answer there is that most faculty would need a boatload of instruction just to teach it.  (This is why most standardized courses don’t have cool stuff…they just have text, pictures, and some videos.  It’s easier to deliver, even though it’s not nearly as engaging for students.)  This straightforward approach to designing mass clusters of courses would be the equivalent of an all-white tree.  Guess what?  I don’t WANT an all-white tree.  That’s why last year I spent about 3 hours going through and changing out 4 out of 5 bulbs to a color.  I want color.  I LIKE color.

OK, you say – so let’s not use instructional designers.  Let’s let faculty design all of their own courses!  Guess what you get then?  You’ll get some lights perched perfectly on the limbs.  They will be unobtrusive, casting a healthy glow from the inside of the tree, almost as if the tree itself is on fire.  But you’ll also get…well, you’ll get the Griswold tree too.  You’ll get lights that look as if they were flung on the tree by a four-year-old with a slingshot, appearing as if they may fall off at any minute.  You will get some bulbs that are significantly dimmer than others.  You’ll get 5 reds in a row.  You’ll get classes that have nothing but text and no interaction with the professor except for an occasional rant and the final, posted grades at the end of term.

See, I’ve said it before and I’ll say it again.  There HAS to be a better way.  There has to be a healthy mix of instructional design, subject matter expertise, and personal touches that allow a class to be unique, engaging, and a quality experience in terms of assessment.  The school that figures out how to truly mix sound pedagogy with effective delivery and authentic assessment in a media rich, social environment will rule the world.

But until then, we’ll have to take it one light at a time.  We’ll have to create the best possible bulb selection for our trees or try to create at least tri-color trees that are uniform.  But one day…it will be different.

Oh, by the way, when I landed in Germany my daughter got on the phone.  She just HAD to tell me something.

ME: “Hey Peanut!”
ADDIE: “Hi Daddy.”
ME: “What’s going on, sweetheart?”
ADDIE: “The middle of the tree is dark again Dad…”
ME: Guttural moaning...

Happy holidays and may your light shine brightly on whatever educational environment in which you teach.  Good luck and good teaching.

Dr. Jeff D Borden
Sr Director of Teaching & Learning

2Nov/11

The Buzz on Assessment

I had the pleasure of attending the 2011 Assessment Institute in Indianapolis this week. The conference is the nation’s oldest and largest event focused exclusively on outcomes assessment in higher education. Administrators, Faculty and Student Affairs professionals convened this week to discuss techniques and approaches across outcomes assessment areas. This year, the event featured tracks on Capstone Experience, ePortfolios, and Faculty Development, among others.

I’d like to share with you a few of the recurring themes I heard and will take with me from the keynotes, workshops, and best practice sessions. Specifically, I will share three themes and considerations. These few points may serve as markers for some of the noteworthy issues in the higher education outcomes assessment landscape.

The first two themes are indeed linked in both process and practice, so I will identify both of them at this point. They are: 1) Faculty Engagement and 2) Using Results to Inform Improvement Processes. For those of us who have been doing outcomes assessment for any extended period of time, these themes may echo many of the questions and issues as well as the successes we have faced.

The engagement of faculty in the assessment process is certainly not a new issue in the practice of assessment. Notwithstanding, faculty engagement in the process of outcomes assessment is a reality many institutions are still desiring, and even stretching, to achieve. The collective understanding among practitioners gathered at the event appears to reveal an arrival, or perhaps a standstill in some cases, at a place of resounding confirmation: faculty engagement in the assessment process is a critical component of successful assessment. In her 2010 paper entitled “Opening Doors to Faculty Involvement in Assessment,” Pat Hutchings wrote:

“As Peter Ewell (2009) points out in another NILOA paper, from its early days in higher education, assessment was “consciously separated from what went on in the classroom,” and especially from grading, as part of an effort to promote “objective” data gathering (p. 19). In response, many campuses felt they had no choice but to employ external tests and instruments that kept assessment distinct from the regular work of faculty as facilitators and judges of student learning. In fact, the real promise of assessment—and the area in which faculty involvement matters first and most—lies precisely in the questions that faculty, both individually and collectively, must ask about their students’ learning in their regular instructional work: what purposes and goals are most important, whether those goals are met, and how to do better. As one faculty member once told me, “assessment is asking whether my students are learning what I am teaching.”

Further, the notion was submitted that seeking faculty engagement should not be seen as a one-time achievement but as an ongoing and evolving effort that characterizes a campus assessment strategy. Inasmuch as the issue is not a new one for assessment, the collective sentiment among conference participants is that garnering this engagement remains a key dynamic and often a great challenge. Several presenters admonished institutions represented at the conference to engage in cross-institutional dialogue to share strategies on how to foster a deeper degree of faculty engagement.

The second recurring theme centers on a question of the value, strategy, and purpose of assessment efforts, asking, “What’s it all for?” Assessment is hard work. And the growing sentiment appears to be a desire to see campus assessment efforts translate into actual impact on student learning, beyond the collection of data and documentation for accreditation and/or certification. This pull for results that impact student learning is a call to move beyond data collection and planning of assessment to the informed and strategic improvement of teaching and learning based on the data. To make assessment more useful, we must include within our strategy an intentional approach to leveraging data and documentation to help bridge the gaps between our current and improved realities. This process must be ongoing. And it undoubtedly must include faculty.

Finally, the third takeaway comes in the form of a resource. The National Institute for Learning Outcomes Assessment (NILOA) had a strong presence at the 2011 Assessment Institute. Several of the organization’s staff and associates were keynote presenters, including a notable group of internationally recognized experts on assessment. NILOA presenters pointed conference participants to what they called the ‘crown jewel’ of the organization’s efforts: a recently enhanced and robust website featuring a collection of papers, articles, presentations, websites, and survey results compiled in alignment with the organization’s vision for discovering and adopting promising practices in the assessment of college student learning outcomes. Reviewing the website will quickly reveal its valuable contribution to the field of assessment and current issues, including those I’ve highlighted from the conference. Take a moment to explore this great resource by visiting www.learningoutcomeassessment.org.

It was certainly a rich experience to attend the conference and have the opportunity to share with institutions and hear the collective voice of higher education assessment practitioners.

Rachel Cubas
Academic Trainer & Consultant
Assessment & Analytics Group | Academic Training & Consulting (ATC)

References

Hutchings, P. (2010) Opening Doors to Faculty Involvement in Assessment. National Institute for Learning Outcomes Assessment.

31Aug/11

Hallmark #9: Integrity / A Look Back

Well, here we are. Nearly at the end of our journey. Almost there. We’ve finally made it to Hallmark #9. It feels a bit like we’ve been climbing a Colorado 14er and we’re nearly at the top. We have discussed the first eight Hallmarks of Quality from the Interregional Guidelines for the Evaluation of Distance Education (Online Learning), so we have only the final Hallmark to get through before our ascent to the peak is complete: Integrity.

When I first saw that integrity was the last hallmark, my first thought was “Why list Integrity last?” Was it an afterthought? Doubtful. Was it just part of a natural progression that landed it at the end? Maybe. Or was it specifically selected to be the end cap of what is seen as an important, integral, and paradigm-shifting set of Hallmarks? This is where I place my bet. If for no other reason than this is the only Hallmark with both an asterisk pointing to further information AND a bolded note within the (already lengthy) first point of analysis/evidence.

While the official language of the hallmark is simply “The institution assures [ensures?] the integrity of its online learning offerings*”, let us make no mistake: this is not just about integrity in the general sense, meaning whole, undivided, unified, consistent or sound. This is about academic integrity.

So let’s take a look at what’s contained in Hallmark #9. First, the asterisk in the statement refers to the WCET paper Best Practice Strategies to Promote Academic Integrity in Online Education. This paper lists academic integrity best practice strategies in categories concerning the institution, curriculum, and assessment, as well as faculty and student support. It’s worth the quick read for high-level tips that can be used to get you going or to validate what you may already be doing. Three of the hallmark’s analysis/evidence items are short and essentially state that faculty, online orientations, and institutional policies should emphasize and integrate academic integrity into their teachings and practices.

The first, most prominent, and most impactful analysis/evidence item is this one:

“The institution has in place effective procedures through which to ensure that the student who registers in a distance education course or program is the same student who participates in and completes the course or program and receives the academic credit. The institution makes clear in writing that these processes protect student privacy and notifies students at the time of registration or enrollment of any projected additional costs associated with the verification procedures. (Note: This is a federal requirement. All institutions that offer distance education programs must demonstrate compliance with this requirement.);”

The statement is composed of two parts: a) verifying the student and b) making the verification process, procedures, and costs known. In part a), the key words are “effective” and “ensure”. “Effective” is open to interpretation by the fact that it doesn’t imply perfection, only a high level of efficacy. And this is appropriate; pretending perfection is attainable is a quick way to get nothing done. In contrast, “ensure” does carry the weight of making certain, without exception. While these two can seem to be in opposition, what we can strive for here is that when we believe we’ve determined that the student registered for the course is the student who performed the work, this is indeed true. No false positives or false negatives. Part b) strikes me as an attempt to include students in the effort of maintaining academic integrity. The phrase “makes clear in writing that these processes protect student privacy” implies to me that there is a specific effort to communicate to students that academic integrity is not solely about catching the “bad” cheaters, but about protecting the quality, original work that many students choose to do. And many times, protecting good things has a cost. Most importantly, though, it needs to be shown and communicated (and be true) that academic integrity is not first a matter of punishment but rather a matter of a posture of quality between the students and the institution.

This hallmark can be a tall order, potentially a costly order. (For information on types, costs and thoughts on some student authentication / verification systems, see Jennifer’s blog on the student-centric Hallmark #7.) But this hallmark has merit and it makes sense. If students are not who they say they are and their work is not what they say it is, then where’s the purpose in anything that we’re doing (be it on-ground or on-line or somewhere in between)?

Imagine for a moment that you had just been given mid-field season tickets to your favorite sports team. And there you sit, at the season opener, soaking in the atmosphere: 70,000 excited people coming together to cheer a common cause; an immense venue where little expense was spared; the long tradition of the team and sport displayed; players, coaches, owners, concession workers, and so many other people who have worked so hard in the preceding weeks and days to prepare for this moment in time: The Game.

But now imagine that as the teams take the field and the crowd is roaring, the players do only as they please. They ignore the rules and the referees. They hijack the scoreboard, steal the ball, and generally create helter-skelter. Wouldn’t that be maddening? You might think, “What in the world was all the effort and preparation for?” And, truly, it would make a mockery of the sport, the event, and everyone involved.

So too it is when learners choose to (or inadvertently do) cheat; it makes a mockery of the class, their peers, the institution, and, most importantly, their own learning. Is it the truth that no matter what we do, students are still going to try to (and find ways to) cheat? Definitely. Will students compromise their academic integrity by ignoring what resources are made available to them? Probably some. Do either of these things mean that we should stop (or lessen our efforts in) striving for academic integrity? Absolutely not. I think this line from Martin Luther King Jr.’s book Strength to Love says it well: “The ultimate measure of a man[/woman] is not where he[/she] stands in moments of comfort and convenience, but where he[/she] stands at times of challenge and controversy.” If we were to stop furthering our efforts toward academic integrity, even as it becomes more difficult to do, then we risk making a mockery of it all. Hallmark #9 reminds us that we must continue to make purposeful efforts to establish cultures of academic integrity on our campuses.

------------------

Before we sign off from our ten week foray into The Nine Hallmarks of Quality, it seems only fitting that we should take a quick look back over where we have been and what the big picture looks like.

It was back at the end of June when Jeff first introduced the discussion of the Hallmarks and what they might mean for online education. Jeff talked about the consistency and transparency that these Hallmarks can bring. He asserted that these Hallmarks will give online educators another strong foundation to stand upon in the debate around efficacy of online vs. on-campus. But I think the most foretelling statement of Jeff’s was this: “They will illustrate what many of us have already researched and know: Online learning works when it is strategic, designed effectively, and measured evenly.” And as we’ve gone through the Hallmarks in detail I think we can see how they echo this underlying paradigm. One could say that the essential thought running through the Hallmarks is: Be purposeful and make it known.

When I look at the list of hallmarks as the pieces they are and how they come together as a whole, I see them fitting into four categories:

Integral
Hallmark 1, Hallmark 2 and Hallmark 3 fall into this category. Notice some of the words in the definitions of these Hallmarks: incorporated, appropriate, integrated. The theme here is for online education to be a part of who you are, for it to be infused in your character. It’s not an addition nor an appendage.

Do It Well
Hallmark 4 asks for rigor in creating the curriculum for online learning (why would we do it any other way?) and Hallmark 5 asks for continual improvement of the curriculum. If you’ve made the decision to have online learning interwoven into the fabric of your institution, then you need to make the purposeful choice to do it well. Saint Francis de Sales said it succinctly: “Be who you are and be that well.”

Set Your Team Up for Success
Where faculty meets students is where the rubber hits the road of learning. To this end, Hallmark 6 and Hallmark 7 are directed toward the critical subject of supporting our faculty and students with resources, training and information. Give them the things they need; get rid of things that will get in their way.

Integrity
As we’ve mentioned in the first part of our Hallmark 9 blog today, if all the other Hallmarks are fulfilled (the set up for “game time” is done well), but learning doesn’t happen with integrity, then it can all be for naught.

These categories and these Hallmarks together form a cohesive picture of successful online learning that is strategic, designed effectively and evenly measured.

On behalf of the Academic Training and Consulting team, we hope that this blog series has been beneficial to you, given you some insights, and helped to frame the future of online education as education that will lead the future of learning. Next week we’ll be back to our regularly scheduled blogging, so look for some exciting topics in the world of education and technology in the coming weeks!

Luke Cable
Academic Trainer & Consultant