Online Blogucation
13Jul/11

Hallmark #2 – Planning

Fasten your seatbelt and hold on to your hat! This week we are going to talk about planning with regard to the Middle States accreditation plan. While I say that a bit facetiously, it is actually one small piece of a bigger, more exciting canvas. By standardizing accreditation requirements nationwide for higher education online learning programs, those of us firmly planted in online learning can take a huge leap forward in demonstrating (with statistics, research, and data) that what we are doing is not only catering to a growing market's demands but is grounded in pedagogy and evidence showing that students in fully online programs are learning, competing with, and often exceeding their counterparts.

There are 9 hallmarks in the Middle States accreditation plan, and today we look closely at #2, Planning. As a side note, here is some background on this series of posts. After an introduction to the overall Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning), each person on our team (the Academic Consulting team at Pearson eCollege) took a hallmark to focus on and fully explain. In the draw, I drew #2, Planning.

Now, as I plan for this post (I deliberately chose the word plan, in case you missed that), I can see how apropos it is that I have the planning topic. I am a planner to the point of clinical neurosis, some might say. I am the person who, when the seatbelt light goes off as the plane pulls into the gate, gets up and digs out her car keys and credit card, so that at the end of the very long walk to my car I can jump in, start the engine, and proceed straight to paying for parking. Downtime is good for reflection and analysis, but it is also a moment or two that can be used to take care of details and save time later on. So, from the planner's perspective, let's look at hallmark #2.

With that statement of credibility (I am qualified to talk about planning because I am a neurotic planner in my day-to-day life), let us take a look at how EduKan, the online-learning consortium of six Kansas community colleges, leads by example when it comes to these accreditation hallmarks. Some institutions will fret and have to hire consultants to comply when this becomes standard, whereas others, such as EduKan, will simply look at the list and say, "We already do that."

Hallmark #2 reads:
The institution’s plans for developing, sustaining, and, if appropriate, expanding online learning offerings are integrated into its regular planning and evaluation processes (MSCHE Standard 2).

From the guidelines, the analysis and evidence for this hallmark will review whether:

  • Development and ownership of plans for online learning extend beyond the administrators directly responsible for it and the programs directly using it;
  • Planning documents are explicit about any goals to increase numbers of programs provided through online learning courses and programs and/or numbers of students to be enrolled in them;
  • Plans for online learning are linked effectively to budget and technology planning to ensure adequate support for current and future offerings;
  • Plans for expanding online learning demonstrate the institution’s capacity to assure an appropriate level of quality;
  • The institution and its online learning programs have a track record of conducting needs analysis and of supporting programs.

When I asked EduKan's director, Mark Sarver, how the consortium addresses the topic of planning, he replied that all aspects of the planning guideline are handled through their Strategic Planning committee. The committee includes representatives from all jobs and roles within the organization, including but not limited to academic deans, advisors, instructors, registrars, and other administrators. Together they devise a three-year strategic plan that is created and agreed upon by all members of the committee. It is all-encompassing, covering goals, budget planning, technology planning, and indicators of success. The stakeholders on the committee then take the plan back to their respective groups and gain approval from those groups. When the committee reconvenes every three years, it checks the indicators of progress, documents successes, and adjusts or redefines goals for the next three-year plan. Statistics, reporting, and data analysis provide the documentation needed to assure the required appropriate level of quality. The process is ongoing, and it includes every role in the EduKan system in order to gain buy-in from all those with a stake in the success of the online program and the consortium as a whole.

EduKan is not unique in this process. Most institutions have a similar program or committee that examines, develops, implements, and then reviews their overall plan for successfully educating the students who attend the institution and enroll in its courses. If a campus has always been traditionally on-ground, that plan will have to expand to include the online goals above. If it already has an online component to its offerings, it will have to be sure it can document that it is addressing the analysis components above. Of the 9 hallmarks soon to be part of the accreditation process for online learning programs, number two might be one that you can check off as already being in place. Good luck!

-Pamela Kachka, M.A.Ed.-
Academic Consultant

6Jul/11

Hallmark #1

As Jeff Borden mentioned last Wednesday, this week marks the first post in a nine-week series in which the ATC Team will highlight each of the 9 Hallmarks for Quality Online Education. If you missed it last week: the regional and national accreditors have agreed on a set of outcomes they will use to evaluate online institutions. So let's take a look at the first hallmark:

 Hallmark 1

Online learning is appropriate to the institution's mission and purposes.

As I sat down to think about what I would write, I found myself stumped. What could I write about? This one's a "no-brainer." We've all been taught that, no matter the industry, every organization, corporate or non-profit, MUST have a mission. But as I continued to read the analysis/evidence section, I realized this hallmark is about more than simply having a mission; it's about making sure online learning fits into the overall institutional mission.

In other words, quality online programs aren't just a whim. They aren't implemented because every other college has online courses. Quality programs are not quick money-making ventures designed to support the REAL programs. Quality programs require extensive planning, in which the leadership answers questions like:

  •  How will online courses integrate with the current offerings?
  • How will online courses impact the student experience? 
  • Do we want online courses to attract new students to our programs or will we design them to support current student needs?
  • What will be the look and feel of our online environment?  How does this fit into our current environment?

Most of us have at least heard about online programs where these questions likely were not considered prior to implementation: programs that offer a certain online course only once every two years, so students just have to wait. Or we've heard about the institution known for its liberal arts education that suddenly offers an online MBA program for executives. Neither example assumes a bad program, but Hallmark #1 provides the guidance to help ensure that online programs are properly incorporated into the big picture.

In some parts of this country, Chick-fil-A is a fast food tradition. Its most popular menu option is a chicken breast deep-fried in a pressure cooker and served a variety of ways: in a salad, as a sandwich, etc. Chick-fil-A is also known for its advertising campaigns, in which cows advocate that we all "Eat More Chikin." If you aren't familiar with the restaurant and its ad campaign, visit the Chick-fil-A Cow Campaign. The campaign has a national footprint and a 20-year history. As a Chick-fil-A fan, I would be extremely concerned if the corporation suddenly decided to sell hamburgers. Such a move would cause me to question the leadership. I'd wonder whether Chick-fil-A could even cook a burger. I would be concerned about the cow campaign. What about the name of the restaurant? But the Chick-fil-A mission is to "Be America's Best Quick-Service Restaurant." So if the company did decide to sell beef, it would have to do extensive planning to address concerns like mine, but it could certainly make a case for it. The point is that the accreditors are not concerned that institutions with online programs have a mission. They are concerned that the program fits in, that it has a place in the big picture. Even when it's a reach, like beef at Chick-fil-A, that's OK, as long as the planning work gets done and everyone can explain how all of the pieces fit together!

Kimberly Harwell

Reporting Analyst and Consultant

29Jun/11

My Dad Can Beat Up Your Dad…

I just landed at the Phoenix airport and as I walked toward the rental car bus I overheard a couple in a friendly argument. “Have you ever seen a more beautiful city?” she asked. “It’s hot,” he replied. “But the people…they are incredibly nice. You don’t find that in L.A.!” “It’s hot…” he explained. “OK, but you have to admit, it’s nice to get around town quickly. And the food! We have SO many great places to eat without an hour wait!” she implored. “It’s SO freaking hot!” was his rebuttal.

The argument, while amusing, made me think about my blog this week. Don't get me wrong, I LOVE a good argument about whether or not the '87 Bears could beat the 2010 Saints. I enjoy debating Kobe vs. Jordan. And I will participate in a discussion of which restaurant makes the best biscuits and gravy (Watercourse Foods, Denver, CO...). But, while mentally stimulating, these conversations are ultimately silly. They are silly because it all comes down to one person's perception. Oh, I've seen the lists of the greatest cities in America, U.S. News & World Report's rankings of colleges, and I actually like the Yelp rankings for restaurants in my area. But they are arbitrary. Just because a magazine ranks a college at #11 doesn't mean anything... except to the parents who put stock in that magazine or the president of the college who will tell the world how awesome they are.

Likewise, I don’t put stock in the argument that online isn’t “as good as” on-ground. It actually pains me that we’re still having the debate, but the fallacy of tradition is strong with people, so we continue to perpetuate the stereotypical and uneducated opinions. Ugh.

But we seem to be on the cusp of a paradigm change. Not that the staunch, traditional professor is going to admit that online learning actually is better for some students, or in some situations, than on-campus. No... we just need to wait for many traditionalists to retire.

However, accreditors are finally catching on. Enter the 9 Hallmarks for online education. Based on two compelling documents by the GAO and WCET, online education now has a set of outcomes against which it will be evaluated. Nine elements of standardization that EVERY regional (and now national) accreditor has agreed to are being implemented come December 2011. And it's about time.

These standards will give online educators another leg to stand on in the debate around the efficacy of online vs. on-campus learning. These hallmarks will spotlight the importance of online education to the mission of the school, not just as a one-off "nice to have" anymore.

Now, some of you may say that these hallmarks seem a tad unfair compared to what is asked of their on-ground counterparts. And that may be true. You'll notice that some of the hallmarks actually measure the online class against the on-campus version. But hasn't everyone wondered, from time to time, how on-campus courses are measured? The process of accreditation isn't exactly transparent in many cases. It's akin to end-of-course evaluations. Who hasn't had a professor whom 90% of the students evaluate as atrocious, yet there they are the next term... teaching the same tired class in the same tired way. Can you imagine if course evaluations counted for 33% of a faculty member's ability to continue teaching at a college? (They do in many cases for online courses... but I digress.)

These 9 Hallmarks will give consistency and transparency to the online experience. They will give legitimacy (to some) with regard to outcomes, curriculum, content, delivery, and assessment. They will illustrate what many of us have already researched and know: Online learning works when it is strategic, designed effectively, and measured evenly.

So it is with great pleasure that I introduce the series of 9 posts you will read over the next 9 Wednesdays. Each Hallmark will be deconstructed and examined through a unique lens by a different member of our ATC team. Every Wednesday a new Hallmark will be highlighted, and I think you will find them both challenging and refreshing.

And even though some hallmarks may be unfair in holding online learning to a higher standard than on-campus, just remember that they will help legitimize the modality to some... and that, ultimately, is going to beat out, "My online classroom is better than your face-to-face classroom..." Even if Michael Jordan were teaching it.

Jeff D Borden, ABD (Defending in less than a month!)
Sr Director of Teaching & Learning

15Jun/11

National Survey on Program Level Assessment Practices

A year ago I blogged a summary of the National Institute for Learning Outcomes Assessment's (NILOA) survey of chief academic officers and their perceptions of the status of learning outcome assessment at the institutional level. Today NILOA released the results of a follow-up survey in a report titled "Down and In: Assessment Practices at the Program Level."

There are several salient conclusions in the data they collected. First, disciplines with specialized program accreditation (like education, nursing, and business) had more mature assessment practices than those covered only by institutional accreditation. In fact, at institutions with some specially accredited programs, even departments with no program-level accreditation tended to show more developed assessment methods. This suggests a campus-wide benefit compared to institutions that rely solely on regional accreditation.

Not surprisingly, a second major finding is that resources are scarce, with fewer than 20% of specialized programs having a full-time staff person assigned to assessment activities. Most campuses rely on volunteer committees or part-time resources, with course release being another option in some instances. To deal with resource constraints, creative solutions included modest stipends to support faculty in the development of course-embedded assessments, or common department-wide capstone assignments with corresponding rubrics that could be deployed across all students in a program.

Interestingly, while I had the impression that portfolios were the most common example of direct performance measurement at the program level, they actually ranked seventh on the list of most frequently used approaches. The leaders, in rank order, are capstones, rubrics, performance assessments, final projects, local tests, and external exams.

One final point I’d like to highlight was that NILOA called for itself and others to produce more case studies that highlight successful assessment practices. We’ve heard the same thing from our partners at Pearson eCollege and are currently working on developing several examples highlighting best practices in a variety of institution types that we can share. We hope to have these published before the end of the year.

Works Cited
Ewell, P., Paulson, K., & Kinzie, J. (2011). Down and In: Assessment Practices at the Program Level. National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomesassessment.org/documents/NILOAsurveyreport2011.pdf

Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting| Pearson eCollege

6Apr/11

Closing the Assessment Loop

Because the accountability drums have been beating for well over ten years, most institutions now collect data on student performance toward mastery of learning outcomes. The question today is whether this data is actually being used to drive improvements in curriculum and instruction. The cycle of analyzing collected data, diagnosing gaps that need attention, and relating the results back to student performance is referred to as closing the loop.

At Pearson eCollege we've been working with institutions for over three years on technology-enhanced outcome management and how it can help educators make the shift from a teaching to a learning paradigm. We're tackling issues like which metrics provide the best data for academic leaders as they work to improve student mastery of outcomes, and how to document the discussions that take place to support the assessment of student learning.

Clearly there isn't a single right answer, but it's important that campus leaders participate vigorously in the debate on these issues. A common problem we find (in fact, we've struggled with it ourselves) is getting paralyzed by trying to achieve perfection before initiating a new process, which only delays the iterative cycle of continuous improvement.

The Pearson eCollege Assessment and Analytics Consultants will be hosting a workshop at the Pearson Cite conference next week in Denver on how to “close the loop”. We’d love to hear your comments or thoughts about additional key questions we should be considering and invite you to join us if you’re attending the conference.

Brian Epp | Assessment & Analytics Group Manager - Academic Training & Consulting | Pearson eCollege

26Jan/11

Faculty SLOwnership

"…the depth and meaning of assessment is only as good as the scope and quality of faculty involvement." (Kinzie, 2010)

Most academics would agree that faculty tend to dislike the word assessment and the bureaucracy it involves. The reasons vary, but essentially assessment is viewed as a time-consuming distraction from the art of teaching, and many faculty also believe grades are more than sufficient indicators of student content mastery. One of the challenges with assessment is that it is often imposed on faculty by academic leaders who must prepare data and reports to meet ever more stringent accountability requirements from accreditors.

So an important initial consideration for provosts, deans, and department chairs is how to get faculty involved early and often in the development of a campus assessment approach. According to Kinzie's focus-group summary on student learning outcome assessment, faculty were highly engaged and energized when reviewing student work and weighing the extent to which those artifacts validate student learning (2010).

Fortunately, there are several responses to the argument that assessment takes too much time and distracts from what should really be happening in the classroom. First, a best practice is to embed assessment activities in both formative and summative evaluations of student course work. Known as course-embedded assessment, this ensures that faculty are both teaching to and evaluating student learning outcomes in context, instead of waiting for a programmatic, portfolio-type evaluation at the end of a student's degree sequence. Portfolio evaluations are definitely valuable, but it is often difficult to remedy performance deficiencies after a student has completed the coursework.

Second, a well-designed assignment rubric can articulate certain criteria that map to course outcomes alongside others that specifically target grading. This integrated approach allows faculty to augment their familiar grading process with outcome performance criteria in a way that creates a single assessment workflow for evaluating student work. It's a win-win: fully developed rubrics let faculty spell out for students more precisely what mastery looks like, and they serve as a helpful guide for conversations about why a student earned a particular grade on an assignment. The same workflow also provides faculty with data to pass up the academic outcome hierarchy for evaluating program effectiveness.
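That single-workflow idea can be sketched in a few lines of Python. This is only an illustration; the criterion names, outcome codes (WRIT-1, CRIT-2), weights, and 4-point scale are all invented, not part of any real rubric or system described above:

```python
# Hypothetical rubric: each criterion carries a grade weight and, where
# applicable, a tag linking it to a program-level outcome.
rubric = [
    {"criterion": "Thesis clarity",  "weight": 0.3, "outcome": "WRIT-1"},
    {"criterion": "Use of evidence", "weight": 0.4, "outcome": "CRIT-2"},
    {"criterion": "Mechanics",       "weight": 0.3, "outcome": None},
]

# One student's scores on a 4-point scale, keyed by criterion.
scores = {"Thesis clarity": 4, "Use of evidence": 3, "Mechanics": 4}

# The weighted total feeds the grade...
grade_points = round(sum(row["weight"] * scores[row["criterion"]]
                         for row in rubric), 2)

# ...while the very same scores roll up to outcomes for program reporting.
outcome_data = {row["outcome"]: scores[row["criterion"]]
                for row in rubric if row["outcome"]}

print(grade_points)  # 3.6
print(outcome_data)  # {'WRIT-1': 4, 'CRIT-2': 3}
```

The point of the sketch is simply that one pass through the rubric produces both artifacts: the grade a student sees and the outcome data that travels up the hierarchy.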

So, while it's tempting to impose a one-size-fits-all approach to the assessment of student learning, it is worth involving faculty in all phases of the process. Everyone tends to engage more actively when the discussion focuses on how to improve learning rather than on mandatory data generation.

Works Cited
Kinzie, J. (2010). Perspectives from Campus Leaders on the Current State of Student Learning Outcomes Assessment: NILOA Focus Group Summary 2009-2010. National Institute for Learning Outcomes Assessment. Retrieved November 3, 2010, from http://learningoutcomesassessment.org/documents/FocusGroupFinal.pdf

Brian Epp | Assessment & Analytics Group Supervisor - Academic Training & Consulting| Pearson eCollege

17Nov/10

Annual Reflections on the State of the SLO

I’m midway through my third year as a Student Learning Outcomes (SLO) subject matter expert at Pearson eCollege. As I read through the relevant journals and publications in this field, I am encouraged to see us making progress. Experts say that they’re no longer leading workshops with titles like “What is Assessment?” Instead, academics are now asking how to make good use of assessment data. That’s a pretty significant improvement.

If you haven't already bookmarked the National Institute for Learning Outcomes Assessment (http://learningoutcomesassessment.org), I recommend you add it to your favorites now. NILOA published two excellent resources in October. The first, "Regional Accreditation and Student Learning Outcomes: Mapping the Territory," is an outstanding summary of the commonalities and differences in accreditors' approaches to SLOs as part of the decennial reauthorization process. It also cites statistics by region on how often SLO assessment comes up as a reason for required follow-up activities.

The second resource is “Perspectives from Campus Leaders on the Current State of Student Learning Outcomes Assessment: NILOA Focus Group Summary 2009-2010”. The paper’s author lists four key themes that came out of her study:

1. Assessment has taken root and is thriving on many campuses.
2. Accreditation is the major catalyst for student learning outcomes assessment.
3. Faculty involvement is central to meaningful assessment.
4. Assessment is furthered when woven into established structures and processes. (Kinzie, 2010)

The fact that accreditors are playing such a central role in emphasizing the importance of using SLO data to improve curriculum and instruction is juxtaposed today against the rather lively debate about their role in ensuring quality and serving as gatekeepers of federal financial aid (Neal, 2010). Throw in a dose of animosity between for-profit and traditional higher ed and we've got ourselves a brouhaha that should thrive well into the next decade.

Works Cited

Kinzie, J. (2010). Perspectives from Campus Leaders on the Current State of Student Learning Outcomes Assessment: NILOA Focus Group Summary 2009-2010. National Institute for Learning Outcomes Assessment. Retrieved November 3, 2010, from http://learningoutcomesassessment.org/documents/FocusGroupFinal.pdf

Neal, A. (2010). Asking Too Much (and Too Little) of Accreditors. Inside Higher Ed. Retrieved November 12, 2010 from http://www.insidehighered.com/views/2010/11/12/neal

Brian Epp | Academic Trainer and Consultant| Pearson eCollege

5Nov/09

State of the Student Learning Outcome in the Academy



"Simply put, colleges and universities must become smarter and better at assessing student learning outcomes…" (Kuh and Ikenberry, 2009)

Over the past month I’ve consulted with both K-12 and higher education leaders in the U.S., Mexico, and the Middle East. Tracking student achievement and the value add provided by an academic program is high on nearly everyone’s priority list. In fact, in a time of shrinking resources this is one area that is still receiving budgetary support.

In his October 26 article "Assessment vs. Action" on the Inside Higher Ed website, Scott Jaschik summarizes the results of a survey sent to senior academic leaders at 2,809 regionally accredited institutions in the U.S. The survey was commissioned by the National Institute for Learning Outcomes Assessment (NILOA), a joint project of the University of Illinois and Indiana University.

Academic leaders and assessment experts should read the Jaschik article and bookmark the NILOA website, an excellent collection of resources and current thinking on learning outcome management. Conclusions from the NILOA survey were just released in a report titled "More Than You Think, Less Than We Need: Learning Outcomes Assessment in American Higher Education."

Essentially, the survey found that nearly all U.S. institutions are actively measuring student learning outcomes, driven primarily by accrediting-body requirements. The gap that remains, however, is actually using this data to improve student achievement.

Forward thinkers are actively developing a culture of assessment on campus. They're using assessment data to drive decisions about everything from curriculum and instruction to admission standards, and to inform the strategic planning process (Kuh & Ikenberry, 2009; Jaschik, 2009).

Over the past year I’ve been fortunate to work with progressive thinkers who are leveraging technology to enhance the outcome management process and to maximize the time that faculty spend providing meaningful feedback and support to students. I look forward to continuing this work and to identifying and publishing best practices in outcome management.

References

Jaschik, S. (2009, October 26). Assessment vs. Action. Inside Higher Ed. Retrieved November 2, 2009, from http://www.insidehighered.com/news/2009/10/26/assess

National Institute for Learning Outcomes Assessment (2009, October). More Than You Think, Less Than We Need: Learning Outcomes Assessment in American Higher Education. Retrieved November 2, 2009, from http://www.learningoutcomeassessment.org/NILOAsurveyresults09.htm

Brian McKay Epp
Academic Trainer and Consultant

15Jul/09

Data

I just got off the phone with a colleague who has lost 35 pounds in 2 months. How did he do it? Data. Well, data mixed with exercise and technology, to be more precise. He tried the Nike + iPod experiment, and he's a believer.

This professor of communications and lover of cheesesteaks bought a new pair of running shoes a few months back. Then he bought the Nike + iPod sensor system, a small sensor you tuck into your shoe. The sensor sends information to your iPod during a run. That data tells you (in real time) how you're doing, but it also lets you spot trends in your running after you upload the data to the Nike+ website. Apparently he has run about 340 miles and his average speed has increased by 1 mile per hour. He can tell you how many calories he's burned, and he's delighted to tell you how many pounds he has lost.

See, data is changing how we live. And data aggregation, data mining, and data analysis are making our lives better as technology gives us more and more ways to use them quickly and easily. For example, my wife got a call a few months back about her credit card. Visa thought she might have lost her card. Why? Because "she" had purchased a dress that was 2 sizes too big! Guess what? Her card had been stolen. (No, she had not gained any weight... that would have been awkward!) The credit card company looks for patterns and found something odd in the card's behavior. So they checked.

Data is everywhere we look today. New cars will tell you how many miles you have driven on a tank of gas and how many more you are likely to get out of that same tank. There is a website where you can report an illness in your family and then look around your city, state, or the entire country to see where other people are sick too. Data might help you avoid the plague!

Data is useful and becoming easier and easier to digest.  My phone tells me when my flight is late – a handy little feature when you fly 100,000 miles a year.  My refrigerator tells me when the filter is no longer doing any good.  Heck, even my daughter’s baby monitor tells us when the battery is low.  From weather patterns to traffic patterns, data can make our lives tremendously easier.

So why is it so hard to find data for schools? This is especially true of online schools. Shouldn't you know where your students spend their time in classes? Don't you think knowing how often your B students post to a discussion versus your D students would be a good piece of information? Does the first day a student checks into class help determine their probability of dropping? If you don't know the answers to these questions... it's time to.
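As a sketch of that B-versus-D question, here is a minimal Python example comparing average discussion-post counts by grade band. The student records are entirely invented for illustration; a real analysis would pull these from your LMS reporting tools:

```python
# Invented sample of per-student records: final grade and discussion posts.
students = [
    {"name": "s1", "grade": "B", "posts": 14},
    {"name": "s2", "grade": "B", "posts": 11},
    {"name": "s3", "grade": "D", "posts": 3},
    {"name": "s4", "grade": "D", "posts": 5},
    {"name": "s5", "grade": "B", "posts": 17},
]

def avg_posts(records, grade):
    """Average discussion-post count for students earning the given grade."""
    counts = [r["posts"] for r in records if r["grade"] == grade]
    return sum(counts) / len(counts) if counts else 0.0

print("B students:", avg_posts(students, "B"))  # B students: 14.0
print("D students:", avg_posts(students, "D"))  # D students: 4.0
```

Even a toy comparison like this turns "I suspect my quiet students struggle" into a number you can track term over term.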

One of my favorite tools I've ever worked with is a business intelligence tool, created by IBM, that we overlay on classes in our system. This tool allows me and my team to try to predict success, correlate at-risk behaviors to drops, and find benchmarks to hold students accountable to. Did you know that in most online courses a larger class (30-35 students) tends to have a better completion rate than a class with fewer than 30? The data has shown it time and time again. (Mind you, data can also raise lots of questions!)
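That class-size comparison can be sketched in a few lines of Python. The section enrollments and completion counts below are invented for illustration, not real EduKan, eCollege, or IBM-tool data:

```python
# Hypothetical sections: (enrollment, completions) per section.
sections = [
    (34, 31), (32, 29), (30, 27),   # larger sections (30-35 students)
    (24, 19), (27, 22), (22, 17),   # smaller sections (fewer than 30)
]

def completion_rate(rows):
    """Pooled completion rate across a group of sections."""
    enrolled = sum(e for e, _ in rows)
    completed = sum(c for _, c in rows)
    return completed / enrolled

large = [s for s in sections if s[0] >= 30]
small = [s for s in sections if s[0] < 30]

print(f"30-35 seats: {completion_rate(large):.1%}")  # 30-35 seats: 90.6%
print(f"under 30:    {completion_rate(small):.1%}")  # under 30:    79.5%
```

With real course data the interesting work starts here: is the gap stable across terms and disciplines, or is class size standing in for something else (instructor, course level, program)?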

Data mining is becoming easier as technology evolves, and data analysis is becoming more automated. It's time for your school's programs to join the party! Trend and operational reports are crucial to making accurate predictions and drawing quality conclusions today. Accreditors will soon see this power and demand evidence of data-driven decisions from the schools they review. But before the 'stick' of accreditation swats at you, shouldn't you look to the carrot of quality? Granted, this power can be abused. (My boss loves to look at my completion rates and give me grief because my public speaking class isn't the most-completed class on campus... it's public speaking!) But the data is there whether you mine it or not. The information to help you increase retention is sitting there whether or not it's analyzed.

We study, analyze, and mine data for everything else today.  It’s time to get education up to speed, don’t you think?  Now if you’ll pardon me…I need to get to a store to buy a sensor.  My pants don’t quite fit like they did last year…


Jeff D Borden, M.A.

Senior Director of Teaching & Learning

10Sep/08

Actualizing Assessment Accountability

The higher education community has been rumbling for several years about whether, and to what extent, government will pressure accrediting bodies to hold institutions more accountable for demonstrating student achievement and growth. Tuition increases have consistently outpaced inflation for years, which has led to public pressure to figure out why, and to subsequent calls for schools to justify their value to students, parents, and employers.

Legislators, responding to public discontent, began to call more loudly for colleges and universities to actually prove their academic programs were meeting learning outcomes and to demonstrate their value add. The 2006 U.S. Department of Education report A Test of Leadership: Charting the Future of U.S. Higher Education created quite a buzz as the committee's conclusions reached campus leaders. What is yet to be determined is whether the report was a temporary blip that stirred up emotions and fears or whether it will truly be an impetus for significant reform.

A February 2007 meeting of the federal accrediting panel indicates that increased accountability will likely come eventually, but it will take time and may continue the U.S. tendency toward voluntary compliance. The most likely scenario is a gradual move toward standardization of outcomes, beginning with general education, which will in turn lead a growing number of institutions to implement standardized achievement tests.

Three assessments being administered on campuses today have been approved by the Voluntary System of Accountability, a program being piloted by 239 early adopters in the public system under the auspices of the National Association of State Universities and Land-Grant Colleges and the American Association of State Colleges and Universities. A similar program (U-CAN), sponsored by the National Association of Independent Colleges and Universities, is underway for over 700 private, non-profit institutions, and the Transparency by Design initiative is yet another effort, this one including a range of private for-profit and non-profit institutions.

Schools that achieve top scores on these assessments will tout their success, which will lead to new categories among prominent ranking systems, held up as evidence that students should attend their programs because of a proven track record of achievement. Eventually the accreditors will push for similar accountability within undergraduate professional majors, and before long most institutions will be on board.

eCollege is prepared to collaborate with institutions in the efficient management of learning outcomes, which includes reporting successes and challenges to stakeholders. Our teams are developing tools that will track outcomes and link them to course content so that faculty, department chairs, and deans will have the information they need to assess program and course effectiveness, both for curriculum enhancement and for reporting to accrediting bodies and employers. We will also be able to show students how well they're doing against the learning outcomes set by their institutions, degree programs, and courses. Imagine the power of being able to pinpoint which assignments or test questions contribute most effectively to student learning.

Brian McKay Epp, M.Ed.

Academic Trainer and Consultant