We are nearing the end of our series on the Interregional Guidelines for the Evaluation of Distance Education, and this week we will consider the 8th Hallmark, which concerns resourcing. These Guidelines can be found here as presented by the Middle States Commission on Higher Education (MSCHE).
Hallmark #8 reads: The institution provides sufficient resources to support and, if appropriate, expand its online learning offerings.
I think of this Hallmark as essentially saying: “put your money where your mouth is.” In our discussion of the previous Hallmarks, we’ve seen that online education must be integrated into an institution’s mission and operations, and must have strong student and faculty support. Hallmark #8 supports those previously discussed ideas by stipulating that the institution must actually provide the budget and resources to make their online goals happen. We know that resources are tight everywhere, but dedicating resources to a distance education program is an important way to show that the institution values that program, and also funds the training and services that help set it up for success.
An institution seeking accreditation (for example, by MSCHE) will be asked to include evidence documenting how it is meeting the 9 Hallmarks for its online education program. MSCHE provides two areas of evidence that would allow an institution to demonstrate that it is meeting Hallmark #8:
- The institution prepares a multi-year budget for online learning that includes resources for assessment of program demand, marketing, appropriate levels of faculty and staff, faculty and staff development, library and information resources, and technology infrastructure;
- The institution provides evidence of a multi-year technology plan that addresses its goals for online learning and includes provision for a robust and scalable technical infrastructure.
These seem pretty straightforward: essentially, the institution needs to demonstrate that it has allocated sufficient resources to support its online educational goals. To truly support its online venture, it needs strong plans for marketing, faculty and staff support (including items laid out in this blog earlier), student support (as discussed here), a robust online curriculum (see discussion here), and so on. The institution must also show commitment to the technological aspects of online course delivery, including the technical infrastructure and a great LMS, such as Pearson LearningStudio (not so subtle hint!).
Working through these items should help the success of an online program and allow the institution to truly “put their money where their mouth is.” Good luck and happy budgeting!
– Gail E. Krovitz, Ph.D. –
Director of Academic Training & Consulting
MSCHE (2011) Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved online from: http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf
This is the fifth in a series of posts on the 9 Hallmarks of Quality for Distance Education programs that were developed by the Council of Regional Accrediting Commissions (C-RAC) earlier this year.
The institution evaluates the effectiveness of its online offerings, including the extent to which the online learning goals are achieved, and uses the results of its evaluations to enhance the attainment of the goals (MSCHE, 2011).
As institutions seek to develop a culture of assessment that meets increasingly stringent accreditor requirements, a myth prevails that a pre-defined template exists that elegantly solves this ill-structured problem. The truth is that accreditors defer most of the responsibility to the institution, which must set its own mission (Hallmark #1), program goals, and individual course outlines that provide the learning experience required for students to demonstrate mastery of the curriculum. Accreditors then evaluate the extent to which a school has developed an assessment approach that measures curricular and instructional effectiveness and shows how data is used to further the continuous improvement of student learning.
While this may be frustrating to read, there are definitely patterns and best practices that scholars of teaching and learning have developed which synthesize characteristics of successful accountability programs.
First, institutions must be purposeful in their assessment program, which means there is a plan for what data to collect and how it will be used to improve student learning. A holistic assessment approach includes both formative and summative assessment within courses and at the program level, so students have the ability to remediate their weaknesses before it’s too late. Programs new to assessment usually begin with evaluation of program-level goals and move into course-level assessment as they mature. Ideally, most assessment can be embedded within the course so faculty of record can gather the data as part of their ongoing student assessment workflow.
This leads to a second major challenge: perfection can be the enemy of the good – or even of the ability to get better. Our partners often tell us they’re not ready for assessment, and we see academic leaders cycle through numerous models in their heads without ever actually implementing anything. Getting started creates informed use, which yields better questions and action plans going forward.
As we consult on assessment and methods to integrate technology into the outcome management process, we nearly always expose what seem like obvious gaps in curriculum and instruction. This is part of the continuous improvement process and the important thing is to remedy that gap and to then look for the next most critical issue to resolve.
Finally, I’ve often heard assessment experts encourage academic leaders to actually scale back the volume of data they’re collecting. As mentioned earlier, data is meaningless unless you take the time to analyze what you’ve gathered to diagnose gaps and to implement improvement action plans to address the gaps. So, you might consider assessing random samples of student artifacts instead of trying to assess every student each term or you can assess all students against an outcome but only evaluate the outcome every two years.
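To make the sampling idea concrete, here is a minimal sketch in Python of drawing a random sample of student artifacts to score against an outcome; the artifact names, sample size, and seed are all hypothetical, not part of any particular institution’s plan:

```python
import random

def sample_artifacts(artifact_ids, sample_size, seed=None):
    """Draw a random sample of student artifacts to assess against an
    outcome, rather than scoring every artifact every term."""
    if sample_size >= len(artifact_ids):
        return list(artifact_ids)  # small cohort: just assess everyone
    rng = random.Random(seed)      # fixed seed makes the draw reproducible
    return rng.sample(artifact_ids, sample_size)

# Hypothetical example: 120 submissions, score a sample of 30
artifacts = [f"student_{i:03d}_essay" for i in range(120)]
to_score = sample_artifacts(artifacts, 30, seed=2011)
print(len(to_score))  # 30
```

A fixed seed lets an assessment committee re-derive the same sample later, which helps when the scoring work is split across several faculty reviewers.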
Our consultants have developed the following modules to support educators in meeting requirements for Hallmark #5.
- Creating a Culture of Assessment
- Writing Quality SLOs
- Rubric Design
- Curriculum Mapping (Institution > Program > Course)
- SLOs and Impact on Course Design (Curriculum mapping within a course)
- Fostering Faculty Ownership of Campus Assessment Culture
- Closing the Loop - Ensuring that SLO Data Impacts Curriculum & Instruction
In addition to the purposeful management of student learning, Hallmark #5 also requires institutions to monitor and set goals for both in-course retention and student persistence through a degree program along with the effectiveness of an institution’s academic and support services (MSCHE, 2011). Again, our consultants can work with you to develop custom reports to track and monitor progress for retention and persistence with student activity and completion data from the LMS. We can also help to identify at-risk students to support the requirement to measure effectiveness of academic and support services although this component certainly requires additional offline analysis of process and services at the institution.
Let us know if you have recommendations for any additional content area we should develop or if you’d like more information on our consulting services.
Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf
Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting | Pearson eCollege
A year ago I blogged a summary of the National Institute for Learning Outcomes Assessment’s (NILOA) survey of chief academic officers and their perceptions about the status of learning outcome assessment at the institutional level. Today they released results of a follow-up survey in a report titled “Down and In: Assessment Practices at the Program Level”.
There are several salient conclusions that come out of the data they collected. First, disciplines with specialized program accreditation (like education, nursing, and business) had more mature assessment practices than those covered only by institutional accreditation. In fact, institutions with some specially accredited programs tended to show more developed assessment methods even in departments with no program-level accreditation, suggesting a campus-wide benefit compared to institutions that rely solely on regional accreditation.
Not surprisingly, a second major finding is that resources are scarce: fewer than 20% of specialized programs have a full-time staff person assigned to assessment activities. Most campuses rely on volunteer committees or part-time resources, with course release being another option in some instances. To deal with resource constraints, creative solutions included modest stipends to support faculty in the development of course-embedded assessments, or common department-wide capstone assignments with corresponding rubrics that could be deployed across all students in a program.
Interestingly, while I had the impression that portfolios have been the most common example of direct performance measurement at the program level, they actually ranked seventh in a list of most frequently used approaches. The leaders in rank order are capstones, rubrics, performance assessment, final projects, local tests, and external exams.
One final point I’d like to highlight was that NILOA called for itself and others to produce more case studies that highlight successful assessment practices. We’ve heard the same thing from our partners at Pearson eCollege and are currently working on developing several examples highlighting best practices in a variety of institution types that we can share. We hope to have these published before the end of the year.
Ewell, P., Paulson, K., & Kinzie, J. (2011). Down and In: Assessment Practices at the Program Level. National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomesassessment.org/documents/NILOAsurveyreport2011.pdf
Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting| Pearson eCollege
As a formative assessment, the “one-minute paper” has long been used by faculty. It can help instructors identify early the definitions, concepts, or theories that students do not understand. Cross and Angelo (1988) popularized this technique as one of a wide variety of quick “classroom assessment techniques” (CATs), designed to provide instructors with anonymous feedback on what students are learning in class. Usually at the end of a class session or the beginning of the next session, students are asked to write a one-minute paper in response to questions such as:
• What was the most important concept you learned in class today?
• What was the ‘muddiest’ or most confusing concept covered in today’s class?
• What do you still have questions about?
While this technique has been used successfully by on-ground instructors for quite some time to immediately adjust course curriculum and clarify points for students, it has not commonly found its way to the online environment. I believe the reason is that much of our online teaching is asynchronous, and we have not been sure the technique would be as valuable or the process even feasible. It certainly would be difficult for the student to remain anonymous as initially designed. However, my belief is that with a slightly different workflow we can use this proven technique to great benefit for ourselves as instructors and for our online students.
One common way we structure our online courses is through the use of modules or units. To integrate the “one-minute paper,” instructors can develop a short one- to five-minute quiz instead of a paper at the end of each section or module. You can use specific questions (fill in the blank, matching, or multiple choice) to see if students correctly understand surface-level concepts, and short-answer questions to dive into deeper learning. The quiz may be set to a time limit of one to five minutes; the emphasis needs to be on immediate student reflection on learning. Try to use no more than 3-5 questions. This technique could certainly allow, and should encourage, students to briefly review their notes before proceeding to the unit or section quiz. You may also choose to place the quizzes before a new unit or multiple times within a module, depending on your discipline and pedagogy.
It is important to explain to students that this is just one way for instructors to help ensure the knowledge opportunities provided to students are sufficiently meeting their learning needs. One- to five-minute knowledge checks on the concepts also provide a glimpse of what may appear on future course exams or required research papers and projects, which could in turn reduce student anxiety. As an incentive for students to provide explicit and serious responses, I would suggest some form of integration into your course grading schema.
In my online classes I have received very positive student feedback on the process, and it has allowed me to regroup and provide multimodal learning opportunities. When we are face to face, we can often look at the class and quickly spot the students who have no idea what we are trying to convey. At times it is even apparent that no students grasp the concept. Online, this ability to perceive your students’ depth of learning is often not discovered until we issue summative assessments. Allowing formative assessment techniques to enhance and capture those “teachable moments” leads us all to greater real-time student success.
Angelo, T. A., & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco: Jossey-Bass.
Karen R. Owens, Ph.D. / Academic Assessment Consultant / Pearson eCollege
A few months ago, I wrote a post to our blog about the “end of the academic year.” (Click here for that post.) In it, I pondered whether the shift from teaching to learning – and the focus on learning outcomes in courses – would put an end to the traditional academic year. That is, if all that is required to pass a course is that each student provide evidence (through assignments and assessments) that they have mastered the course’s learning outcomes, then it shouldn’t matter whether they complete the course in 8 weeks, 10 weeks, 15 weeks, or whatever.
Obviously, there are significant logistical and pedagogical challenges surrounding such a shift. So, I put this question to a few panelists during a session at the Pearson Cite 2011 conference, which wrapped up here in Denver on April 13. I invited administrators from two schools, Texas Christian University and Kaplan Higher Education, that have been using learning outcomes to structure some of their courses for a few years now.
It’s important to bear in mind that these are very different schools in their mission and delivery of content. I don’t mean to overgeneralize, but in a nutshell, I think it’s fair to say that Kaplan is oriented toward career education and training with a top-down structure for designing curricula, while TCU is oriented towards “traditional” university and liberal arts educational degrees where faculty create curricula from the bottom up. And yet, both are able to use learning outcomes to structure courses in (at least for now) some of their programs.
One audience member at this Cite session commented that in order for learning outcomes to be effective, they must be based on demonstrable competencies (as our guest from Kaplan discussed). However, this audience member pointed out, that is much different when comparing, say, a medical assistant program to a visual arts program. The medical assistant can demonstrate that he or she knows all the steps necessary to draw blood from a patient or insert an IV. It’s a step-by-step, check-off-the-boxes process. While creating a work of art may also be step-by-step, two students can follow the same steps to vastly different results. Can the differences in the visual quality (subjectively judged) of these two students’ pieces be measured by learning outcomes? Or is my parenthetical the key – that subjective assessment and learning outcomes just don’t mesh?
I think the answer is yes: the students’ art can be measured by learning outcomes, but it depends upon how those learning outcomes are defined. It also depends upon the ultimate goal of the program. If the goal of an arts program is to demonstrate competence – not talent, just competence – then yes. Those competencies can be defined for everything from the medium used for a certain piece of art to the technique for achieving one style or another. Further, what is the goal of getting a degree in the arts? If a student wants to become a graphic designer, that requires a different set of competencies than if the student wants to become a professional artiste.
I don’t pretend to have all the answers, but I will open up the conversation to you, dear reader. What do you think?
-- Rob Kadel, Ph.D.
-- Supervisor, Academic Training & Consulting
Because the accountability drums have been beating for well over ten years, most institutions are now collecting data on student performance toward mastery of learning outcomes. The question today is whether or not this data is actually being used to drive improvements in curriculum and instruction. The step of analyzing collected data, diagnosing gaps that need attention, and relating results back to student performance is referred to as closing the loop.
At Pearson eCollege we’ve been working with institutions for over three years on technology enhanced outcome management and how it can help educators make the shift from a teaching to a learning paradigm. We’re tackling issues like which metrics provide the best data for academic leaders as they work to improve student mastery of outcomes or how to document discussions that take place to support the assessment of student learning.
Clearly there isn’t a single right answer, but it’s important that campus leaders participate vigorously in the debate on these issues. A common problem we find (in fact, we’ve struggled with it ourselves) is that we get paralyzed by trying to achieve perfection before initiating a new process, which only delays the iterative nature of continuous improvement.
The Pearson eCollege Assessment and Analytics Consultants will be hosting a workshop at the Pearson Cite conference next week in Denver on how to “close the loop”. We’d love to hear your comments or thoughts about additional key questions we should be considering and invite you to join us if you’re attending the conference.
Brian Epp | Assessment & Analytics Group Manager - Academic Training & Consulting | Pearson eCollege
I recently finished the third and final book in Stieg Larsson’s Millennium series, The Girl Who Kicked the Hornet’s Nest. I really enjoyed it. It’s great when the author improves upon each book in a series. By the end of book three I just couldn’t put it down! Anyway, there was a revelation in the final chapters that I am still thinking about several days later. But before I reveal it, let me provide some background for those who are not familiar with the story.
In books one and two, The Girl with the Dragon Tattoo and The Girl Who Played with Fire, respectively, the reader slowly develops a fondness for the lead character, Lisbeth Salander, despite her eccentricities and her oddly anti-social behavior. I felt a variety of emotions as I read about her complicated and traumatic childhood. But despite the horrors of her adolescence, as an adult Salander is a well-respected and highly capable computer hacker. She is extremely intelligent, has a photographic memory, and is adept at silently moving through society, generally unnoticed.
I tried to imagine her as a primary school student. It is likely her teachers had no idea about her exceptional skills. I’m sure she was simply the child who silently negotiated each day and stayed well below everyone’s radar – except that she was a bit odd. You know the kid: they don’t necessarily excel at school, and they aren’t the kid who’s been designated eligible for every possible at-risk student support program. This is the quiet one who has no friends, generally cooperates with teachers, does average work, and occasionally fights back when the bully descends.
So what was the revelation? – Asperger’s syndrome. It was never made official, but the author hints that Salander may have had Asperger’s. This threw me for a bit of a loop and got me thinking. Not about the diagnosis - that made sense. It simply represented another piece of the puzzle. But I couldn’t help but think how different her adult life might have been had the world recognized and responded to her individual student needs.
Many educators argue that the key to successful modern education is personalization, and data is key to individualization. What if Salander’s school had had access to analytical software designed to help teachers recognize exceptional, special, or at-risk students? What if her teachers had had the time and resources to complete personal assessments with Salander, giving them the data to recognize her strengths and improve upon her weaknesses? What if the school had had data that helped it recognize and react to the bullies who made Salander’s young life difficult? If her schools had been empowered with the right data to show her aptitude for computers, they could have introduced her to technology earlier in life and provided her with personalized learning opportunities, giving her what would prove to be the only constructive avenue available for focusing her energies.
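Analytical software of the kind imagined here can start from something as simple as threshold rules over activity data. The sketch below is purely illustrative: the field names (`weekly_logins`, `avg_score`) and the cutoffs are invented, not drawn from any real product:

```python
def flag_at_risk(student, min_logins=3, min_avg_score=70.0):
    """Flag a student as potentially at-risk using two naive signals:
    low weekly LMS logins and a low running average score."""
    reasons = []
    if student["weekly_logins"] < min_logins:
        reasons.append("low activity")
    if student["avg_score"] < min_avg_score:
        reasons.append("low scores")
    return reasons  # an empty list means no flag was raised

# A quiet but capable student: strong scores, very little visible activity
print(flag_at_risk({"weekly_logins": 1, "avg_score": 85.0}))  # ['low activity']
```

Real early-alert systems use far richer models, but even a rule this crude would have surfaced the quiet, disengaged student who never fails loudly enough to be noticed.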
We all know students like Salander. Sometimes they have an official diagnosis, but most times they don’t. What's clear is that when we have the right data, at the right time, and we put it in the right hands, it can and does change lives.
Reporting Analyst and Consultant
Learning Analytics/Academic Training and Consulting
As we have worked with our Educational Partners on implementing their outcome assessment plans, from the macro to the micro level and across distinctive academic cultures, a common theme has emerged: the focus was on organizing and collecting evidence, but there was no formal plan for action steps.
Our Assessment & Analytics team worked closely with several of our Educational Partners to customize templates, devise online faculty discussion forums, and offer other technology-enhanced solutions to be a catalyst for improving curriculum and instruction. As discussion progressed, it became evident that there was no “one way” to accommodate the unique requirements of all colleges and universities, but we were providing methods that could be used and customized by most.
The next step for us grew into Assessment Consulting Modules that could be designed around the needs of our Educational Partners. This included a “backwards” planning method to help ensure that the multitude of assessment data institutions collect would actually help answer their ultimate questions and lead to evidence-driven action plans.
From beginning to end the Assessment Consulting Modules are designed to lead participants through best practices of student learning outcome (SLO) assessment. The series begins by exploring why we engage in assessment and by defining a roadmap to creating a culture of evidence on campuses. Participants will then have opportunities to develop and connect SLOs and rubrics that apply directly to their unique curriculum. Our end goal is to provide specific strategy and design suggestions that translate into meaningful and sustainable assessment plans.
Following is a brief description of a few of the modules:
Curriculum Mapping
This module looks at the relational aspect of student learning outcomes within an academic institution. It examines the intentional and explicit alignment of each outcome from the discrete course level up to the increasingly broader program, department, campus, or institutional level. The mapping concept is designed to provide opportunities for, and evidence of, student learning at various stages of the curriculum.
Rubric Design
Measuring and assessing students’ demonstration of learning through the use of rubrics is the focus of this module. It presents how to design an assessment rubric using explicit criteria statements and how to identify examples of student performance at varying mastery levels for each outcome. Comprehensive and clear rubric exemplars are included.
SLOs and Impact on Course Design
Quality course SLOs are the foundation for assessing student learning. Quality assessment of student performance requires those SLOs to be purposefully aligned to the learning activities, assessment activities and schedule of the course. In this module participants use a modified card-sort method to analyze the relationship of the course’s design to the SLOs and inform design changes to optimize student learning.
Fostering Faculty Ownership of Campus Assessment Culture
Assessing students’ mastery of learning outcomes falls primarily within the scope of faculty responsibility so it is critical they be an integral stakeholder in the development of campus assessment plans. Faculty engagement is further fostered by focusing on improving the student learning experience. This module provides tangible actions for academic leaders working to integrate faculty into the development of campus assessment culture.
We are no longer living in academic silos; we must use web services and other technology-enhanced tools (along with colleagues) to link information, much as we snap together Lego blocks. The design can be simple or very complex. The creativity is unlimited if we begin by understanding the connections within assessment and then keep linking until we have designed an application that fits real-time teaching and learning. Collaborate and customize!
Karen R. Owens, Ph.D.
Higher Education Assessment Consultant
Assessment & Analytics Group
“…the depth and meaning of assessment is only as good as the scope and quality of faculty involvement.” (Kinzie, 2010)
Most academics would agree that faculty tend to dislike the word assessment and the bureaucracy it involves. The reasons vary, but essentially it is viewed as a time-consuming distraction from the art of teaching, and many also believe grades are more than sufficient indicators of student content mastery. One of the challenges with assessment is that it is often imposed on faculty by academic leaders who must prepare data and reports to meet increasingly stringent accountability requirements from accreditors.
So an important initial consideration for provosts, deans, and department chairs is to think about how to get faculty involved early and often in the development of a campus assessment approach. According to Kinzie’s focus group summary on student learning outcome assessment, faculty were highly engaged and energized when reviewing student work and the extent to which these artifacts validate student learning (2010).
Fortunately, there are several answers to the argument that assessment takes too much time and distracts from what should really be happening in the classroom. First, a best practice is to embed assessment activities in both formative and summative evaluations of student course work. Known as course-embedded assessment, this ensures that faculty are both teaching to and evaluating student learning outcomes in context, instead of waiting for programmatic portfolio-type evaluation at the end of a student’s degree sequence. Portfolio evaluations are definitely valuable, but it’s often difficult to remedy performance deficiencies after a student has completed coursework.
Second, a well-designed assignment rubric can articulate certain criteria that apply to course outcomes along with others that specifically target grading criteria. This integrated approach allows faculty to augment their well-known grading process with newly included outcome performance criteria in a way that creates a single assessment workflow for evaluating student work. It’s a win-win situation because having more fully developed rubrics allows faculty to spell out more precisely what mastery looks like to students and serves as a helpful guideline for conversations about why students earned the grade they did on a particular assignment. It also provides faculty with data to pass up the academic outcome hierarchy for evaluation of program effectiveness.
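A minimal sketch of such an integrated rubric appears below: some criteria feed the course grade only, while others also map to a program-level outcome, so a single scoring pass yields both a grade and outcome evidence. All criterion names, weights, and the outcome code are invented for illustration:

```python
# Each criterion carries a grading weight; those tagged with an outcome
# code also count as outcome-performance evidence (hypothetical values).
RUBRIC = [
    {"criterion": "Thesis clarity",  "weight": 0.4, "outcome": "SLO-2"},
    {"criterion": "Use of evidence", "weight": 0.4, "outcome": "SLO-2"},
    {"criterion": "Mechanics",       "weight": 0.2, "outcome": None},
]

def score_submission(scores):
    """`scores` maps criterion name -> points on a 0-4 mastery scale.
    Returns the weighted grade plus the subset of criterion scores that
    doubles as outcome data for program-level evaluation."""
    grade = sum(r["weight"] * scores[r["criterion"]] for r in RUBRIC)
    outcome_data = {r["criterion"]: scores[r["criterion"]]
                    for r in RUBRIC if r["outcome"]}
    return grade, outcome_data

grade, evidence = score_submission(
    {"Thesis clarity": 3, "Use of evidence": 4, "Mechanics": 2})
print(round(grade, 2))  # 3.2
```

The point of the design is the single workflow: faculty score the assignment once, the student receives a grade, and the outcome-tagged criteria flow upward for program effectiveness review without any extra scoring session.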
So, while it’s tempting to impose a one-size-fits-all approach to the assessment of student learning, it is worth the effort to involve faculty in all phases of this process. Everyone tends to engage more actively when the discussion focuses on how to improve learning rather than on mandatory data-generation requirements.
Kinzie, J. (2010). Perspectives from Campus Leaders on the Current State of Student Learning Outcomes Assessment: NILOA Focus Group Summary 2009-2010. National Institute for Learning Outcomes Assessment. Retrieved November 3, 2010, from http://learningoutcomesassessment.org/documents/FocusGroupFinal.pdf
Brian Epp | Assessment & Analytics Group Supervisor - Academic Training & Consulting| Pearson eCollege
In an age of constant student mobility, students have changed and the paths they follow are diverse. Increasingly, those paths include transfers to and from many types of colleges and universities, creating barriers to degree completion. In a 2005 issue of “Policy Matters,” the American Association of State Colleges and Universities (AASCU) emphasized the need for collaboration and change among transfer students’ institutions with these remarks:
The process of bringing together so many different kinds of institutions and programs into common agreement will never be easy, but will remain an essential goal as student mobility increases and options multiply. (…) States, systems, sectors, and institutions must continue to work together to eliminate their differences and create smooth working models that encourage student success. (Conclusion section, para. 2)
The Bureau of Labor Statistics released its Employment Situation Summary on Friday, December 3, 2010, and the November 2010 numbers suggest we may not have made much progress in finding ways to increase student success. The unemployment rate is 9.8%, and the difference between those with college degrees and those without is striking (see Summary Table A)!
Table A indicates “High school graduates, no college” have twice the unemployment rate of “Bachelor’s degree and higher”.
Our country has made significant gains in “access” to higher education, but despite institutions’ efforts to have policies in place, some students still encounter difficulties obtaining degrees. One of the problems may be rooted in the fact that we are often concerned with “student retention,” which has an institutional focus, rather than “persistence,” which has a student focus. Determining clear connections supporting learning and student success could significantly narrow the economic inequity in our society by empowering learners.
The first step is to increase high school graduation rates across America. Students who do not earn a high school diploma have the highest unemployment rate, currently 15.7%. Upon attaining this milestone, high school graduates see unemployment rates drop to 10%. These learners need not only to graduate but to graduate with college-ready skills, and to have a community of people willing to provide financial, emotional, and motivational support as they enter higher education. Once individuals take the initial step of enrolling in post-secondary education and securing some college experience, unemployment rates begin to fall, if ever so slightly: in the most recent figures, rates dropped from 10% to 8.7% for those with some college credit.
One resource with implications for academic practice deals directly with innovations in testing and measurement. Pearson's Test, Measurement, and Research Services (TMRS) Newsletter provides an easy-to-reference list of current publications and conference presentations that deal directly with promoting student success in K-12 and higher education. Pearson's research publications are intended for educators, parents, students, researchers, and policy makers. Visit the Publications section of Pearson's Assessment & Information website to search by topic, title, author, and date. All documents are available to view in PDF format.
A Sampling of Recent Publications:
Almond, P., Winter, P., Cameto, R., Russell, M., Sato, E., Clarke, J., et al. (2010). Technology-enabled and universally designed assessment: Considering access in measuring the achievement of students with disabilities—A foundation for research. Dover, NH: Measured Progress and Menlo Park, CA: SRI International.
Bodenhorn, N., Wolfe, E. W., & Airens, O. (2010). School counselor program choice and self-efficacy: Relationship to achievement gap and equity. Professional School Counseling, 13, 165–174.
Phan, H., Sentovich, C., Kromrey, J., Ferron, J., & Dedrick, R. (2010, May). Correlates of mathematics achievement in developed and developing countries: An analysis of TIMSS 2003. Paper presented at the American Educational Research Association, Denver, CO.
Van Moere, A., Suzuki, M., & Klungtvedt, M. (2010, October). Time is money: Assessing efficient use of written English skills for work purposes. Paper presented at the ninth annual conference of the East Coast Organization of Language Testers, Georgetown University, Washington, DC.
As educators, we all need to take responsibility for fostering student success. As United States citizens, we need to strengthen our economy through two of our most valuable resources: labor and entrepreneurship. Today's students become tomorrow's leaders, and the paths they choose have grown quite complex; the traditional vertical progression through higher education is a pathway of the past. It is not enough simply to understand the various pathways; we must understand why they are chosen and have policies in place to support them. That understanding comes from exploring the perceptions, ambitions, and reasons for persistence of all students pursuing degree attainment at every level of education. The ultimate goal is to empower our struggling labor force to reach personal academic goals and become productive citizens in our workforce!
Let’s make a difference in 2011!
American Association of State Colleges and Universities (AASCU). (2005, July). Policy matters: Developing transfer and articulation policies that make a difference. Retrieved December 10, 2010, from http://aascu.org/policy_matters/pdf/v2n7.pdf
Bureau of Labor Statistics (BLS), United States Department of Labor. (2010, December). Employment Situation Summary. Retrieved December 13, 2010, from http://www.bls.gov/news.release/empsit.nr0.htm
Pearson’s Assessment and Information: Research and Resources Website. Retrieved December 13, 2010 from http://www.pearsonassessments.com/pai/ai/research/ResearchandResources_old.htm
Pearson’s Test, Measurement, & Research Services (TMRS). (2010). Quarterly newsletter, 3(3). Retrieved December 13, 2010, from http://www.pearsonassessments.com/NR/rdonlyres/11968220-FEDB-46CD-9ED1-75BF30B4AEAF/0/2010_v3n3_newsletter.pdf
Karen R. Owens, Ph.D. / Academic Assessment Consultant / Pearson eCollege