This is the fifth in a series of posts on the 9 Hallmarks of Quality for Distance Education programs that were developed by the Council of Regional Accrediting Commissions (C-RAC) earlier this year.
The institution evaluates the effectiveness of its online offerings, including the extent to which the online learning goals are achieved, and uses the results of its evaluations to enhance the attainment of the goals (MSCHE, 2011).
As institutions seek to develop a culture of assessment that meets increasingly stringent accreditor requirements, a myth prevails that a pre-defined template exists to elegantly solve this ill-structured problem. The truth is that accreditors defer most of the responsibility to the institution, which must set its own mission (Hallmark #1), program goals, and individual course outlines that provide the learning experiences students need to demonstrate mastery of the curriculum. Accreditors then evaluate the extent to which a school has developed an assessment approach that measures curricular and instructional effectiveness and shows how data are used to further the continuous improvement of student learning.
While this may be frustrating to read, there are definitely patterns and best practices that scholars of teaching and learning have developed which synthesize characteristics of successful accountability programs.
First, institutions must be purposeful in their assessment programs, which means having a plan for what data to collect and how the data will be used to improve student learning. A holistic assessment approach includes both formative and summative assessment, within courses and at the program level, so students can remediate their weaknesses before it’s too late. Programs new to assessment usually begin by evaluating program-level goals and move into course-level assessment as they mature. Ideally, most assessment can be embedded within the course so faculty of record can gather the data as part of their ongoing student assessment workflow.
This leads to a second major challenge: perfection can be the enemy of the good, or even of the ability to get better. Our partners often tell us they’re not ready for assessment, and we see academic leaders cycle through numerous models in their heads without ever actually implementing anything. Getting started creates informed use, which yields better questions and action plans going forward.
As we consult on assessment and methods to integrate technology into the outcome management process, we nearly always expose what seem like obvious gaps in curriculum and instruction. This is part of the continuous improvement process and the important thing is to remedy that gap and to then look for the next most critical issue to resolve.
Finally, I’ve often heard assessment experts encourage academic leaders to actually scale back the volume of data they collect. As mentioned earlier, data are meaningless unless you take the time to analyze what you’ve gathered, diagnose gaps, and implement improvement action plans to address them. So you might consider assessing random samples of student artifacts instead of trying to assess every student each term, or you might assess all students against an outcome but evaluate that outcome only every two years.
Our consultants have developed the following modules to support educators in meeting requirements for Hallmark #5.
- Creating a Culture of Assessment
- Writing Quality SLOs
- Rubric Design
- Curriculum Mapping (Institution > Program > Course)
- SLOs and Impact on Course Design (Curriculum mapping within a course)
- Fostering Faculty Ownership of Campus Assessment Culture
- Closing the Loop - Ensuring that SLO Data Impacts Curriculum & Instruction
In addition to the purposeful management of student learning, Hallmark #5 also requires institutions to set goals for, and monitor, both in-course retention and student persistence through a degree program, along with the effectiveness of an institution’s academic and support services (MSCHE, 2011). Again, our consultants can work with you to develop custom reports that track retention and persistence using student activity and completion data from the LMS. We can also help identify at-risk students to support the requirement to measure the effectiveness of academic and support services, although this component certainly requires additional offline analysis of processes and services at the institution.
Let us know if you have recommendations for any additional content area we should develop or if you’d like more information on our consulting services.
Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf
Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting | Pearson eCollege
Third in a series of nine blogs.
The Hallmarks of Quality # 3
Online learning is incorporated into the institution’s systems of governance and academic oversight.
While my colleagues have written about the “no-brainers” of Hallmarks #1 and #2, mission and planning, #3 begins to get more challenging. Many will cringe when reading this Hallmark, as I did. The words governance and academic oversight sound invasive from my perspective, but who among us would not want to ensure that the institution we teach for protects the integrity of its curriculum and offers students a valued education? The challenge with this assurance, however, is in the execution.
Academic governance takes many different forms among colleges, but I firmly believe the strongest and most highly regarded schools are those with full transparency about what occurs within the ivory tower and the classrooms. Instructors and administrators sharing resources, collaborating on essential student learning outcomes, and then allowing the creativity of the instructors to emerge makes us all stronger. The concept of online learning “shared governance” provides for individual and collective voices that matter.
We can retain academic freedom and still preserve quality education. This Hallmark is in place for on-ground classes and absolutely needs to apply to online courses as well. Maybe I am naive, but does anyone really believe the standards of quality education should differ between the two modalities of instruction? While the debate rages on over the academic rigor of online classes, it seems to me it all boils down to the quality of the faculty. What good teachers do, they do online and face to face. So it is these faculty members, in conjunction with the administration, who need to be the ones monitoring academic oversight.
The Middle States Commission on Higher Education (MSCHE) suggests that analysis and evidence of sustaining this Hallmark may be provided by:
- An institution’s faculty having a designated role in the design and implementation of its online learning offerings;
- An institution ensuring the rigor of the offerings and the quality of the instruction;
- Approval of online courses and programs following standard processes used in the college or university;
- Online learning courses and programs being evaluated on a periodic basis;
- Contractual relationships and arrangements with consortial partners, if any, being clear and guaranteeing that the institution can exercise appropriate responsibility for the academic quality of all online learning offerings provided under its name.
As Gary A. Olson (2009), provost and vice president for academic affairs at Idaho State University, succinctly summarized:
"Clearly, when it comes to university governance, 'shared' is a much more capacious concept than most people suspect. True shared governance attempts to balance maximum participation in decision making with clear accountability. That is a difficult balance to maintain, which may explain why the concept has become so fraught. Genuine shared governance gives voice (but not necessarily ultimate authority) to concerns common to all constituencies as well as to issues unique to specific groups.
The key to genuine shared governance is broad and unending communication. When various groups of people are kept in the loop and understand what developments are occurring within the university, and when they are invited to participate as true partners, the institution prospers. That, after all, is our common goal." (para. 17-18)
Let's embrace the opportunity!
Karen R. Owens, Ph.D.
Academic Assessment Consultant
Middle States Commission on Higher Education (MSCHE). (2011, February). Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning). Retrieved July 18, 2011 from http://www.msche.org/publications/Guidelines-for-the-Evaluation-of-Distance-Education.pdf
Olson, G. (2009, July 23). Exactly what is 'shared governance'? The Chronicle of Higher Education. Retrieved July 18, 2011 from http://chronicle.com/article/Exactly-What-Is-Shared/47065/
Fasten your seatbelt and hold on to your hat! This week we are going to talk about planning as it relates to the Middle States accreditation guidelines. While I say that a bit facetiously, it is actually a small piece of a bigger, more exciting canvas. By standardizing accreditation requirements nationwide for higher education online learning programs, those of us firmly planted in online learning can take a huge leap forward in demonstrating (with statistics, research, and data) that what we are doing is not only catering to a growing market’s demands, but also grounded in evidence that students in fully online programs are learning, competing with, and often exceeding their counterparts.
There are 9 hallmarks in the Middle States accreditation guidelines, and today we look closely at #2, Planning. On a side note, here is some background on this series of blogs. After an introduction to the overall Interregional Guidelines for the Evaluation of Distance Education Programs (Online Learning), each person on our team (the Academic Consulting team at Pearson eCollege) took a hallmark to focus on and fully explain. In the draw, I got #2, Planning.
Now, as I plan for this blog (I deliberately chose the word plan, in case you missed that), I can see how apropos it is that I have the planning topic. I am a planner to the point of clinical neurosis, some might say. I am the person who, when the seatbelt light goes off as the plane pulls into the gate, digs out her car keys and credit card so that at the end of the very long walk to my car I can jump in, start the engine, and proceed to pay for parking. Downtime is useful for reflection and analysis, but it is also a moment or two that can be used to take care of details and save time later on. So, from the planner’s perspective, let’s look at Hallmark #2.
With that statement of credibility (I am qualified to talk about planning because I am a neurotic planner in my day-to-day life), let us take a look at how EduKan, the consortium of online campuses for six Kansas community colleges, leads by example when it comes to these accreditation hallmarks. Some institutions will fret and have to hire consultants to comply when this becomes standard, whereas other institutions, such as EduKan, will simply look at the list and say: “we already do that.”
Hallmark #2 reads:
The institution’s plans for developing, sustaining, and, if appropriate, expanding online learning offerings are integrated into its regular planning and evaluation processes (MSCHE Standard 2).
From the guidelines, the analysis and evidence for this hallmark will review:
- Development and ownership of plans for online learning extend beyond the administrators directly responsible for it and the programs directly using it;
- Planning documents are explicit about any goals to increase numbers of programs provided through online learning courses and programs and/or numbers of students to be enrolled in them;
- Plans for online learning are linked effectively to budget and technology planning to ensure adequate support for current and future offerings;
- Plans for expanding online learning demonstrate the institution’s capacity to assure an appropriate level of quality;
- The institution and its online learning programs have a track record of conducting needs analysis and of supporting programs.
When I asked EduKan’s director, Mark Sarver, how he addresses the topic of planning, he replied that all aspects of the planning guideline are handled through their Strategic Planning committee. The Strategic Planning committee for EduKan includes representatives from all jobs and roles within the organization, including but not limited to academic deans, advisors, instructors, registrars, and other administrators. They devise a three-year strategic plan that is created and agreed upon by all members of the committee. It is all-encompassing, covering goals, budget planning, technology planning, and indicators of success. The stakeholders on the committee then take the plan back to their respective groups and gain approval from those groups. At the end of each three-year cycle, the committee checks the indicators of progress, documents successes, and adjusts or redefines goals for the next three-year plan. Statistics, reporting, and data analysis provide the documentation needed to assure the required level of quality. The process is ongoing, and it includes every role in the EduKan system to gain buy-in from all those with a stake in the success of the online program and the consortium as a whole.
EduKan is not unique in this process. Most institutions have a similar program or committee that examines, develops, implements, and then reviews their overall plan for successfully educating the students who attend the institution and enroll in its courses. If an institution has always been a traditional on-ground campus, that plan will have to expand to include the online goals above. If it already has an online component to its offerings, it will have to be sure it can document that it is addressing the analysis components above. Of the 9 hallmarks soon to be part of the accreditation process for online learning programs, number two might be one you can check off as already being in place. Good luck!
-Pamela Kachka, M.A.Ed.-
As Jeff Borden mentioned last Wednesday, this week marks the first blog in a nine-week series in which the ATC Team will highlight each of the 9 Hallmarks for Quality Online Education. If you missed it last week: the regional and national accreditors have agreed on a set of outcomes they will use to evaluate online institutions. So let’s take a look at the first hallmark:
Online learning is appropriate to the institution’s mission and purposes
As I sat down to think about what I would write, I found myself stumped. What could I write about? This one’s a “no-brainer.” We’ve all been taught that, no matter what the industry, every organization, whether corporation or non-profit, MUST have a mission. But as I continued to read the analysis/evidence section, I realized this is about more than simply having a mission; it’s about making sure online learning fits into the overall institutional mission.
In other words, quality online programs aren’t just a whim. They aren’t implemented because every other college has online courses. Quality programs are not quick money-making ventures designed to support the REAL programs. Quality programs require extensive planning where the leadership answers questions like:
- How will online courses integrate with the current offerings?
- How will online courses impact the student experience?
- Do we want online courses to attract new students to our programs or will we design them to support current student needs?
- What will be the look and feel of our online environment? How does this fit into our current environment?
Most of us have at least heard about online programs where these questions likely were not considered prior to implementation: programs that offer a certain online course only once every two years, so students just have to wait. Or we’ve heard about the institution known for its liberal arts education that suddenly offers an online MBA program for executives. Neither example assumes the programs are bad, but Hallmark #1 provides the guidance to help ensure that online programs are properly incorporated into the big picture.
In some parts of this country, Chick-fil-A is a fast food tradition. Its most popular menu option is a chicken breast deep-fried in a pressure cooker and served a variety of ways: in a salad, as a sandwich, etc. Chick-fil-A is also known for its advertising campaigns, in which cows advocate that we all “Eat More Chikin.” If you aren’t familiar with the restaurant and its ad campaign, visit the Chick-fil-A Cow Campaign. This campaign has a national footprint and a 20-year history. As a Chick-fil-A fan, I would be extremely concerned if the corporation suddenly decided to sell hamburgers. Such a move would cause me to question the leadership. I’d wonder whether Chick-fil-A can even cook a burger. I would be concerned about the cow campaign. What about the name of the restaurant? But the Chick-fil-A mission is to “Be America's Best Quick-Service Restaurant.” So if they did decide to sell beef, they’d have to do extensive planning to address concerns like mine, but they could certainly make a case for it. The point is, the accreditors are not concerned that institutions with online programs have a mission. They are concerned that the program fits in, that it has a place in the big picture. Even when it’s a reach, like beef at Chick-fil-A, that’s OK, as long as the planning work gets done and everyone can explain how all of the pieces fit together!
Reporting Analyst and Consultant
I just landed at the Phoenix airport and as I walked toward the rental car bus I overheard a couple in a friendly argument. “Have you ever seen a more beautiful city?” she asked. “It’s hot,” he replied. “But the people…they are incredibly nice. You don’t find that in L.A.!” “It’s hot…” he explained. “OK, but you have to admit, it’s nice to get around town quickly. And the food! We have SO many great places to eat without an hour wait!” she implored. “It’s SO freaking hot!” was his rebuttal.
The argument, while amusing, made me think about my blog this week. Don’t get me wrong, I LOVE a good argument about whether or not the ’87 Bears could beat the 2010 Saints. I enjoy debating Kobe vs. Jordan. And I will happily participate in a discussion of which restaurant makes the best biscuits and gravy (Watercourse Foods, Denver, CO...). But, while mentally stimulating, these conversations are ultimately silly. They are silly because it all comes down to one person’s perception. Oh, I’ve seen the lists of the greatest cities in America, U.S. News & World Report’s rankings of colleges, and I actually like the Yelp rankings for restaurants in my area. But they are arbitrary. Just because a magazine ranks a college at #11 doesn’t mean anything... except to the parents who put stock in that magazine or the president of the college who will tell the world how awesome they are.
Likewise, I don’t put stock in the argument that online isn’t “as good as” on-ground. It actually pains me that we’re still having the debate, but the fallacy of tradition is strong with people, so we continue to perpetuate the stereotypical and uneducated opinions. Ugh.
But we seem to be on the cusp of a change in paradigm. Not that the staunch, traditional professor is going to admit that online learning actually is better for some students or in some situations than on-campus. No….we just need to wait for many traditionalists to retire.
However, accreditors are finally catching on. Enter the 9 HALLMARKS for online education. Based on two compelling documents by the GAO and WCET, online education now has a set of outcomes against which it will be evaluated. Nine elements of standardization that EVERY regional (and now national) accreditor has agreed to are being implemented come December 2011. And it’s about time.
These standards will give online educators another leg to stand on in the debate around the efficacy of online vs. on-campus learning. These hallmarks will spotlight the importance of online education to the mission of the school, not just as a one-off “nice to have” anymore.
Now, some of you may say these hallmarks seem a tad unfair compared to what their on-ground counterparts face. And that may be true. You’ll notice that some of the hallmarks actually measure the online class against the on-campus version. But hasn’t everyone wondered, from time to time, how on-campus courses are measured? The process of accreditation isn’t exactly transparent in many cases. It’s akin to end-of-course evaluations. Who hasn’t had a professor whom 90% of the students evaluate as atrocious, yet there they are the next term... teaching the same tired class in the same tired way. Can you imagine if course evaluations counted for 33% of a faculty member’s ability to continue teaching at a college? (They do in many cases for online courses... but I digress.)
These 9 Hallmarks will give consistency and transparency to the online experience. They will give legitimacy (to some) with regard to outcomes, curriculum, content, delivery, and assessment. They will illustrate what many of us have already researched and know: Online learning works when it is strategic, designed effectively, and measured evenly.
So it is with great pleasure that I introduce the series of 9 blogs you will read for the next 9 Wednesdays. Each Hallmark will be deconstructed and looked at through a unique lens by a different member of our ATC team. Every Wednesday, a new Hallmark will be highlighted and I think you will find them to be both challenging and refreshing.
And even though some hallmarks may be unfair in holding online learning to a higher standard than on-campus, just remember that they will help legitimize the modality to some... and that, ultimately, is going to beat out “My online classroom is better than your face-to-face classroom...” Even if Michael Jordan was teaching it.
Jeff D Borden, ABD (Defending in less than a month!)
Sr Director of Teaching & Learning
A year ago I blogged a summary of the National Institute for Learning Outcomes Assessment (NILOA) survey of chief academic officers and their perceptions of the status of learning outcomes assessment at the institutional level. Today NILOA released the results of a follow-up survey in a report titled “Down and In: Assessment Practices at the Program Level” (Ewell, Paulson, & Kinzie, 2011).
There are several salient conclusions in the data they collected. First, disciplines with specialized program accreditation (like education, nursing, and business) had more mature assessment practices than those covered only by institutional accreditation. In fact, at institutions with some specially accredited programs, even departments with no program-level accreditation tended to show more developed assessment methods. This suggests a benefit across the campus compared to institutions that rely solely on regional accreditation.
Not surprisingly, a second major finding is that resources are scarce with less than 20% of specialized programs having a full-time staff person assigned to assessment activities. Most campuses rely on volunteer committees or part time resources with course release being another option in some instances. To deal with resource constraints, creative solutions included modest stipends to support faculty in the development of course embedded assessments or common department-wide capstone assignments with corresponding rubrics which could be deployed across all students in a program.
Interestingly, while I had the impression that portfolios have been the most common example of direct performance measurement at the program level, they actually ranked seventh in a list of most frequently used approaches. The leaders in rank order are capstones, rubrics, performance assessment, final projects, local tests, and external exams.
One final point I’d like to highlight was that NILOA called for itself and others to produce more case studies that highlight successful assessment practices. We’ve heard the same thing from our partners at Pearson eCollege and are currently working on developing several examples highlighting best practices in a variety of institution types that we can share. We hope to have these published before the end of the year.
Ewell, P., Paulson, K., & Kinzie, J. (2011). Down and In: Assessment Practices at the Program Level. National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomesassessment.org/documents/NILOAsurveyreport2011.pdf
Brian Epp, M.Ed. | Assessment & Analytics Group, Academic Training & Consulting| Pearson eCollege
A few months ago, I wrote a post to our blog about the “end of the academic year.” (Click here for that post.) In it, I pondered whether the shift from teaching to learning – and the focus on learning outcomes in courses – would put an end to the traditional academic year. That is, if all that is required to pass a course is that each student provide evidence (through assignments and assessments) that they have mastered the course’s learning outcomes, then it shouldn’t matter whether they complete the course in 8 weeks, 10 weeks, 15 weeks, or whatever.
Obviously, there are significant logistical and pedagogical challenges surrounding such a shift. So, I put this question to a few panelists during a session at the Pearson Cite 2011 conference, which wrapped up here in Denver on April 13. I invited administrators from two schools, Texas Christian University and Kaplan Higher Education, that have been using learning outcomes to structure some of their courses for a few years now.
It’s important to bear in mind that these are very different schools in their mission and delivery of content. I don’t mean to overgeneralize, but in a nutshell, I think it’s fair to say that Kaplan is oriented toward career education and training with a top-down structure for designing curricula, while TCU is oriented towards “traditional” university and liberal arts educational degrees where faculty create curricula from the bottom up. And yet, both are able to use learning outcomes to structure courses in (at least for now) some of their programs.
One audience member at this Cite session commented that in order for learning outcomes to be effective, they must be based on demonstrable competencies (as our guest from Kaplan discussed). However, this audience member pointed out, that is much different when comparing, say, a medical assistant program to a visual arts program. The medical assistant can demonstrate that he or she knows all the steps necessary to draw blood from a patient or insert an IV. It’s a step-by-step, check-off-the-boxes process. While creating a work of art may also be step-by-step, two students can follow the same steps to vastly different results. Can the differences in the visual quality (subjectively judged) of these two students’ pieces be measured by learning outcomes? Or is my parenthetical the key – that subjective assessment and learning outcomes just don’t mesh?
I think the answer is yes, the students’ art can be measured by learning outcomes, but it depends upon how those learning outcomes are defined. It also depends upon the ultimate goal of the program. If the goal of an arts program is to demonstrate competence (not talent, just competence), then yes. Those competencies can be defined for everything from the medium used for a certain piece of art to the technique for achieving one style or another. Further, what is the goal of getting a degree in the arts? If a student wants to become a graphic designer, that requires a different set of competencies than if the student wants to become a professional artiste.
I don’t pretend to have all the answers, but I will open up the conversation to you, dear reader. What do you think?
-- Rob Kadel, Ph.D.
-- Supervisor, Academic Training & Consulting
Because the accountability drums have been beating for well over ten years, most institutions are now collecting data on student performance toward mastery of learning outcomes. The question today is whether or not this data is actually being used to drive improvements in curriculum and instruction. The step of analyzing collected data, diagnosing gaps that need attention, and relating results back to student performance is referred to as closing the loop.
At Pearson eCollege we’ve been working with institutions for over three years on technology-enhanced outcome management and how it can help educators make the shift from a teaching paradigm to a learning paradigm. We’re tackling questions like which metrics provide the best data for academic leaders working to improve student mastery of outcomes, and how to document the discussions that take place to support the assessment of student learning.
Clearly there isn’t a single right answer, but it’s important that campus leaders participate vigorously in the debate on these issues. A common problem we find (in fact, we’ve struggled with it ourselves) is that we get paralyzed trying to achieve perfection before initiating a new process, which only delays the iterative work of continuous improvement.
The Pearson eCollege Assessment and Analytics Consultants will be hosting a workshop at the Pearson Cite conference next week in Denver on how to “close the loop”. We’d love to hear your comments or thoughts about additional key questions we should be considering and invite you to join us if you’re attending the conference.
Brian Epp | Assessment & Analytics Group Manager - Academic Training & Consulting | Pearson eCollege
When you see a pizza on TV, then get hungry and order one, do you order toppings based on your own unfailing desires? Or is it possible that, since the commercial had extra cheese, pepperoni, and mushrooms, you just went with that?
I recently drove to Nebraska for an eLearning conference, and since that gave me about 12 uninterrupted hours in the car, I got a book on CD. Nudge, by Thaler and Sunstein (2008), is an interesting argument about choice. While I liked some of the book’s tangential ideas a great deal, the authors’ main premise is that the idea that “each of us thinks and chooses unfailingly well” is a myth.
(Tangential idea I liked the most: Most people assume they are "better than average" at most things important to them. Most people feel they are "better than average" drivers, "better than average" parents, and "better than average" instructors...)
Thaler and Sunstein instead believe that the choices we make are based on one of two things: quick-reacting instinct, or deliberate reflection. As such, the authors suggest that the way in which choices are offered makes a monumental difference in the choice that is actually picked.
For example, when a group of people was asked if they considered themselves “happy,” they gave a fairly high number (on a scale of 1-10) on average. If this question was followed up with “Are you dating anyone?” or “Are you intimate with your spouse regularly?”, the answer was quick and reflective of the first answer.
However, when the two questions were reversed, things changed. When people first think about dating or intimacy with their partner, most people tend to feel that (objectively speaking) they are not getting enough dates or intimacy at all. So, when this question is followed with, "Are you happy?", the average numbers fell drastically.
In other words, the way that we architect choices determines how those options will actually be chosen.
I am reminded of an instructor in my undergraduate program who crafted a final exam where every answer was 'B'. It was the cruelest test I have ever taken. Why? Because the psychology of the test began to overshadow the content! Whether I knew the content or not didn’t matter after a while. Surely no professor would use the same distractor as the answer for all 100 questions, right? So the choices I made were not based on unbiased reasoning through the questions. Instead, they were based on conspiratorial fear and mistrust.
That may be an extreme example, but it makes me wonder how many students succeed or fail in our online classes because we aren’t very good choice architects. How often do we assume that directions aren’t required because students “get” online learning better than we do? How often do students go unnoticed (and therefore drop out) in discussions because we don’t pay attention, week after week, to which threads we respond to? How often do we create software solutions that don’t work for every student, thereby creating an unlevel playing field?
As instructors, we are choice architects to be sure. As ONLINE instructors, the number of choices we set up for our students grows almost exponentially. Do we use blogs, wikis, journals, or discussions? Why? Do we ask students to participate in groups? If so, do they self-select or do we assign? Why? Do we use YouTube to present? Do we use Quora to poll? Do we use test banks to quiz? Why?
Every choice we make impacts every choice they make. So, what choices have you given your students lately?
Good luck and good teaching...
Jeff D Borden, M.A.
Sr Dir of Teaching & Learning