Currently available course design rubrics can be valuable tools, but they do not address several important issues related to course quality. We'll examine five additional areas that should be considered when working to improve the quality of online courses.
Five Important Things You Won't Find in a Course Quality Rubric - Barry Dahl
1. Barry Dahl, Teaching & Learning Advocate, D2L
Five Important Things You Won't Find in a Course Quality Rubric
2. Brief History of Online Course Design Rubrics
• 2002: Cal State Chico developed the ROI (Rubric for Online Instruction)
  • Implemented at Chico in January 2003
• 2003: FIPSE Grant won by Maryland Online: "Quality Matters: Inter-Institutional Quality Assurance in Online Learning"
  • Grant term: Sept 2003 – Aug 2006
3. A Brief Personal History
• Feb. 2004 – QM workshop at ITC eLearning Conference
• May 2004 – Faculty at my college voted to create a peer review process using V1 of the QM rubric as the course design recommendations
• 2005–present – former college still uses a peer review process with a rubric adapted from V1 of QM
• 2010 – began consulting gigs promoting the use of a three-pronged approach to improving online course and program quality
• 2012 – began work at D2L – and here we are today
10. Purposes of Instructor Evaluation
01. Encourage excellence in teaching and learning.
02. Facilitate long-term quality improvement by monitoring instructional performance.
03. Provide constructive feedback by identifying strengths and areas for improvement.
04. Inspire professional growth and development.
11. Faculty Evaluation Components
• Student Evaluation of Instruction
• Faculty Self-Evaluation
• Professional Development Plan
• Faculty Peer Review
• Faculty Portfolio
• Supervisor’s Evaluation of Faculty
Typically some combination of the above components
12. Barry's Thoughts on Online Instructor Evaluation
• It's not the same
• Classroom observations in the traditional classroom are mostly for show
• Observers/evaluators need to "get" the art of online teaching
• It CAN be done.
Item #1
15. Examples of Collegewide Assessments
• Collegewide Outcome #1: Written and oral communication
  • Practiced extensively, across the curriculum, in the context of progressively more challenging problems, projects, and standards for performance
• Assessment Projects for #1: Assessments of writing skills
  • Scored by a cross-disciplinary team of faculty
  • Administered to a random selection of students from across the college
  • Segregate writing samples of online students for comparison to on-campus students
18. Why Do Students Take Online Courses?
Caveat: assuming they aren’t forced to by a
worldwide pandemic or something similar…
19. Ruffalo Noel Levitz PSOL
• This survey can tell you how satisfied your students are and what issues are really important to them in the areas of:
  • Academic services
  • Enrollment services
  • Institutional perceptions
  • Instructional services
  • Student services
20. PSOL Factors to Enroll in Online
46. Financial assistance available
47. Future employment opportunities
48. Reputation of institution
49. Work schedule
50. Flexible pacing for completing a program
51. Convenience
52. Distance from campus
53. Program requirements
54. Recommendations from employer
Students rank each factor on how important it is to them.
21. Most Important Factors to Enroll Online
46. Financial assistance available
47. Future employment opportunities
48. Reputation of institution
49. Work schedule
50. Flexible pacing for completing a program
51. Convenience
52. Distance from campus
53. Program requirements
54. Recommendations from employer
These 3 factors always rank as the most important.
22. Flexibility
• #1 reason why students take online courses
• Not distance, not cost, not preferred learning modality
TIME FLEXIBILITY
23. 3. Windows of Opportunity
What is a Window of Opportunity? The amount of time that an online student is given to complete a task, such as:
• take a quiz,
• complete and submit an assignment, or
• post to a discussion, et cetera
Item #3
24. Example: Required Writing Assignment
• Opens Friday morning and closes Sunday night.
• Three days (but only two sleeps) might not seem too short for some people, but it certainly would for others, depending on their circumstances.
"Piece of cake. That's perfect for me." / "No way can I get that done in time."
27. 4. Student-Friendly Flexible Design
Course design choices made for online courses tend to fall into one of these categories:
1. Student-friendly
2. Faculty-friendly
3. Both
Item #4
28. Example of "Friendliness"
• Instructor 1 is going to post a schedule for weekly online office hours.
  Step 1: instructor looks at his weekly schedule to see what will fit in
  Step 2: chooses days/times that fit nicely for him
  Step 3: posts the schedule
• Instructor 2 is going to post a schedule for weekly online office hours.
  Step 1: instructor surveys students to see which days and times they are available to meet, if needed
  Step 2: chooses days/times that work best for the most students
  Step 3: posts the schedule
30. Beyond Design: Instructional Procedures that are Student-Friendly
• Again, if the focus is placed squarely on the course design, various things will slip through the cracks.
• This next example is a case of instructor behaviors (practices), and it is unlikely that someone would design/plan the course in this manner.
31. Example of Untimely Feedback
Monday – Module 1 Essay Questions due date
Thursday – Module 1 Exam due date
Friday – Feedback on Essay Questions received by students
32. Timely Instructor Feedback
• Timely and useful feedback is needed to stay on track in online courses.
• Plan to set aside sufficient grading time soon after the assessment is due.
• Some faculty find that they save time by providing audio or video feedback rather than text, which can also increase your instructor presence with online students.
33. Barry's Thoughts on Student-Friendly Design
• Being student-friendly does not require you to be faculty-unfriendly.
• Synchronous activity alternatives?
• Form student groups based on availability?
• Give the option of not working on weekends or holidays.
Item #4
34. 5. Continuous Quality Improvement (CQI)
• Not "one and done"
• Iterations on quality improvement – don't eat the whole elephant.
• In and of itself, a course quality rubric is a very small cog in an overall CQI process.
Item #5
35. Barry's Thoughts on Continuous Quality Improvement
Some of the important things you won't find in a Course Quality Rubric:
• Institutional support of DE
• Technology support of DE
• Instructional design support
• Online course development support
• Ongoing PD for online instructors
• Robust online student services
• Ongoing research and a culture of evidence
Item #5
36. Putting it All Together
[Diagram combining the pieces: Course Design, Instruction, Learning Achievement, Windows of Opportunity, Student-Friendly Design, and Continuous Improvement]
Editor's notes
Hello everyone. Thanks for joining us today
Many of you know me but for those of you who don't, my name is Barry Dahl, and I serve as the Teaching and Learning Advocate at D2L. After spending 27 years working inside Higher Ed, I've spent the last 10 years working with D2L primarily providing professional development opportunities for faculty and support staff who use D2L.
I also had the pleasure of serving on the Board of Directors of the Instructional Technology Council for a few years while still working at a college in Minnesota. This is an organization that's very near and dear to my heart. Let's get started with the next session, titled: Five Important Things You Won't Find in a Course Quality Rubric.
Let's start with a brief history of course design rubrics for online courses.
The first one that I know of and can find in the research was from 2002: the Cal State Chico Rubric for Online Instruction, or ROI.
This preceded the quality matters rubric by about a year or more.
Quality Matters began in 2003 when they were awarded a substantial FIPSE Grant which covered a three-year term ending in 2006.
There are many other course design rubrics as well, but these two seem to encapsulate the start of the rubrics era.
My own history with course design rubrics started in February 2004, when I attended the ITC eLearning Conference and the people from Maryland Online were doing a workshop about Quality Matters.
Because it was funded by a FIPSE Grant, they needed to share their developments during that Grant period, and they did share version one of their quality matters rubric with the audience at the conference.
That same spring, I brought the rubric back to my College in Minnesota and with a large group of Faculty members we started working on our own process based on an adaptation of the quality matters rubric to create an online course peer review process.
I'm pleased to say that 18 years later that process is still in place and has only grown in quality and acceptance over the years, or so they tell me.
I also spent a fair amount of time working with other colleges as a consultant to take a look at their online course development procedures as well as accreditation efforts for online programs.
In 2012 I started at D2L, and no longer work directly with course quality improvement processes, but I help direct the Course Quality Improvement Affinity Network at our annual Fusion conference and throughout the year.
When I was doing my consulting work, this diagram got a lot of use. Once upon a time I called this my pyramid of e-Quality, and a faculty member quickly pointed out that it’s 2D, so it’s not a pyramid. So, I didn’t call it that anymore.
Part of my message was that there were at minimum three major components of e-learning quality.
As you can see here in this diagram the three different colored triangles represent the quality of learning, of teaching, and of design. I guess you could say that the overall triangle represents the totality of e-learning quality – but that would only be true if lots of other factors were living in that white space in the middle.
My contention then and it's still my contention today is that all three of these components are important when considering the quality of online courses.
The three components can be measured in many ways, but the most common ways are shown on the slide:
through assessments, we measure the level of learning achieved;
through some sort of performance evaluation process, we measure the level of teaching that occurred;
and through things such as the Quality Matters rubric and similar instruments, we measure the quality of online course design.
Plenty of people do presentations on the course design rubrics, so we are not going to be talking about that today. So, what are we talking about?
We are going to be looking at 1) the quality of teaching, 2) the quality of learning, and then three other items that are not just tangential to the quality of an online course, but I would argue that they are central to it.
Let’s start with the quality of instruction.
One of the issues with relying on a course design rubric is that although it looks at how the course is designed to operate, it is typically evaluated looking at an inactive course. You can't (typically) look at what is actually happening while the course is being taught. In other words, it doesn’t look at how well the course is taught.
For example, a course design rubric typically includes standards related to interaction, such as interactions between students and instructors. However, designing for interaction doesn’t answer the question about whether those interactions actually occurred and at what level they occurred and what value was achieved.
Online Instructor Evaluations are critical to the overall e-Quality process, but they are not without a few potential hurdles or issues.
What are some of the potential issues with conducting an instructor evaluation to determine the “quality” of the instruction?
Here are a few of them:
Many institutions do not have an evaluation instrument appropriate for online teaching
Many institutional leaders do not have the experience with online teaching & learning to be effective evaluators.
Although classroom observations have a long history and are generally accepted practice, observations of the online classroom do not and are not.
However, all the items above have been improving over the past 20 years.
What are the goals or purposes of instructor evaluations?
Things such as:
Encouraging excellence
Monitoring instructional performance
Providing constructive feedback
Inspiring professional growth
Faculty evaluations typically are made up of some combination of the following:
Student Evaluation of Instruction
Faculty Self-Evaluation
Professional Development Plan
Faculty Peer Review
Faculty Portfolio
Supervisor’s Evaluation of Faculty
Some of my thoughts about this process:
Evaluating people who teach online is not the same as evaluating people who teach on-campus (of course many people do both)
Classroom observations in the traditional classroom are mostly for show, with the dean or other supervisor sitting in the back of the classroom observing some or all of a single class period.
In the online classroom, observation can be very informative and instructive, particularly if the observers/evaluators are well-versed in the art of online teaching.
Some schools have 20-25 years of experience with this process – so it CAN be done. (I used to often hear the objection of "we just can't do it!")
Let’s move on to the quality or level of learning that happens in online classes.
There are really two lenses that I think need to be examined here: course-level outcomes and college-wide outcomes.
We have a problem if you can’t provide evidence of students successfully completing course outcomes. We also have a more general problem if online students overall are less successful than F2F students in achieving college-wide outcomes.
Let’s start with assessments of collegewide and Program-level learning outcomes.
Do your online students achieve the Collegewide Outcomes and/or Program Outcomes at similar levels as F2F students?
Are you even including (and segregating) online-only students in your collegewide assessments?
Have you been asked by your accrediting bodies for this information? Or have you reported on this info to your accreditors as part of a review process?
An example from my former college
Collegewide Outcome #1
Written and oral communication
Practiced extensively, across the curriculum, in the context of progressively more challenging problems, projects, and standards for performance
Assessment Projects for #1
Assessments of writing skills
Scored by a cross-disciplinary team of faculty
Administered to random selection of students from across the college
Segregate writing samples of online students for comparison to on-campus students
And what about course-level learning outcomes?
Do your online students achieve the expected course Outcomes at similar levels as F2F students in other sections of the same course? Are you tracking Competencies with different groups of learners and comparing?
What can you learn from this? One thing is you can discern your own slice of the pie of the “No Significant Difference” phenomenon and possibly drill into smaller differences in the courses you teach.
What to do about a lower-performing group? Possibly remediation for one group or the other, or redesign activities and assessments to help the lower-achieving group be more successful.
Develop a Culture of Evidence, and this will be part of it.
Not sure if you've noticed, but there are still, in 2022, skeptics of online learning. Be prepared to answer them with data.
If you say “but we can’t do that with online students…” then we have a bigger problem to deal with
This was a big deal 15-20 years ago when getting online programs approved by regional accreditors. It still might be a big deal for your accreditor.
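The group comparisons described above can be sketched in a few lines of code. This is only a minimal illustration (the scores, section labels, and passing threshold are all hypothetical, not from any real assessment): summarize rubric-scored outcomes for online vs. on-campus sections, segregating the two groups as the collegewide writing assessment did.

```python
from statistics import mean, stdev

def summarize(scores, passing=70):
    """Summarize one group's outcome scores: mean, spread, and pass rate."""
    return {
        "mean": round(mean(scores), 1),
        "stdev": round(stdev(scores), 1),
        "pass_rate": round(sum(s >= passing for s in scores) / len(scores), 2),
    }

# Hypothetical rubric-scored writing samples (0-100) from two delivery modes
online = [72, 85, 64, 90, 78, 69, 81, 74]
campus = [75, 82, 70, 88, 77, 73, 80, 79]

print("online:", summarize(online))
print("campus:", summarize(campus))
```

From a summary like this, a formal comparison (for example, a two-sample t-test) could then tell you whether any gap between the groups is meaningful, which is exactly the kind of evidence a skeptic or an accreditor will ask for.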
Let’s examine this question: Why Do Students Take Online Courses?
How do you answer this question: why do students take online courses in normal times, when they are making normal decisions for themselves? In other words, not during a pandemic when remote (online) learning was the only option. I am talking about why students take online courses when they have choices related to the course delivery method.
Many of you are probably familiar with the PSOL, the Priorities Survey of Online Learners from Ruffalo Noel Levitz. Back in my college vice-president days, we used the survey five times between 2003 and 2009 to gather data about student satisfaction as well as some other factors related to their online learning experiences at the college. I have continued to follow the results of this annual survey since then – partly because I'm a bit of a data nerd, but also because this survey is the one place where I learned the most about our online students, and especially how our survey results compared to others who used the survey.
The crux of the survey is satisfaction-based, looking at how satisfied online students are with various services at the college and the quality of instruction received.
There's also another portion of the survey that asks about the factors or reasons why they chose to enroll in online courses. You can see the list of nine items on the screen. Three of those nine items consistently have much higher importance levels than all the other items. Which three do you believe would be at the top of their lists?
If you guessed the three items in the middle, you are correct. Work schedule, flexible pacing, and convenience consistently rank as the most important factors.
Almost all of the data that I have seen, both organization-specific as well as nationwide data, point to these three items being by far the most important factors that drive student enrollment in online courses. This is especially true at the undergraduate level. So, looking at work schedule, flexible pacing, and convenience, we see a common thread.
It seems perfectly logical to me to look at those three items and their similarities and come up with the number one reason why students take online courses. We talk about distance education, but it's not really distance. It's not the cost of education, because sometimes it's higher than face-to-face education; in fact, usually it is higher. The idea of preferred learning modality is kind of a bunch of hooey.
Instead, those things all relate to the students having time flexibility in their lives. The ability to fit an education into their otherwise full schedules and busy lives.
So, the third factor on our list of five factors today has to do with windows of opportunity. Again, this is not something you'll find in a course design rubric, but it's incredibly important for students to be able to manage their time constraints related to coursework.
A window of opportunity is the amount of time that an online student is given to complete a particular task or activity or a series of actions. Faculty members usually make the call on how long the window of opportunity will be for each item, which makes sense since they are the ones designing the activities, but ensuring that students have a large enough window of opportunity is key to student success.
Here's a very simple example, not meant to insult anybody's intelligence, but just to make sure we illustrate what I'm talking about. A writing assignment that the professor posts on Friday morning and the due date is Sunday night so students have three days, in other words, the weekend. For one student that might be awesome. Plenty of time to get it done. For another student that may be incredibly short and they're going to have to disrupt their lives in order to meet the due date.
And what about weekends?
Once upon a time, I took the advice of two students and tried having just one big window for a 16-week Personal Finance course. Everything was available on day one and everything was due on the last day. There are several reasons that this is not a great idea, unless you're aiming for an online correspondence course with very little student interaction – not what most of us are shooting for.
The result was that it worked great for two or three students, and not so great for everybody else. 16 weeks to get the work done, and many students waited until week 14 or 15 to get serious about doing the work. Procrastination is the king of the world.
What would Goldilocks say? How do you know if they’re too long, too short, or just right?
We could make up an answer…but…Common sense goes a long way
Big projects need bigger windows
Student situations are not homogenous – it’s not one size fits all
Consistency is very helpful to online students.
Keep windows similar for similar tasks.
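A simple audit script can make those rules of thumb concrete. This is only a sketch with made-up dates and a crude weekend heuristic (anything not drawn from the talk itself is hypothetical), not a real LMS integration:

```python
from datetime import date

def audit_window(task, opens, closes):
    """Return the window length in days and flag short, weekend-only windows."""
    length = (closes - opens).days
    # weekday() is 0=Mon .. 6=Sun; flag windows confined to Fri-Sun
    weekend_only = opens.weekday() >= 4 and closes.weekday() >= 4 and length <= 2
    return task, length, weekend_only

# Hypothetical windows of opportunity for one course
windows = [
    ("Quiz 1",  date(2024, 9, 2), date(2024, 9, 9)),   # a full week
    ("Essay 1", date(2024, 9, 6), date(2024, 9, 8)),   # Friday through Sunday
    ("Quiz 2",  date(2024, 9, 9), date(2024, 9, 16)),  # a full week
]

for row in windows:
    task, length, weekend_only = audit_window(*row)
    flag = "  <-- short, weekend-only window" if weekend_only else ""
    print(f"{task}: {length} days{flag}")
```

Note how the two quizzes (similar tasks) get similar windows, while a Friday-to-Sunday essay window like the earlier example gets flagged as one that forces weekend work.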
Generally speaking (with exceptions), design choices that are made for online courses tend to fall into one of these categories:
Student-friendly
Faculty-friendly
Both
Here’s a simple example
Instructor 1 is going to post a schedule for weekly online office hours.
Step 1: instructor looks at his weekly schedule to see what will fit in
Step 2: chooses days/times that fit nicely for him
Step 3: posts the schedule
Instructor 2 is going to post a schedule for weekly online office hours.
Step 1: instructor surveys students to see which days and times they are available to meet, if needed
Step 2: chooses days/times that work best for the most students
Step 3: posts the schedule
Instructor 1 is faculty-friendly. Instructor 2 is student-friendly.
Other design considerations to make your course student-friendly
Days and times of the week when due dates are scheduled
Required synchronous activities
Required group work
Modes of contact for online office hours; both the timing of instructor availability and the mode/methods of contact.
Appropriate amount of course content
Extra credit or make-up opportunities for required coursework that is missed
Looking beyond the course design. What about some instructional practices that are Student-Friendly?
Again, if the focus is placed squarely on the course design, various things will slip through the cracks.
This next example is a case of instructor behaviors (practices) and it is unlikely that someone would design/plan the course in this manner.
This is an example of untimely feedback from the instructor to the student.
On Monday the student is supposed to turn in essay question answers for module one. The module one exam is on Thursday of that same week, but the instructor feedback on the answers to the essay questions doesn't get delivered to the students until Friday after the exam.
Again, it's unlikely that someone would design the course with this in mind, but reality doesn't always match design.
Instructor Feedback is one of the items that the PSOL survey shows is super important to students but with which they typically have a low satisfaction rate.
Timely and useful feedback from instructors is important for keeping students on track in online courses.
Plan to set aside sufficient grading time soon after the assessment is due.
Some faculty find that they save time by providing audio or video feedback rather than text, which can also increase your instructor presence with online students.
Being student-friendly does not require you to be faculty-unfriendly.
If synchronous activities are required, have you provided opportunities for students who cannot be in a certain place at a certain time?
If group work is required, have you provided opportunities for students to form groups based on availability?
Are your due dates scheduled in a way, in combination with the length of the windows of opportunity, that provide students with the option of not working on weekends or holidays?
The fifth item has to do with the concept of CQI, or Continuous Quality Improvement.
Don’t treat your Quality Process as a “one and done” operation.
By making it a continuous process, you can make progress over time rather than trying to achieve greatness all at once.
In and of itself, a course quality rubric is a very small cog in an overall CQI process.
Some closing thoughts about Continuous Quality Improvement and the shortcomings of course design rubrics
Some of the important things you won’t find in a Course Quality Rubric:
Institutional support of online education
Technology support for online education
Instructional design support
Online course development support
Ongoing professional development for online instructors
Robust student services for online students
Ongoing research and a culture of evidence
Now comes the easy part. Just take all these moving pieces and create yourself a well-oiled machine of e-Learning greatness.