QSSET for Heads and Deans
Using QSSET in the Evaluation of Teaching: For Heads, Deans, and RTP Committees
The purpose of this document is to assist Heads, Deans, RTP committees, and any others who evaluate teaching under Article 29 of the Collective Agreement in using QSSET responses as part of the evidence of teaching effectiveness they must consider in merit, renewal, tenure, promotion, and continuing appointment decisions.
QSSET and the Evaluation of Teaching
Article 29.3.1 provides that a survey approved by QUFA and the University, now QSSET, will be used in the assessment and evaluation of teaching. However, it is important for evaluators to recognize that this survey is not in itself an assessment or evaluation of teaching but one source of evidence that Heads, Deans, members of RTP committees, and others will consider in the course of assessing and evaluating teaching.[i] The assessment of teaching as it is described in Article 29 requires the consideration of matters that extend well beyond the scope of QSSET. Article 29.1.2 of the QUFA-Queen’s Collective Agreement provides: “For assessment and evaluation purposes, teaching includes all presentation whether through lectures, seminars and tutorials, individual and group discussion, or supervision of individual students’ work in degree-credit programs.” Article 29.1.3 adds that “Assessment and evaluation of teaching shall be based on the effectiveness of the instructors, as indicated by command over subject matter, familiarity with recent developments in the field, preparedness, presentation, accessibility to students and influence on the intellectual and scholarly developments of students.” Matters such as supervision, command over subject matter, and familiarity with recent developments in the field cannot be captured by a survey of students, as students do not have the knowledge or expertise to provide this information. QSSET has been designed in recognition both of the value of information about students’ experience of teaching and of the limitations in students’ ability to assess teaching, limitations which mean that the survey cannot serve as a proxy for evaluation.
QSSET Design
QSSET has also been designed to acknowledge that students’ experience of teaching is affected by factors beyond the Instructor’s control. These include the student’s own preparation for and engagement with the course, marking that may not have been performed by the Instructor, course materials that may not have been prepared by them, and the adequacy of the classroom and of technological support. The purpose of the questions under “Student,” “Course,” and “Infrastructure” is to provide context that helps those assessing and evaluating teaching determine how reliably the scores on the “Instructor” questions reflect the Instructor’s actual teaching. Only scores on questions under the heading “Instructor” are to be used directly in the evaluation and assessment of teaching. The exception is where the evaluator knows that the Instructor also performed all evaluations of student work and/or was responsible for the design and presentation of the course materials; in such cases, the relevant questions under “Course” should be considered as well. Evaluators may seek clarification from Instructors about responsibility for course elements should this be necessary.
Using QSSET
Consider the questions under “Student.” They ask students to reflect on their own relation to the course. While the reflections these questions prompt may temper students’ responses in the “Instructor” section, the responses also tell the evaluator what the Instructor was up against, or alternatively what advantages the Instructor may have enjoyed. For instance, if students do not indicate strongly that the “course fits their interest,” this may remind the evaluator that the course is a tough, required course, or alternatively that it is an elective but, because of resource constraints, one of few options available to students. In such circumstances students might be expected to rate an Instructor as less effective in the presentation of the material than in a course where the students are already enthusiastic about the subject matter. Alternatively, if in these circumstances students rate an Instructor as highly effective in presenting the course material, their experience may indicate the considerable pedagogical intelligence the Instructor brought to the course. If many students indicate that they were not prepared for class, their rating of the Instructor’s effectiveness in presenting the material may be less reliable, because the Instructor was reasonably assuming preparation. Students’ dissatisfaction with marking, frustration with IT support, or dislike of the course materials may cause them to experience an Instructor as less effective even when these factors lie beyond the Instructor’s control. QSSET is designed both to foreground the fact that students are testifying to their experience, not performing an evaluation, and to provide evaluators with information about the determinants of that experience. Evaluators, who will be familiar with the curriculum, the resources, and the student culture of their Units, will need to review QSSET responses for the relationships the survey questions indicate between students’ experience of teaching and the conditions in which it was conducted.
The attention QSSET directs to the relationship between students’ ratings of teaching effectiveness and the circumstances in which the teaching was conducted also requires that the data it yields be presented as a distribution of responses rather than through the means and standard deviations used with USAT. That form of presentation was always dubious, inviting comparisons between teaching conducted under vastly differing conditions. In contrast, the presentation of QSSET results allows evaluators to discern patterns and relationships. For instance, persistently bimodal responses for a particular course may indicate that the Instructor is teaching controversial material, off-putting to some students but exciting to others. Or they may indicate that the Instructor’s teaching is highly effective and stimulating for well-prepared students but loses less well-prepared ones, a hypothesis that can be checked against the responses about preparation for work at the course level in the “Student” section. Where a mean would simply indicate mediocre teaching, the QSSET distribution might show the evaluator either a strong Instructor struggling with a problem in the Unit’s curricular design or an Instructor who needs to work on explaining basic premises or concepts. The evaluator’s decision about which interpretation is best will depend on knowledge of teaching in the Unit. It may also be influenced by the Instructor’s own interpretation of the results, supplied in a teaching dossier.
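To illustrate the statistical point, the following minimal sketch, using invented numbers rather than actual QSSET data or its reporting format, shows how a polarized (bimodal) set of ratings and a uniformly middling one can share exactly the same mean, so that only a distribution of responses reveals the difference an evaluator needs to see:

    # Illustrative sketch only: hypothetical response counts on a 1-5 scale,
    # not QSSET data or the QSSET reporting format.
    from statistics import mean

    # Two hypothetical courses, 20 responses each, rated 1 (poor) to 5 (excellent).
    polarized = [1]*8 + [2]*2 + [4]*2 + [5]*8   # bimodal: students split into two camps
    middling  = [2]*4 + [3]*12 + [4]*4          # unimodal: uniformly lukewarm

    # Both means equal 3: the mean alone cannot distinguish the two courses.
    print(mean(polarized), mean(middling))

    # A simple distribution of responses recovers what the mean conceals.
    for label, responses in (("polarized", polarized), ("middling", middling)):
        counts = {score: responses.count(score) for score in range(1, 6)}
        print(label, counts)

Run as written, the two courses print identical means, while the response counts show one course dividing its students between the extremes and the other clustering them at the middle of the scale.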
Finally, it is important to note that scholarship regarding student evaluations of teaching indicates that responses can be biased by factors not relevant to teaching quality. With respect to gender bias in student evaluations of teaching, research findings are complex and often contradictory, but the general conclusion is that when biases exist, it is female instructors who are disadvantaged.[ii] Further, it appears that students have different expectations of male and female instructors based upon gender stereotypes.[iii] For example, female instructors are generally rated higher on questions pertaining to interpersonal skills; however, when they are perceived to be weak in this area, they are rated more harshly than their male counterparts. Gender biases have been shown to exist both in quantitative survey items and in written comments. While gender bias is the most studied, other forms of bias, based on race, attractiveness, age, and accent, have also been shown to exist. The problem of bias is intractable because the bias lies in the students rather than in the survey tool itself. Evaluators of teaching need to be mindful of potential bias when considering QSSET results.
[i] It is the Instructor’s responsibility to provide materials that support a full assessment. In all RTP processes save Renewal of Tenure-Track appointments, the burden of demonstrating that the required standard has been met is on the Member. In the case of Annual/Biennial reviews, Article 28.2.4 requires the Member to provide “sufficient detail of activities and their outcomes to enable the Unit Head to assess the Member’s performance”; where the Member fails to do so, the Unit Head is to base their assessment and evaluation on the “information reasonably available” to them.
[ii] MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303; Young, S., Rush, L., & Shaw, D. (2009). Evaluating gender bias in ratings of university instructors’ teaching effectiveness. International Journal for the Scholarship of Teaching and Learning, 3(2), 1-14.
[iii] Mitchell, K. M., & Martin, J. (2018). Gender bias in student evaluations. PS: Political Science & Politics, 51(3), 648-652.