QSSET for Instructors

Setting up QSSET

Before the start of every semester, data collection spreadsheets are emailed to all departmental administrators asking which courses are to be surveyed with QSSET and, for each of those courses, which week would be preferable for the survey to take place. After receiving these spreadsheets, departmental administrators should reach out to instructors for their input on which survey week would be best. If no preference is indicated on the spreadsheet, the QSSET survey will take place in the default week (the second-to-last week of class, relative to the course end date).

What is needed from Instructors to set up QSSET surveys:

  1. Respond to your departmental admin and select the week in which you wish your course survey to take place. The survey must be held during the last three weeks of class, pursuant to the Queen’s-QUFA Collective Agreement. 
  2. Notify your class of the week the course survey will be conducted.
    1. For on-campus courses: Work with your departmental admin to arrange a Class Student representative who will provide instructions to the students.
  3. Receive a reminder email when your course survey opens.  Remind your class to complete the survey.
    1. Provide in-class time for survey completion, if applicable.
      Note: If any students are experiencing difficulty accessing or submitting their survey, please advise them to send an email to qsset@queensu.ca for assistance.
  4. Post grades at end of term.
  5. Receive survey results after grade submission (i.e. early January for Fall courses, early May for Winter courses).

Additional Notes:

  • Surveys are not conducted for classes if there are no students registered in the course, or if there is no instructor of record attached in PeopleSoft/SOLUS.
  • Certain courses may not be appropriate for QSSET. Examples are independent study and project courses, where the instructor of record in PeopleSoft is not working directly with the students. Currently, requests for exemption should be submitted to qsset@queensu.ca for individual consideration; we are exploring options to add these exemptions to the PeopleSoft course catalog, allowing the process to be automated in the future.
  • QSSET results are shared with instructors in full, and with department heads and deans in summary form.
  • Heads and deans receive only numerical answers; instructors receive numerical and text answers.
  • All surveys are anonymous. Student email addresses are stored in the system and used only to ensure that each student can complete the survey for a given class once.
  • Although no Queen’s staff can view which student provided a certain response, it would theoretically be possible to obtain this information from the database.  This would only be considered in serious cases of potential misconduct or criminal activity.
  • Once a survey closes it cannot be reopened.  If a student misses their opportunity to provide feedback, we encourage them to speak directly with their instructor and/or department head, as appropriate.

Using QSSET

The purpose of this document is to assist instructors in interpreting QSSET results and in presenting those interpretations to Heads, Deans, RTP committees, or any others who will use QSSET results in making decisions about the instructor.

QSSET and the Evaluation of Teaching

Article 29.3.1 provides that a survey approved by QUFA and the University, now QSSET, will be used in the assessment and evaluation of teaching. However, it is important for instructors to recognize that this survey is not in itself an assessment and/or evaluation of teaching but one source of evidence which Heads, Deans, members of RTP committees and others will consider in the course of assessing and evaluating teaching.iv The assessment of teaching as it is described in Article 29 requires the consideration of matters that extend well beyond the scope of QSSET, or any survey of students. Article 29.1.2 of the QUFA-Queen’s Collective Agreement provides: “For assessment and evaluation purposes, teaching includes all presentation whether through lectures, seminar and tutorials, individual and group discussion or supervision of individual students’ work in degree-credit programs.” Article 29.1.3 adds that “Assessment and evaluation of teaching shall be based on the effectiveness of the instructors, as indicated by command over subject matter, familiarity with recent developments in the field, preparedness, presentation, accessibility to students and influence on the intellectual and scholarly developments of students.”

However, as a one-on-one form of instruction, supervision cannot be assessed through surveying. Moreover, students do not have the expertise to comment on matters such as command over subject matter and familiarity with recent developments in the field. QSSET has been designed in recognition of the value of information about students’ experience, but also of the limitations in students’ ability to perform a full and valid assessment of teaching, which mean that the surveys cannot serve as a proxy for evaluation. The QSSET design furnishes opportunities for instructors to interpret student responses in relation to multiple determinants of students’ experience of teaching. The limited scope of student surveys places an onus on instructors to furnish supplementary information or material so that Heads, Deans, RTP committees and other evaluators can make best use of QSSET and evaluate aspects of teaching that a survey of students cannot encompass.

QSSET Design

QSSET acknowledges that students’ experience of teaching is affected by factors beyond the instructor’s control. These include the student’s own preparation for and engagement with the course, marking that may not have been performed by the instructor, any course materials not prepared by the instructor, the adequacy of the classroom, and/or technological support. The questions under “Student,” “Course,” and “Infrastructure” provide context to help those assessing and evaluating teaching determine how well the scores on the “Instructor” questions reflect the instructor’s actual teaching. Only scores on questions under the heading “Instructor” are to be used directly in the evaluation and assessment of teaching. The exceptions are where the assessor knows that the instructor also performed all evaluations of student work and/or was responsible for the design of the course and the presentation of the course materials. In such cases, appropriate questions under “Course” should be considered as well.

However, instructors must be aware that Heads, Deans, RTP committees and any other evaluators may not know which courses the instructor designed and/or prepared the materials for. They may also not be aware of the extent to which the instructor performed assessment in the course. Article 28.2.4 of the Collective Agreement provides that, for the purpose of Annual and Biennial reports, “it is the Member’s responsibility to provide…sufficient detail of activities and their outcomes to enable the Unit Head to assess the Member’s performance.” Moreover, for most personnel decisions the onus is on the Member to demonstrate that standards have been met. For these reasons, it is in the instructor’s interest both to ensure that assessors have adequate information to evaluate QSSET results appropriately, and to supplement QSSET with additional material as necessary.

Using QSSET

QSSET is designed to present correlations between student perceptions of instructor effectiveness and other factors that influence their experience of the course. Consider the questions under “Student,” which ask students to reflect on their own relation to the course. While the reflections these questions prompt may temper students’ responses in the “Instructor” section, the students’ responses also provide information to the assessor about what the instructor was up against, or alternatively what advantages the instructor may have enjoyed. For instance, if students do not indicate strongly that the “course fits their interest,” and the course is a tough, required course—or alternatively an elective for which, because of resource constraints, students have few options—the instructor may wish to remind the assessor of that fact in explaining less-than-enthusiastic responses to the “Instructor” questions. Alternatively, if in these circumstances students rate an instructor as highly effective, the instructor may wish to underscore this fact in the context of their particular decisions on how to present the course material. Students’ dissatisfaction with marking, frustration with IT support, or dislike of the course materials may cause them to experience an instructor as less effective even when these factors lie beyond the instructor’s control; these, too, are correlations that the instructor can recognize and point out.

Because instructors receive the individual responses to the survey while Heads, Deans, RTP committees and other evaluators see only aggregated data, instructors can play a role in framing their results for assessors. They can demonstrate correlations that may not be visible to assessors. For instance, if the same students who indicate that they were not prepared for class rate the instructor’s effectiveness low while better-prepared students indicate greater satisfaction, this correlation suggests that lower ratings may be due to factors other than poor instruction. Moreover, if instructors feel that the written comments illuminate the survey results, they can help assessors of teaching by passing them on in a teaching dossier. QSSET is designed so that students can furnish written comments at the end of each section as well as at the end of the survey, but per Article 29.3.7 those comments are seen only by the instructor unless the instructor chooses to share them.

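As a purely hypothetical illustration (not a QSSET feature), the short Python sketch below shows how an instructor with access to their individual responses might tabulate the kind of correlation described above. The pairing of items, the 1-5 scale, and the example data are all assumptions made for illustration only; actual QSSET items and scales may differ.

from collections import Counter

# Hypothetical individual responses: (self-reported preparedness, rating of
# instructor effectiveness), both on an assumed 1-5 scale.
responses = [
    (1, 2), (2, 2), (1, 3), (2, 3),   # less-prepared students, lower ratings
    (4, 5), (5, 4), (4, 4), (5, 5),   # well-prepared students, higher ratings
]

# Compare the distribution of effectiveness ratings across preparedness groups.
low_prep = [eff for prep, eff in responses if prep <= 2]
high_prep = [eff for prep, eff in responses if prep >= 4]

print("Less-prepared students:", sorted(Counter(low_prep).items()))
print("Well-prepared students:", sorted(Counter(high_prep).items()))
print("Mean rating (less prepared): %.1f" % (sum(low_prep) / len(low_prep)))
print("Mean rating (well prepared): %.1f" % (sum(high_prep) / len(high_prep)))

A brief table or summary of this kind, included in a teaching dossier, can make such a pattern visible to assessors who otherwise see only aggregated data.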

Because QSSET directs attention to the correlation between students’ ratings of teaching effectiveness and the circumstances in which the teaching was conducted, the data it yields are presented as a distribution of responses rather than through means and standard deviations, as was done with USAT. Here too there are opportunities for instructors to provide valuable interpretation. Persistent bimodal responses for a particular course may indicate that the instructor is teaching controversial material—off-putting to some students but exciting to others. Or it may indicate that the instructor’s teaching is highly effective and stimulating for well-prepared students but loses less well-prepared ones, a hypothesis that could be further supported by responses about preparation for work at the course level in the “Student” section. Instructors have a role to play in assisting the evaluator in interpreting such results by providing context about the course, their approach, and challenges they may face.

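The following is a minimal, hypothetical Python sketch (not QSSET output; the 1-5 scale and the example ratings are assumed) of why a distribution of responses is more informative than a mean and standard deviation: a bimodal set of ratings that looks like a middling result when averaged becomes apparent when the full distribution is shown.

from collections import Counter
from statistics import mean, stdev

# Assumed bimodal ratings: half the class rates the teaching 5, half rates it 2.
ratings = [5, 5, 5, 5, 5, 2, 2, 2, 2, 2]

# The mean and standard deviation hide the split...
print("Mean: %.1f  SD: %.2f" % (mean(ratings), stdev(ratings)))

# ...while the distribution of responses reveals the two modes.
distribution = Counter(ratings)
for level in range(1, 6):
    print(f"Rated {level}: {distribution.get(level, 0)} students")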

QUFA Members should also be mindful that if they wish to be assessed on the full range of teaching-related matters that assessors and evaluators of teaching are supposed to consider per Article 29.1.5, then they must provide assessors with appropriate information, and the way to do this is through a teaching dossier. Article 29.2 describes the purpose and possible contents of a teaching dossier, which may be submitted for all personnel processes.

Bias and Inappropriate Comments

Finally, it is important to note that scholarship regarding student evaluations of teaching indicates that responses can be biased with respect to factors not relevant to teaching quality. With respect to gender bias, research findings are complex and often contradictory, but the general conclusion is that when biases exist, it is female professors who are disadvantaged.v Further, it appears that students have different expectations of male and female instructors based upon gender stereotypes.vi For example, female instructors are generally rated higher on questions pertaining to interpersonal skills; however, when they are perceived to be weak in this area, they are rated more harshly than their male counterparts. Gender biases have been shown to exist both in the quantitative survey items and in the comments. The problem of bias is intractable because the bias lies in the students rather than in the survey tool itself. It should be noted that while gender bias is the most studied, other forms of bias based on race, attractiveness, age, accent, and other factors have been shown to exist. Evaluators of teaching need to be mindful of potential bias when considering QSSET results; instructors who feel that such bias may have played a role in their QSSET results may wish to remind evaluators of this possibility.

 


iv It is the instructor’s responsibility to provide materials that support a full assessment. In all RTP processes save Renewal of Tenure-Track appointments, the burden of demonstrating that the required standard has been met is on the Member. In the case of Annual/Biennial reviews, Article 28.2.4 requires the Member to provide “sufficient detail of activities and their outcomes to enable the Unit Head to assess the Member’s performance” and where the Member fails to do that the Unit Head is to base their assessment and evaluation on the “information reasonably available” to them.
v MacNell, Driscoll, & Hunt, 2015; Young, Rush, & Shaw, 2009.
vi Mitchell & Martin, 2018.

 

This information is also available as a PDF:

 QSSET for Instructors - Using QSSET (PDF, 18 KB)