Holistic Teaching Feedback and Evaluation

Teaching traditionally happens “behind closed doors.” A holistic approach to teaching feedback and evaluation begins by making these invisible interactions and systems of teaching visible.

A summary of this website can be found in the Teaching Feedback and Evaluation Brief.

FLEX Pilot

Sign up by March 7 to help us pilot FLEX, the new survey that will replace ICES.

Why do we conduct teaching feedback and evaluation?

A successful teaching feedback and evaluation process will improve the educational experience for our students by creating a culture in which our faculty

  • Celebrate excellence
  • Collaborate to create and share knowledge and practices for teaching
  • Encourage experimentation through scholarly and curious approaches to teaching

How do we conduct teaching feedback and evaluation?

No single data point is a good description of teaching. Multiple types of data from multiple sources complement one another and provide a more holistic perspective. We recommend three tools to make teaching visible: instructor documentation, peer feedback and unit review, and student feedback.

These three tools can spark conversations that empower a holistic evaluation of teaching that goes beyond observations or surveys. Instead, these tools

  • Contextualize the data used for feedback and evaluation
  • Identify an instructor’s teaching strengths
  • Identify next steps to improve teaching for the instructor and their unit.

Most importantly, this is a collaborative process that relies on input from teachers, colleagues, and students. Evaluation takes place only after units have data from multiple sources on multiple occasions, allowing for robust interpretation of the data.

What is the result of teaching feedback?

A collaborative, collegial culture for teaching that encourages the sharing of knowledge, experimentation and innovation with new teaching practices, and celebration of excellence. 

What is the result of a teaching evaluation?

An approach to teaching evaluation that is holistic and collaborative provides information for senior leaders and administrators to guide decision making to identify needs, mobilize resources, guide personnel functions, and recognize excellence with awards.

Teaching Evaluation Process

A holistic approach to teaching feedback and evaluation begins with the collection of data from multiple sources (instructor documents, peer feedback and unit review, and student feedback) on multiple occasions. Instructors work with their units to curate data from those sources into a teaching portfolio that showcases their teaching. A teaching evaluation draws from the teaching portfolio and evaluates teaching along the four dimensions in the campus definition of teaching excellence, as well as other impacts (student mentoring and contributions to programs).

Instructor documents could include things like a teaching statement, course syllabi, reflections, and curricular materials. Peer feedback could include things like conversations about course materials, peer observations, student mentoring, and participation in a unit’s teaching mission in other ways. Student feedback could include things like data from a campus-level survey (currently ICES, which will be replaced by FLEX) or other evidence-based surveys given to students.

Differences between feedback and evaluation

Feedback is meant to be low stakes: a time when faculty can engage freely in dialogue about their teaching without fear of being penalized.

Evaluation is higher stakes: used to guide decision making (e.g., mobilize resources, recognize excellence with awards, guide personnel decisions).

It can be helpful for units to think of the feedback-evaluation distinction as a continuum rather than binary buckets, as the following examples illustrate.

Examples along the feedback/evaluation continuum (non-comprehensive, descriptive not prescriptive):

  • Primarily feedback: Instructor is observed by staff from the Center for Innovation in Teaching & Learning. Information from the observation dialogue remains with the instructor unless they opt to add information to their teaching portfolio.
  • Prioritizes feedback over evaluation: Instructor is given feedback from two faculty peers who are not on the P&T committee. The peers and instructor mutually agree on strengths and next steps for the instructor that will be documented in a teaching portfolio. The next steps are documented to provide data that teaching has been reflective and evolving.
  • Prioritizes evaluation over feedback: Instructor is evaluated by an ad hoc committee or department head for third-year review. The committee is provided with a teaching portfolio that includes data from multiple years from all three tools: peer feedback, student feedback, and instructor reflection. A mock promotion evaluation report is drafted and shared with the instructor. The instructor discusses the report with the department head to gain clarity about expectations and to voice needed support from the department.
  • Primarily evaluation: Instructor is evaluated by an ad hoc committee or department head for promotion using the teaching portfolio. Instructor does not see the dossier report, but the department head discusses highlights from the report with the instructor.

Components of Excellent Teaching

Teaching feedback and evaluation should align with the campus-wide definition of teaching excellence.  

Excellent teaching in the classroom at the University of Illinois Urbana-Champaign comprises four dimensions:

  • Well-designed
  • Well-delivered
  • Inclusive and ethical
  • Reflective and evolving

Excellent teaching will also have impacts beyond the classroom such as 

  • Student mentoring and supervision
  • Contributions to instructional and curricular programs

Guidelines for feedback and evaluation

We recommend that units conceptualize their process for feedback and evaluation as comprising three steps: collection of data, construction of a teaching portfolio, and evaluation of the portfolio. Each step in the process curates the data for a different audience and purpose. We analogize each of these steps to how we collect data for evaluating research/scholarship (acknowledging there are important limitations to this analogy).

Data collection

Primary purpose – make teaching visible and contribute to the exchange of ideas and practices. Data collection is similar to publishing research or the performance/display of art. We publicly capture only snapshots of much more elaborate and complex processes. These public snapshots are important vehicles for promoting the exchange of ideas and advancing the state of our fields. Critically, the process behind these snapshots allows for iteration and refinement without penalty: it is expected that drafts get reworked and papers get rejected. Units should collect data in ways that accurately capture the practices and artifacts of teaching while supporting iteration and refinement without penalty. Units can also incorporate non-public feedback at this stage (much as blind peer reviews are not made public).

Portfolio construction

Primary purpose – curate data and construct a narrative. Not all scholarly outputs are treated the same: an awarded “best paper” matters more than a work-in-progress poster presentation. A research portfolio often includes selected publications or works to showcase strengths along with a narrative that explains the selection process. Similarly, a faculty member should be actively involved in the construction of their teaching portfolio to showcase their strengths and craft a narrative of their teaching journey. Units should provide guidance and support for construction of the portfolio.

Evaluation

Primary purpose – identify progress of a faculty member toward disciplinary expectations and guide next steps for individual faculty members and units. The portfolio is reviewed by the unit and potentially external peers to summarize the degree to which a faculty member is progressing toward disciplinary expectations and norms. This could be an annual process but is more often associated with a third-year review or an evaluation when a faculty member is being considered for promotion. Units should conduct an evaluation based on the data provided in the teaching portfolio. For promotion cases, units may need to provide additional context to help college- or campus-level committees understand disciplinary norms and expectations. Ideally, evaluation is not a unidirectional process of a unit evaluating a faculty member; rather, units, colleges, and the campus should also receive feedback on how they can better support their faculty.

Example forms, processes and policies

Following are example forms, processes and policies to help units craft their teaching feedback and evaluation process. (Many resources are still being curated and will be available soon.)

General guidance
  • Sources of Data: A grid providing suggestions for where units can gather data for each of the four dimensions of the definition of teaching excellence.
  • Example Rubric (coming soon): An example rubric for providing feedback or evaluation. When used for evaluation, it is not expected that all instructors will be excellent on all dimensions; rather, units should identify expectations for achievement of the rubric levels for instructors at particular career phases or tracks (e.g., a unit may decide that junior faculty should be developing in no more than one category for promotion, but mid-career faculty should be at least proficient in all categories for promotion).
  • Example Portfolio (coming soon)
Training modules
  • Canvas Course on Peer Observation (coming soon): This training course introduces the critical distinction between evidence and evaluation during peer feedback and unit review of teaching and how focusing on evidence rather than evaluation can promote honest dialogue between observers and observed. The course describes best practices for conducting a peer observation, particularly the model of using pre-observation meetings, evidence-centered observations, and post-observation meetings. This model enables observers to learn more about the context of what they are going to observe and provide observations that are more beneficial to the faculty member being observed.
  • Teaching Philosophy/Statement Course (coming soon): This training course guides a faculty member through the process of writing a teaching philosophy statement for their promotion dossier that aligns with the Campus Definition of Teaching Excellence.
Models for annual data collection

A non-exhaustive list of models that units have used for annual data collection for feedback and evaluation:

  • Assigned committee/individual: The unit assigns a committee or an individual to oversee the feedback and evaluation processes. This would generally be considered a significant service load for those involved. This model may be necessary for larger units where coordinating through the unit EO (executive officer) would create too much administrative overhead.
  • Evenly distribute across unit: The unit EO assigns all faculty in the unit to provide peer feedback to one another, and each faculty member is responsible for documenting their own teaching. For example, every faculty member is responsible for providing feedback to two other faculty and can expect feedback from two other faculty members. The assignment process may vary based on department size or culture.
  • “Faculty trios”: The unit EO creates peer mentoring groups of three or more faculty members and members of these groups provide peer feedback for each other. This model can be especially helpful for units where the content and delivery of clusters of courses are tightly coupled.
Documenting and reflecting on your own teaching

An instructor likely knows more about the design, delivery, and goals of a course than anyone else, but they have the most conflicted perspective when it comes to interpreting whether their instruction is achieving the desired goals.

  • General guidance about what to document: An overview document for instructors.
  • Teaching philosophy/statement: Required component of a promotion dossier, but also something worth regularly revisiting as an aid for reflection and goal setting.
  • Instructor reflection templates: Structured reflections can be especially helpful for faculty to document how their teaching has evolved over time.
Peer feedback on teaching

Peers can provide perspectives on the quality of content in a course and to varying degrees the quality of pedagogical approach, but they have the least visibility into how the course actually runs or how students respond to the content and pedagogy.

Peer feedback should sometimes focus more on feedback than evaluation, prioritizing dialogue and reflection. One component of the definition of excellent teaching is that teaching is reflective and evolving: this type of feedback best supports our faculty in meeting this expectation. By focusing on feedback and dialogue, general faculty mentoring can be woven into the process.

Peer feedback is more than peer observation. Peer observations of teaching focus on whether a course is well delivered and provide little insight into whether a course is well designed. Units should use other peer feedback activities to complement or replace peer observations. Most of these other feedback activities are less time- and resource-intensive than peer observations, and your unit may already be doing some of them. For example, reviewing a new course proposal can count toward peer feedback that is focused on whether a course is well designed.

For feedback, peers can be anyone with relevant expertise and experience. While evaluations (internal or external) that are included in the promotion dossier must be written by a faculty member who is of a higher rank than the faculty member being evaluated, there is no such restriction for teaching feedback. Units are encouraged to think about who may be the best peer to provide feedback on different dimensions of teaching. For example, staff from the Center for Innovation in Teaching and Learning (CITL) may serve as peers on in-class delivery. Likewise, for some disciplines, practicing professionals (e.g., practicing lawyers, veterinarians, artists) may be the appropriate peers for reviewing the content of a course or the learning outcomes of students. 

  • Peer observation: We recommend that peer observations include a pre-meeting to establish the goals of an observation and a post-meeting during which the instructor and observers collaboratively interpret the observation data.
  • Course artifact feedback: Artifacts can be things like a syllabus, assignments, or anything else students are given to aid their learning. Feedback on these items will typically focus on the design of a course, especially whether learning resources provide appropriate support for students’ learning and are well-aligned with each other. For example, a well-written syllabus will communicate why certain activities can help students achieve the learning objectives of the course.
  • Course proposal feedback: Course proposals are an excellent opportunity to provide feedback on whether a course’s learning objectives are appropriate and prepare students to engage with the state of the art in a field.
  • Course accessibility feedback: The Center for Innovation in Teaching and Learning is providing guidance for how to make online courses comply with the accessibility standards of Title II.
  • Course descriptive data (DMI list of courses taught, etc.): Basic data to capture the overall workload of instructors.
  • Supervision of students: Teaching also takes place outside the classroom through formal mentoring and supervision of students (e.g., course advising for undergraduate students or research advising for graduate students).
Sources for student feedback

Students are deeply familiar with what actually happens in a course and have the most incentive to be honest. Their educational outcomes are the primary goals of teaching. However, they have limited perspectives on the quality of content and pedagogy or even how much they learned in a course.

  • Student feedback on teaching: We are in the process of replacing the current Instructor and Course Evaluation System (ICES) with new software (Blue) and a new survey (Feedback on the Learning Experience, FLEX). The new survey emphasizes that students are not in a position to evaluate teaching on their own, but they are in a position to provide feedback on how they experienced a course and their interactions with the instructor.
  • We are working to remove the requirement that longitudinal ICES ratings be included in promotion dossiers. Including longitudinal ICES ratings, separate from the rest of a teaching evaluation, makes it easier for the numbers to be misinterpreted or abused. ICES, and soon FLEX, ratings should be interpreted alongside other contextual factors such as peer feedback, instructor documentation, and the role the course plays in the curriculum.
  • Recommended reading for interpreting and using student feedback (Paper in Studies in Educational Evaluation)
  • Evidence-based surveys: Evidence-based surveys are often research-based published surveys, but may also include peer-reviewed surveys. These surveys are used to track progress toward some identified goal of the instructor. For example, an instructor might use a motivation survey from the research literature to track whether changes in their course policies and pedagogy are increasing students’ intrinsic motivation to learn and decreasing their extrinsic motivation to earn a specific grade.
  • Learning outcomes tracking: Instructors may wish to improve students’ attainment of specific learning objectives and collect data to document the success (or not) of their efforts.
Constructing a teaching portfolio (with examples)

The teaching portfolio should include data from all three sources from multiple time points. Units have flexibility to decide what should be in the portfolio and how the portfolio is constructed (e.g., should everything be aggregated into a single document or a Box folder?).

  • Examples coming soon
Unit evaluation/review

Unit evaluations should triangulate information from multiple sources. When multiple sources of data agree, stronger claims can be made. For example, if a peer observation of teaching describes excellent in-class delivery and student ratings on delivery are similarly high, then units may confidently conclude that an instructor provides instruction that is well-delivered. However, when sources disagree, units may need to weigh some data more heavily than others. For example, if students provide feedback that a course’s material is not relevant to their careers but peer review suggests that the content is appropriate, a unit may discount the student feedback, though it may also want to explore with the instructor why students don’t perceive the relevance of the content.

  • Annual reviews should prioritize feedback or evaluation depending on the stage of an instructor’s career. For example, a unit may focus on feedback in years 1 and 2 but focus on evaluation during the third-year review and when an instructor is being considered for promotion.
  • Units do not need to make the third-year or promotion review purely evaluative. Units can explore mechanisms to provide feedback to instructors after the evaluation is conducted.
  • While we recommend that units use the same process for teaching feedback and evaluation for instructors regardless of rank or track, it may be appropriate for units to have different levels of expectations for performance for different ranks or tracks. For example, units may have higher expectations of quality of teaching for associate-level faculty than for assistant-level faculty. 

Frequently Asked Questions and Responses (Q&R)

We call this a Q&R and not a Q&A because we expect that some responses may create more questions or may not address every nuance of the questions. We hope that these responses provoke more dialogue and conversation rather than acting as definitive, prescriptive answers.

Why are we changing teaching evaluations?

We need more frequent data collection and more robust evaluations of teaching to improve teaching and make fair and equitable decisions. Teaching is a central part of our mission as a university. Without robust teaching evaluations, we signal that we do not value teaching as an institution. Without richly informed feedback on their teaching, we limit the ability of our faculty to improve their teaching. Likewise, it is inappropriate to make high-stakes decisions about promotion or tenure based on poor or one-off evaluations.

Do feedback processes count toward the expectations of annual evaluations for assistant professors or every other year evaluations for associate professors?

Yes! Feedback is essential for helping faculty meet the criteria of teaching that is reflective/evolving. We are working to update the language in the Provost Communications to align with the guidance provided on this website.

Who can conduct peer feedback that goes into a teaching portfolio?

While promotion dossier documents need to be written by a faculty member at a higher rank than the faculty member being evaluated, feedback can be conducted by any peer (see the “Primarily feedback” example in the table above). Peers also do not need to be from the same unit as the faculty member being evaluated (e.g., they could be faculty from related departments, faculty from peer institutions, or members of professional/industry advisory boards).

What are some ways to make these new teaching feedback and evaluations sustainable?

Two core principles of peer feedback (it should sometimes focus more on feedback than evaluation, and it should include more than peer observations) are driven by this question.

If our new peer feedback processes need to happen more regularly, they need to provide more value. Focusing on formative feedback can provide more value both to the faculty being reviewed and to the faculty doing the review. By lowering the stakes of peer feedback at times, all faculty in the process can be open to learning and growing as instructors, improving both the quality of future evaluations and the quality of teaching on campus. Peer feedback does not need to be intensive. Peer observations are the most time-intensive option for peer feedback, so units should explore the many less time-intensive options, some of which units may already be doing (e.g., reviewing new course proposals).

Will teaching evaluations really matter in promotion and tenure cases?

Our current teaching evaluations already impact promotion and tenure cases. Making our teaching evaluations more robust is an essential first step to making them matter even more.

It seems that Provost Communications (PC) 25 and 26 have different expectations than PC 9, or are even inconsistent across tracks. Do units need to have different processes for specialized and tenure-track faculty?

We are planning updates to the Provost Communications to align with the guidance on this website. The Provost Communications all reference the campus definition for teaching excellence as the primary guide for teaching feedback and evaluations. Therefore, units can create a single core process for teaching feedback and evaluation for all faculty, regardless of track. Units may want to vary how often they collect data for different tracks or tweak the expectations for excellence for different tracks.

Can associate professors receive feedback and evaluation more often than every other year?

Yes. Every other year is a minimum recommendation. We encourage units to provide feedback and evaluations for their faculty as often as is sustainable.

What about teaching feedback and evaluations for full professors?

The Provost Communications are focused on promotion and tenure processes, so they are necessarily silent on teaching feedback and evaluations for full professors. Everyone can continue to learn from teaching feedback and evaluations, so units are encouraged to continue providing teaching feedback and evaluations for full professors. For example, feedback and evaluation can support development for leadership roles and awards.

What’s being done about student feedback on teaching?

The current Instructor/Course Evaluation System (ICES) is still in place for now, but we expect to replace the software and the questions by Fall 2025. We expect the new survey and system to align the questions with the campus-wide definition of teaching excellence and to clarify that students are providing feedback on teaching, not evaluating it.

Contact