B1 - Pedagogy & Assessment
Stephanie Chasteen, Independent
Joel Corbo, University of Colorado at Boulder
Robert Dalka, University of Maryland-College Park
Chandra Turpen, University of Maryland-College Park
Poster
The APS Effective Practices for Physics Programs (EP3) Project Team surveyed chairs at physics-degree-granting departments nationwide to assess their perspectives on assessment practices and departmental change. Chairs perceived the assessment of student learning and departmental program outcomes as largely driven by compliance and as not leading to valuable change. However, chairs' visions of ideal assessment practices and department change processes largely align with the guiding principles of the EP3 Project (https://ep3guide.org/get-started). Our results show that chairs' vision for an ideal department was largely misaligned with their current modes of operation. The cultural practices with the most potential for improvement were 1) engaging multiple stakeholders and 2) using data effectively. Chairs also perceived program review as a valuable practice, generally approached as an opportunity for improvement and resulting in positive change for the department. These results provide a baseline for current practices and attitudes in physics departments, as well as a baseline for measuring change following the release of the EP3 Guide (https://ep3guide.org).
David Devraj Kumar, Florida Atlantic University
Poster
The role of needs assessment in informed decisions about programmatic change in transforming undergraduate STEM education is explored in this presentation. Whether a change is made because there is a genuine need for it or merely for the sake of change is an important question that should be addressed when making decisions about transforming STEM education programs in higher education. Decisions to change an existing, successful program or to create new programs for the sake of change under external pressure and political trends are common on college and university campuses. For example, STEM programs developed with good intentions, but driven by a fatal attraction to available external funding, tend to fall away without adequate institutional buy-in once the funding cycle ends. Likewise, changes to STEM programs implemented by administrative decision rather than by curricular and programmatic need tend to disappear when a new administration enters the scene. In both scenarios, fiscal and human capital investments are lost because a change to an existing STEM program was made without an adequate needs assessment. Unfortunately, students left by the wayside are the major casualty of a poorly planned change on campus. How to avoid these situations and make lasting changes that benefit students in STEM programs is a very important question. It becomes even more significant at minority-serving institutions, where efforts to promote access, equity, and inclusion in STEM education are underway. In this context, it is worth exploring salient principles and practical methods of needs assessment, drawn from the needs assessment literature, that can help STEM education stakeholders make informed decisions about transforming undergraduate STEM programs. Implications for research and policy in transforming undergraduate STEM education will also be discussed.
Uma Swamy, Florida International University
Sonia Underwood, Florida International University
Poster
Florida International University (FIU) is a Hispanic-serving institution, and a majority of its students are commuters who work at least part time. Other challenges for students include large class sizes, heavy course loads, fear of chemistry, and unrealistic expectations about the time commitment required. Performance in introductory "gateway" college-level General Chemistry courses has a huge impact on student engagement, motivation, persistence, and progress to graduation. Most introductory college courses are designed as surveys of the discipline, "a mile wide and an inch deep." Students are typically expected to "learn" vast amounts of content but do not spend enough time making critical connections between concepts. Assessment in these courses usually consists of three exams and a final, supplemented by grades for participation and homework.
Students need opportunities to build crucial connections and place them in context, so that they can recall these ideas, especially in new contexts, and apply them to explain real-world phenomena. The COVID-19 pandemic and the switch to remote learning gave Drs. Underwood and Swamy a unique opportunity to explore alternative (and authentic) assessments in their large-enrollment General Chemistry 1 (CHM 1045) courses in Fall 2020. They reimagined learning in the remote environment: their team of trusted undergraduate Learning Assistants (LAs) worked with groups of students in breakout rooms to help them "tweak" their existing mental models, build crucial connections, and place them in context. They will share their ideas and reimagined assessments, including the replacement of three exams with six mastery checkpoints and a final semester project, which allowed them to build a novel learning experience for students with the help of their Learning Assistants in the remote environment without compromising content or rigor.
Karen Myhr, Wayne State University
Peter Hoffmann, Wayne State University
Sara Kacin, Wayne State University
Asli Ozgun-Koca, Wayne State University
Alisa Hutchinson, Wayne State University
Poster
Student Success Through Evidence-based Pedagogies (SSTEP) is a project aimed at helping STEM faculty implement evidence-based teaching practices in undergraduate courses at an urban research university. The goals were to:
- support faculty team-initiated two-year course development projects
- collect data on faculty teaching attitudes and practices
- develop an undergraduate STEM Learning Assistant program to support evidence-based practices in large lectures
- support training of graduate teaching assistants (GTAs)
- engage local community colleges.

We framed our project on the four quadrants of the institutional change model of Henderson, Beach, and Finkelstein (2010).
We found that most of the elements of our project spanned more than one quadrant.
Including a two-year pilot, we are in the eighth year of the project and are focusing on sustainability and institutionalization. We transitioned from large-budget faculty fellows projects to a faculty learning community model with more modest costs and a larger focus on community building and peer observations. For the undergraduate STEM Learning Assistant program, we developed more diverse funding and gained stability through a pedagogy course that offers service-learning credit instead of pay. The curriculum is also being used for new GTA training in the Physics and Astronomy Department. Our community of staff and faculty focused on GTA training shared resources, helped update university GTA training, and supported each other as we transitioned to remote and hybrid lab classes. Finally, progress in engaging the local community colleges has been limited but successful: it demonstrated the efficacy of reaching out in areas where we have local capacity and a tight curricular connection with the community college. Indirect institutional outcomes included development of a STEM Commons in association with the STEM LA program and close work with an expansion of the Office for Teaching and Learning that occurred during this project.
Linden Higgins, University of Vermont and State Agricultural College
Maya Sobel, University of Vermont and State Agricultural College
Poster
Students enter college science and math programs with disparate and unequal high school experiences. They vary not only in foundational knowledge but also in the metacognitive behaviors related to success: self-regulation, persistence, and self-efficacy (NAS 2017). Self-regulation and persistence can manifest in the strategies students adopt when preparing for exams. The majority of students rely on re-reading (Karpicke, Butler & Roediger 2009), which has been demonstrated to be less effective than active learning strategies such as forced recall (e.g., Smith, Floerke & Thomas 2016). However, college STEM faculty rarely make time to guide students' development of more appropriate study strategies. Faculty do commonly employ post-exam wrappers to encourage student reflection, but their impact on metacognition is controversial (e.g., Gezer-Templeton et al. 2017 versus Soicher & Gurung 2016). We enhanced mid-semester exam wrappers by requiring readings of brief online blogs about learning. Our evaluation question is: does exposing students to information about the efficacy of different study strategies change their habits?
Students completed online surveys as wrappers for five exams, with the exposure to research presented on the Learning Scientists website occurring with the third and fourth wrappers. The second and fifth wrappers collected pre/post-exposure data about study strategies, asking students to rank-order their study strategies and then reflect on their choices. A total of 79 of 83 students completed at least one wrapper, and 66 completed the fifth wrapper. Among those, 57 completed both the pre- and post-exposure wrappers and 55 completed one or both of the Learning Scientists wrappers. Preliminary analysis indicates a marked decline in the proportion of students who use re-reading as their primary or secondary study strategy (from 77% to 46%). Our poster contextualizes these results through reflections by the student co-author and qualitative analysis of students' reflections on their changing strategies.
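Because the same 57 students answered both the pre- and post-exposure wrappers, the decline in re-reading is naturally checked with a paired test such as McNemar's. The sketch below in Python is illustrative only, not the authors' analysis; the cell counts are hypothetical but chosen to reproduce the reported 77% to 46% shift among the 57 paired respondents.

```python
# Hypothetical paired pre/post analysis of the re-reading decline
# (illustrative sketch; not the study's actual data or code).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired responses (rows = pre-exposure, cols = post-exposure):
# counts of students ranking re-reading as a primary/secondary strategy.
#                 post: reread   post: no reread
table = np.array([[24,           20],    # pre: reread   (44/57 = 77%)
                  [2,            11]])   # pre: no reread
# Post-exposure re-reading: 24 + 2 = 26/57 = 46%.

# McNemar's test looks only at the discordant pairs (20 stopped, 2 started);
# a small p-value indicates a systematic shift away from re-reading.
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")
```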
Peter Ghazarian, Ashland University
Erik Kormos, Ashland University
Kendra Wisdom, Ashland University
Poster
Technological advancements have created new opportunities for simulated field experiences in higher education. These simulated field experiences allow individuals to work through the dilemmas of their chosen professional field within a controlled environment. Undergraduate students may benefit from this new type of practical exposure before entering their chosen careers. Higher education institutions need to be willing to test these new technologies to learn how they may complement existing modalities of practical instruction and address persistent challenges associated with traditional approaches. This presentation describes the results of a study of teacher candidates (n=56) before and after a full-semester program of Simschool simulated field experiences. The study assesses changes in participants' self-reported culturally relevant pedagogy (and its subscales: diverse cultural value, explicitness, self-regulation support, ethic of care, literacy teaching, behavior support, and pedagogical expertise) via Wilcoxon signed-rank tests. The findings suggest that simulated field experiences show promise for fostering the development of teacher candidates' culturally relevant pedagogy, particularly in contexts with limited access to culturally diverse traditional field experiences. The broader application of simulated field experiences and their potential benefits to undergraduate students across disciplines are considered.
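For readers unfamiliar with the method, a Wilcoxon signed-rank test compares paired pre/post scores without assuming normality, which suits ordinal self-report scales like these subscales. A minimal Python sketch follows; the subscale choice and score values are hypothetical, not the study's data.

```python
# Illustrative paired pre/post comparison for one self-report subscale
# using a Wilcoxon signed-rank test (hypothetical values, not study data).
from scipy.stats import wilcoxon

# One pre- and one post-semester score per teacher candidate for a single
# subscale (e.g., "ethic of care"), on a 1-5 self-report scale.
pre  = [3.2, 2.8, 3.5, 3.0, 3.6, 2.9, 3.1, 3.4, 2.7, 3.3]
post = [3.6, 3.1, 3.7, 3.4, 3.5, 3.3, 3.5, 3.6, 3.0, 3.4]

# The test ranks the absolute paired differences; a small p-value suggests
# a systematic pre-to-post shift rather than chance variation.
stat, p = wilcoxon(pre, post)
print(f"W = {stat:.1f}, p = {p:.3f}")
```

In the study itself this comparison would be repeated for the overall scale and each of the seven subscales across the n=56 candidates.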
B2 - Teaching Evaluation
Shawn Simonson, Boise State University
Megan Frary, Boise State University
Brittnee Earl, Boise State University
Julia Boderick, Boise State University
Poster
Current teaching evaluation procedures in higher education are problematic. Most institutional practices are inadequate, incomplete, and inaccurate, and they neither improve teaching directly nor incentivize teaching improvement. Because of insufficient evaluation and reward structures, faculty may be hesitant to change, may not be motivated to reflect on their teaching, may be unaware of the need to change their teaching practice, or may not know how to effect meaningful change. In this project, we have established a framework defining four criteria of effective teaching (Course design, Scholarly teaching, Learner-centeredness, and Reflective teaching) and developed a rubric addressing the multiple facets of each criterion that is flexible enough to accommodate different approaches, modes, and environments. We are using both a bottom-up approach, working with individual instructors and departments, and a top-down approach, consulting with the Faculty Senate and Provost's office to change university policy. The framework, rubric, draft policy, and successes and failures to date will be presented. We will also explore concerns around a Center for Teaching and Learning supporting changes in the evaluation of teaching while still providing a safe place for faculty to take risks and explore new practices. In addition, lessons learned about the support structures and resources necessary for such efforts to occur at scale will be presented.
Jean Hertzberg, University of Colorado at Boulder
Daniel Knight, University of Colorado at Boulder
Cynthia Hampton, University of Colorado at Boulder
Sarah Andrews, University of Colorado at Boulder
Poster
The Teaching Quality Framework Committee of the CU Boulder Mechanical Engineering Department has been working since 2017 on tools that can be used to assess teaching in promotion and tenure cases, focusing on 'low-hanging fruit' that can be implemented within existing department processes. This poster will present synopses of the resources we've developed:
- Comprehensive teaching statement guidelines.
- Peer observation protocol and guidelines for use.
- Guidelines for soliciting recommendation letters from students.
- Expectations and guidelines for faculty-faculty and faculty-student/postdoc mentoring.
- Other examples of student voice, including class interviews and concept inventories.
All of these have been developed from extensive literature surveys, and evidence-based methods have been incorporated wherever possible. We will also report on departmental discussions of these resources and on implementation plans.
Lorraine Cordeiro, University of Massachusetts-Amherst
Claire Norton, University of Massachusetts-Amherst
Heather Wemhoener, University of Massachusetts-Amherst
Elena Carbone, University of Massachusetts-Amherst
Poster
The Department of Nutrition at UMass Amherst launched several initiatives to improve our approach to faculty teaching evaluations. Traditionally, standardized student course and instructor evaluations (SRTI) formed the basis of faculty teaching evaluations at UMass, with limited understanding on personnel committees of how to use SRTIs effectively in the assessment process. The first initiative involved piloting peer faculty reviews for three faculty going up for promotion. Faculty could opt to include these peer reviews in their dossiers for personnel action. For this pilot, two TEval team members drafted review criteria, which were presented to faculty for feedback. Peers then observed the faculty being reviewed and assessed them according to the criteria that had been developed. Faculty involved in this process found it so helpful that they recommended all faculty both review and be reviewed in the future. The second initiative involved a comprehensive review of all in-person courses; a parallel review of online courses is currently underway. Preliminary results indicate gaps in meeting required university components, course objectives, and diversity statements. This information will be used to guide faculty in improving their syllabi and increasing accessibility for students. The third initiative was to develop a department Self-Study to understand historical trends and systemic concerns and to propose revisions to the department's strategic plan and By-Laws. A draft of the self-study protocol was completed and shared with faculty and with university-wide ADVANCE fellows. The department will finalize the self-study and begin to collect data in the coming year.
Teresa Foley, University of Colorado at Boulder
Poster
In 2017, Integrative Physiology (IPHY) faculty agreed to participate in the Teaching Quality Framework (TQF) initiative at the University of Colorado Boulder. The TQF initiative facilitates department and campus-wide efforts to implement scholarly approaches to teaching evaluation. It is an opt-in model, with departments choosing to engage with new ways of assessing teaching for merit evaluations and reappointment, promotion, and tenure reviews.
In general, the goals of the departmental TQF teams are as follows:
- To define teaching excellence based on six categories of scholarly activity
- To develop a framework for teaching evaluation and standards and processes for use
- To use multiple data sources of assessment including the perspective of the faculty member being assessed, their students, and their peers
- To leverage the collective efforts of multiple departments to create a shared pool of resources
- To improve undergraduate education by providing faculty members with feedback and support to become better teachers
For four years, IPHY faculty met monthly with facilitators from TQF Central to develop tools and processes for teaching evaluation. To help faculty reflect on their own teaching, the IPHY TQF team developed a guide for writing a strong teaching statement that describes the instructor's thoughts on teaching and learning, how they teach, and why they teach that way. The team also developed a student letter writing guide with suggestions on how to provide constructive feedback on an instructor's teaching and mentoring. To incorporate peer feedback into the evaluation process, the team developed a detailed peer observation protocol that assesses the environment, structure, content, and implementation of an instructor's lesson.
Our future goals are to create an overarching teaching effectiveness rubric that distinguishes among levels of accomplishment and explains how instructors can achieve each level, and to continue developing tools to evaluate and enhance teaching excellence in IPHY.
Karen Moeller, University of Kansas Main Campus
Crystal Burkhardt, University of Kansas Main Campus
Cambrey Nguyen, University of Kansas Main Campus
Brittany Melton, University of Kansas Main Campus
Poster
Evaluation of teaching effectiveness in higher education has traditionally relied heavily on student evaluations, especially in annual reviews and for promotion and tenure. However, research has shown that student evaluations fail to capture overall teaching outcomes and contain high levels of bias. Numerous other materials are available to help evaluate teaching effectiveness beyond student evaluations, such as peer reviews, philosophy statements, examples of student work, and assessment data. The goal of our project is to describe our part in changing the culture of our department through the use of a multidimensional rubric to assess teaching effectiveness as part of the TEval project, a collaborative project of the Bay View Alliance funded by an NSF grant.
The Department of Pharmacy Practice joined the TEval project in January 2020 with the goal of improving the department's existing peer review system for evaluating teaching practices in order to promote effective and equitable educational methods. In addition to adapting the TEval framework to align with the department's structure, a pilot of two peer-triad groups was developed to assess the feasibility and utility of the framework. Following the success of the pilot program, we will roll out the framework to the entire department in the summer of 2021. We will first describe the development of the peer-triad groups and the adaptation of the TEval framework. Second, we will discuss methods for department-wide formation of peer triads, the role of each member in a triad, and the suggested documentation required. Last, we will review our department's goals for integrating these materials (e.g., self-reflections, peer evaluations) into faculty development and into promotion packets, with or without tenure.
Tracey A. LaPierre, University of Kansas Main Campus
Lisa-Marie Wright, University of Kansas Main Campus
Poster
This poster describes the work done by KU Sociology as part of the TEval project, an interdisciplinary, inter-institutional, NSF-funded project designed to improve undergraduate education by engendering processes that encourage, document, and recognize effective and equitable educational practices in higher education. A key focus has been implementing a common research-based framework (the Benchmarks Rubric) to holistically define, document, and evaluate effective teaching. This framework covers seven key areas of teaching: 1) Course goals, content, and alignment; 2) Teaching practices; 3) Achievement of learning outcomes; 4) Classroom climate and student perceptions; 5) Reflection and iterative growth; 6) Mentoring and advising; and 7) Involvement in teaching, service, scholarship, or community. Over the past 3½ years, Sociology has adapted the Benchmarks Rubric for 1) tenured and tenure-track faculty; 2) graduate student teaching assistants; and 3) teaching professors and multi-term lecturers. The poster will describe the process and the adaptations made for each targeted group; successes and failures in using the adapted rubrics for annual evaluations, promotion and tenure, and post-tenure reviews; and progress in developing a culture of effective teaching and shared expectations for teaching. Barriers to change among tenured faculty included concerns that using the rubric would take too much time; that it would create more busywork related to documentation and assessment without actually improving teaching; that those who took it seriously would not be rewarded; and an overall lack of incentives to excel at teaching at a research-intensive university. Facilitators of change included buy-in from key members of the department; working through most of the department individually or in small groups before holding larger department discussions; aligning messaging about incentives for excellent teaching with current changes in higher education; identifying non-monetary incentives; and developing streamlined materials and instructions for using the rubric.
Sarah Andrews, University of Colorado at Boulder
Alanna Pawlak, University of Colorado at Boulder
Kristin Oliver, University of Colorado at Boulder
Jessica Keating, University of Colorado at Boulder
Cynthia Hampton, University of Colorado at Boulder
Mark Gammon, University of Colorado at Boulder
Noah Finkelstein, University of Colorado at Boulder
Poster
The Teaching Quality Framework Initiative (TQF) is an ongoing action research project at the University of Colorado Boulder (part of the multi-institutional, NSF-funded TEval project) that engages multiple stakeholders in transforming teaching evaluation across campus. We approach this transformation primarily through facilitated departmental working groups combined with cross-departmental and cross-campus stakeholder engagement. This approach is grounded in scholarship on higher education, on institutional and organizational change in academia, and on teaching and learning. Predicted outcomes of the TQF Initiative span multiple scales and include (but are not limited to):
- Departments that form TQF working groups will adopt and implement better teaching evaluation tools and practices
- These tools and practices will be aligned with a common scholarly framework (the TQF rubric), more explicitly attend to equity and inclusion, and rely on multiple measures from three voices (self, peer, student)
- As better tools and practices are developed and implemented within departments, they will be taken up by other units across campus
- This work will help departments/the university establish improved, transparent guidelines for defining quality teaching and how to recognize and evaluate it for merit, reappointment, tenure, and promotion
- This work will enhance the visibility and value of teaching campus-wide.
To assess whether these outcomes are being met, we are gathering and analyzing data to systematically evaluate the effectiveness of the TQF approach to transforming teaching evaluation. These data include interviews with TQF project leads and departmental working group participants, pre/post comparisons of evaluation tools and systems, and meeting notes and artifacts. In this presentation we will discuss the forms of data we collect to assess impact and how they are represented, share these representations with others engaged in similar transformation efforts, and share preliminary findings from our data analyses.
Sandhya Krishnan, University of Georgia
Paula Lemons, University of Georgia
Tessa Andrews, University of Georgia
Poster
Departments that recognize, encourage, and reward effective teaching can help faculty prioritize continuous teaching improvement. Yet many STEM departments lack robust teaching evaluation practices. Teaching evaluation may be limited to end-of-course evaluations and haphazard peer observation, producing inadequate evidence to support and reward teaching improvement.
Based on a need in local STEM departments, we developed and refined guides that departments can use to develop cohesive practices for robust, equitable, and sustainable teaching evaluation. These guides address departmental practices for three voices that contribute to teaching evaluation: peers, students, and instructors themselves.
At the core of each guide is a set of Target Practices, which specify department-level teaching evaluation practices and processes. We developed the Target Practices based on scholarly literature, practices that have proven useful across institutions, and key principles of evaluation. Target Practices guide departments to work toward teaching evaluation that is: (i) structured to minimize bias, including formalized processes, training and support for enactment, and collective decision-making; (ii) reliable, including multiple sources of meaningful and trustworthy evidence; and (iii) longitudinal, in order to document change over time and provide feedback to instructors.
We encourage departments to see the Target Practices as long-term goals. Based on observations and interviews, we developed accompanying resources. Each guide includes a description of common departmental approaches to teaching evaluation, which helps departments quickly characterize their current status. Each guide also includes a Target Practice self-assessment for planning and documenting change. Perhaps most importantly, each guide has a "quick-start" page that suggests starting places that have proven fruitful for other departments, "bundles" that highlight how work on one Target Practice can be leveraged to achieve others, and links to resources. This presentation will introduce the guides and their potential as tools for departments and change researchers.