We recently hosted a webinar centered on a question many health professions programs eventually face: why doesn’t feedback consistently lead to meaningful improvement?
What made the discussion especially valuable was that it did not focus on conducting more assessment or collecting more data. Most programs are already doing both. Instead, the conversation focused on how the design of assessment and feedback processes shapes what gets observed, what gets communicated, and how learners interpret and use that information.
Observation Drives Everything
One of the clearest takeaways from the session was that the quality of feedback is closely tied to the quality of observation. When observations are vague or incomplete, feedback tends to follow the same pattern.
This is not simply a matter of faculty effort. It is also influenced by how assessment tools are designed. Studies examining workplace-based assessment forms have shown that even small changes to prompts can significantly affect the specificity and usefulness of feedback (French & Pien, 2021). When prompts focus on observable behaviors connected to real tasks, faculty are more likely to document meaningful details rather than general impressions.
In practice, that means improving feedback often starts with making it easier to capture what actually happened during an encounter.
Why Feedback Doesn’t Always Land
Even when observations are stronger, feedback does not always lead to change. Much of the challenge lies in how feedback is structured and how learners interpret it.
In many cases, feedback is too general to act on, disconnected from any specific encounter, or lacking the direction learners need to improve. Timing also matters: feedback delivered long after the experience is harder to apply. These patterns are well documented in the literature on feedback quality and learner engagement, which consistently points to the importance of specificity and usability (Chakroun et al., 2022).
There is also a more subtle issue at play. Learners need a clear sense of what strong performance looks like in order to interpret feedback accurately. Without that reference point, feedback can be misunderstood; in some cases, learners may overestimate their performance even when the feedback was intended to be developmental. Research on feedback literacy reinforces this point, emphasizing that learners must be able to interpret and use feedback effectively, not simply receive it (Carless & Boud, 2018).
What Makes Feedback More Usable
When feedback does support improvement, it usually shares a few common characteristics. It is grounded in a specific observation, focused on a clear priority, and paired with concrete next steps that can be applied in a future encounter. Just as importantly, it helps the learner understand how their performance aligns with expectations.
These elements are not especially complex, but they do require intentional design. Without them, feedback often remains informational rather than actionable. With them, it becomes much easier for learners to understand what needs to change and how to approach that change.
From Individual Comments to Meaningful Patterns
Individual feedback moments matter, but they rarely tell the whole story on their own. Learners receive feedback across multiple encounters, and it is the accumulation of those observations that begins to reveal meaningful patterns.
Research on programmatic assessment has consistently emphasized the importance of looking across multiple data points rather than relying on single evaluations (Schut et al., 2021; Heeneman et al., 2021). Repeated themes in narrative feedback, consistent strengths, and variation across contexts all contribute to a fuller understanding of learner progress.
The challenge, of course, is that these patterns are not always easy to see. When feedback is reviewed in isolation, it becomes much harder to identify trends, especially as the volume of information grows.
Making Data More Visible and Useful
This is where process and technology begin to intersect. Identifying patterns often requires looking across multiple observations, noticing recurring themes, and comparing performance over time and across settings.
Tools such as Elentra Analytics, along with other reporting platforms, can help surface those patterns more clearly by organizing and visualizing data in ways that support interpretation. But the goal is not to add more data to the system. Most programs already have plenty. The goal is to make existing data easier to see, interpret, and use in decision-making.
When patterns become more visible, they can better support coaching conversations, guide priorities, and enable earlier intervention.
Bringing It Together
Across all of these ideas, a consistent theme emerged: improving assessment and feedback does not necessarily require large-scale change. In many cases, it comes down to a series of smaller, more intentional design decisions.
When it becomes easier to capture meaningful observations, feedback improves. When feedback is structured to support interpretation and action, learners are more likely to use it. And when patterns are visible over time, programs are better positioned to support learner development in a meaningful way.
At Elentra, this is a perspective we see reinforced again and again. The challenge for most programs is not a lack of assessment activity or available information. It is the difficulty of turning observations, narrative feedback, and performance data into something visible, connected, and usable. When the right information is easier to capture, interpret, and act on, feedback becomes more than a routine requirement. It becomes a stronger foundation for coaching, learner growth, and more informed educational decisions.
A Brief Recap
- Strong feedback begins with clear observation.
- Feedback is most useful when it is specific, focused, and actionable.
- Learners need a clear reference point to interpret feedback accurately.
- Patterns across multiple encounters provide the most meaningful insight.
- The goal is not more data, but more usable data.
Continue the Conversation
If you would like to explore these ideas further, you may request access to the full webinar recording.
We are also continuing the conversation in our member community, Elentra Connect, and we would love to hear how this resonates with your experience:
- Where do you see feedback breaking down most often in your program?
- What changes have had the biggest impact on making feedback more actionable?
- How are you identifying and using patterns in learner performance?
Elentra Members are invited to join the discussion on Elentra Connect.
If your program is working to make assessment and feedback more meaningful, Elentra can help. From capturing richer observations to making learner performance data easier to interpret and act on, we support institutions in turning assessment information into practical insight. To learn how we can help your program make feedback more usable, connected, and impactful, contact us today.
References
- Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
- Chakroun, M., Dion, V. R., Ouellet, K., Graillon, A., Désilets, V., Xhignesse, M., & St-Onge, C. (2022). Narrative assessments in higher education: A scoping review to identify evidence-based quality indicators. Academic Medicine, 97(11), 1699–1706. https://doi.org/10.1097/ACM.0000000000004755
- French, J. C., & Pien, L. C. (2021). A document analysis of nationally available faculty assessment forms of resident performance. Journal of Graduate Medical Education, 13(6), 833–840. https://doi.org/10.4300/JGME-D-21-00289.1
- Heeneman, S., de Jong, L. H., Dawson, L. J., Wilkinson, T. J., Ryan, A., Tait, G. R., et al. (2021). Ottawa 2020 consensus statement for programmatic assessment – 1. Agreement on the principles. Medical Teacher, 43(10), 1139–1148. https://doi.org/10.1080/0142159X.2021.1957088
- Schut, S., Maggio, L. A., Heeneman, S., van Tartwijk, J., van der Vleuten, C., & Driessen, E. (2021). Where the rubber meets the road—An integrative review of programmatic assessment in health care professions education. Perspectives on Medical Education, 10(1), 6–13. https://doi.org/10.1007/s40037-020-00625-w