
By Day 3

Post a blog post that includes:

  • An explanation of the role of supervision in your field education experience
  • A description of your field instructor's leadership style (HE IS ORGANIZED, KNOWLEDGEABLE, AND A PERFECTIONIST) and an explanation of whether the leadership style will promote your agency learning agreement during your field education experience


The Clinical Supervisor

ISSN: 0732-5223 (Print) 1545-231X (Online) Journal homepage: https://www.tandfonline.com/loi/wcsu20

When Values Collide: Field Instructors' Experiences of Providing Feedback and Evaluating Competence

Marion Bogo MSW, Adv. Dip. SW, Cheryl Regehr PhD, Roxanne Power MSW & Glenn Regehr PhD

To cite this article: Marion Bogo MSW, Adv. Dip. SW, Cheryl Regehr PhD, Roxanne Power MSW & Glenn Regehr PhD (2007) When Values Collide, The Clinical Supervisor, 26:1-2, 99-117, DOI: 10.1300/J001v26n01_08

To link to this article: https://doi.org/10.1300/J001v26n01_08

Published online: 08 Sep 2008.


When Values Collide: Field Instructors' Experiences of Providing Feedback and Evaluating Competence

Marion Bogo Cheryl Regehr Roxanne Power Glenn Regehr

ABSTRACT. This paper reports on an analysis of qualitative data accrued across four research studies that addressed the experiences of field instructors in evaluating students and providing corrective feedback when necessary. Findings suggest that while tools for field evaluation are increasingly attempting to provide standardized, objective, and "impartial" measures of performance, these evaluations nevertheless occur within a professional and relational context that may undermine their value. As social workers, field instructors are guided by the professional values of respecting diversity, focusing on strengths and empowerment, advocating for vulnerable individuals, and valuing relationships as avenues for growth and change. By placing field instructors in a gatekeeping role, the university requires them to advocate for particular normative standards of professional behavior and to record a negative evaluation for a student who fails to achieve or adhere to these normative standards. Such activities can be in direct conflict with social workers' personal and professional values, thereby creating a disquieting paradox for the field instructor. Models of student evaluation must consider the influence of this conflict on the field instructor's ability to fulfill the role of professional gatekeeper and must find new ways of addressing the problematic student. doi:10.1300/J001v26n01_08 [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <[email protected]> Website: <http://www.HaworthPress.com> © 2007 by The Haworth Press, Inc. All rights reserved.]

Marion Bogo, MSW, Adv. Dip. SW, is Professor, Faculty of Social Work, University of Toronto, 246 Bloor Street West, Toronto, Ontario, Canada, M5S 1A1 (E-mail: [email protected]).

Cheryl Regehr, PhD, is Professor, Sandra Rotman Chair, Faculty of Social Work, University of Toronto.

Roxanne Power, MSW, is Senior Lecturer, Faculty of Social Work, University of Toronto.

Glenn Regehr, PhD, is Professor, Richard and Elizabeth Currie Chair in Health Professions Education Research, Faculty of Medicine, University of Toronto.

Glenn Regehr, PhD, is supported as the Richard and Elizabeth Currie Chair in Health Professions Education Research.

This study was funded by a grant from the Social Sciences and Humanities Research Council of Canada.

The Clinical Supervisor, Vol. 26(1/2) 2007. Available online at http://cs.haworthpress.com

© 2007 by The Haworth Press, Inc. All rights reserved. doi:10.1300/J001v26n01_08

KEYWORDS. Field instruction, educational assessment, evaluation, supervision

The accurate assessment of competence is of vital concern to all the professional disciplines. The ability to reliably and validly differentiate between those students who possess the knowledge, skills, and judgment necessary for safe and effective practice and those who do not is central to the critical role that is expected of university-based professional programs as gatekeepers of their respective professions. Social work educational programs delegate a major portion of the responsibility for evaluating students' practice competence to field instructors in the practicum. While faculty field liaisons are involved to a greater or lesser extent in this process (Bennett & Coe, 1998), the primary responsibility for providing corrective feedback and for assessing students' practice competence rests largely with community-based social workers in their role as field instructors. When students have strong social work skills, or when students are active and open learners who quickly integrate corrective feedback in a positive manner, the process of providing feedback can be rewarding for field instructors as they participate in the generative activity of teaching and preparing the next generation of social workers (Bogo & Power, 1992; Globerman & Bogo, 2003). But what happens when the student is not able to develop the skills and competencies necessary to be a competent practitioner and displays many deficits? How does a field instructor respond to situations in which the pleasure of generativity is transformed into the responsibility for gatekeeping?


In an attempt to better understand the factors that contribute to field instructors’ ability to communicate with students about their level of performance, this article examines dynamics and issues in instruction and evaluation. The data for this analysis were drawn from a series of studies on conceptualizing and assessing student competence where the topics of working with students who present problematic behaviors, providing corrective feedback, and generating summative evaluations of students were explored directly or arose spontaneously.

PREVIOUS RESEARCH

Students presenting with attitudes and behaviors inconsistent with social work have frequently been raised as a concern for educators. In one line of inquiry addressing this issue, researchers have attempted to determine whether it is possible to identify, prior to admission, students who may not possess the characteristics required to attain competence in social work. Pelech, Stalker, Regehr, and Jacobs (1999), for example, examined the predictive validity of admission criteria in identifying potentially unsuitable students. Analyzing quantitative data from admission files, they determined that students who were later identified as problematic were on average older than other students, were more likely to be male, had lower grade point averages, and had more social service experience. These findings were consistent with some earlier findings regarding the value of age and gender as predictors of difficulty in student social workers (Cunningham, 1982; Duder & Aronson, 1978; Pfouts & Henley, 1977). However, these demographically based findings were considered to be of limited use to admissions decision makers in establishing screening criteria, and therefore a further content analysis was conducted of personal statements prepared at the candidacy stage. When compared to other students, issues identified in the statements of students later recognized as problematic included a focus on personal histories of abuse, injustice, or neglect, and plans to work with others with similar experiences (Regehr, Stalker, Jacobs, & Pelech, 2001). To the extent that these predictive markers manifest as professional challenges for the student when entering the practicum setting, they are likely to be difficult issues for field instructors to address with students.

Other research has focused on the evaluation of students and the identification of those with inadequate skill levels or those who do not possess the characteristics that would render them suitable for social work. Social work educators have sought to articulate outcome objectives and related criteria for assessing student field learning and practice competence and to develop reliable and valid measures of field performance (Bogo, Regehr, Hughes, Power, & Globerman, 2002; Dore, Morrison, Epstein, & Herrerias, 1992; Koroloff & Rhyne, 1989; O'Hare & Collins, 1997; Reid, Bailey-Dempsey, & Viggiana, 1996; Vourlekis, Bembry, Hall, & Rosenblum, 1996). Despite the movement to increasingly standardized measures, clinical performance evaluation remains a complex process that is further complicated by the social and relational issues involved in a mentoring relationship (Lazar & Mosek, 1993). Yet, in all these efforts to "improve the scales," researchers and developers have left the responsibility for both evaluation and communicating negative evaluations primarily to the field instructor.

If a student is identified as potentially unsuitable for the profession, the issue of termination takes precedence. Research by several authors in this area has identified the absence of policies and procedures in schools of social work for terminating students for reasons such as professional unsuitability (Cobb & Jordan, 1989; Koerin & Miller, 1995), and others have addressed the legal issues associated with such termination, including the framework provided by the Americans with Disabilities Act (Cole & Lewis, 1993; Gillis & Lewis, 2004). From this work, it is clear that whether one is finding mechanisms to avoid termination of a student by correcting problematic behavior or whether one is preparing the way for termination, feedback must be provided to the student regarding performance deficits, and explicit expectations for change must be enunciated. Again, given that problematic behavior is most likely to be manifested and detected in the field placement, it becomes the challenging responsibility of the field instructor to enact many of these underspecified and ill-supported, but legally necessary, corrective actions.

Even for students who are not in potential difficulty, however, the importance of providing meaningful corrective feedback has been a consistent theme in the social work education literature. Munson (2002), for example, cautioned against giving only positive feedback, observing that in general, social workers (not unlike others perhaps) dislike giving or receiving criticism. Freeman (1985) provided guidelines for giving balanced feedback that is systematic, timely, clear, and invites dialogue. Kadushin (1992) offered similar guidelines and observed that workers' and students' performance failures need attention from their supervisors. Noting that despite social workers' valuing of feedback they find it difficult to give when it is more negative in nature, Abbott and Lyter (1998) surveyed students and field instructors about their perceptions of giving and receiving criticism. They found that criticism was experienced as helpful when part of a positive, trusting student and field instructor relationship. Criticism was seen as harmful when delivered in a demeaning or harsh manner. More than a quarter of the respondents opposed criticism that was not balanced with positive comments, and some highlighted the importance of the student being prepared for receiving criticism. Without such preparation, criticism was thought to be responsible for damaging self-esteem and self-confidence, decreasing motivation for learning social work practice, and impeding growth.

While there has been little empirical investigation of field instructors' experience of their role as evaluators, anecdotal evidence from field coordinators notes that field instructors find this aspect of their role "most worrisome" (Pease, 1988, p. 35). Gitterman and Gitterman (1979) found that field instructors experienced defining criteria, writing the formal document, assessing student practice, and engaging the student in the evaluation process as stressful. In a study on the supervision of workers, Kadushin (1985) found that supervisors disliked evaluation as it reinforced the power differential with supervisees, and that a negative evaluation evoked anger and upset the balance in the relationship.

Similar concerns about evaluation and gatekeeping are expressed by supervisors in related human service fields. In the field of clinical psychology, researchers have acknowledged the presence of conceptual articles regarding the evaluation of competence and the responsibilities of internship supervisors for gatekeeping, but they have also noted that there is little empirical work related to these concepts (Gizara & Forrest, 2004). Problematic student behaviors and trainee impairment have been studied in incidence studies and in surveys that document the perceptions of training directors regarding the scope of the problem and ways of addressing it (Vacha-Haase, Davenport, & Kerewsky, 2004). Gizara and Forrest (2004) studied the experiences of 12 supervisors who had worked with students with serious competence problems and argued that further qualitative studies of this type would provide information that could be helpful to other supervisors. They found that supervisors perceived the process of evaluation, especially when trainees were not achieving expected levels of competence, as complex, challenging, and difficult. Issues that affected supervisors' experience included lack of adequate preparation in their own training for the evaluation role in supervision, the degree of support they received from colleagues in their agencies, and the negative personal and emotional impact on the supervisor.


In a study of clinical supervisors in medicine and surgery, Dudek, Marks, and Regehr (2005) explored supervisors' perspectives on evaluating poorly performing medical students and/or residents. They found that these supervisors felt they were able to identify poor performance but were often reluctant to report it for a number of reasons. Factors included their failure to have documented the poor performance previously and their lack of clarity about what to document as supporting evidence for their judgment. Concerns about the potential of an appeal, its impact on their own credibility with their colleagues, and whether there would be faculty support for their decision also had an impact. In addition, the perceived lack of remediation opportunities for the trainee affected their decisions.

In summary, across the human services professions, studies about gatekeeping at the admissions level have provided some data regarding potential difficulty that some students might experience in field practice with populations whose experiences are similar to the students' own. In addition, studies on termination have described the processes required to remove unsuitable students and highlight the need for explicit feedback regarding unacceptable performance. However, these studies have not fully illuminated the issues and challenges faced by field instructors in their day-to-day interactions with students presenting with problematic behaviors. Similarly, despite anecdotal reports from field coordinators regarding the central role that field instructors play as educators and evaluators, and despite the apparent responsibility these roles engender for the field instructors as the frontline gatekeepers of the profession, there is little empirical evidence available regarding how field instructors enact these crucial gatekeeping roles: how they evaluate students and provide feedback when performance does not meet expected standards.

METHOD

As part of a program of research on conceptualizing and measuring students' practice competence, a number of studies were conducted (see the following for a complete discussion of the methodology of each study: Bogo et al., 2002, 2004, 2006; Regehr, Bogo, Regehr, & Power, 2007). A range of research methodologies was used across the studies, including scaling student behaviors, sorting vignettes, focus groups, and in-depth interviews. In an attempt to understand the challenges field instructors experience when teaching and assessing for competence, the researchers pooled qualitative data from these four studies. The data relevant to this elaborated reanalysis were elicited when various aspects of evaluation were specifically investigated or when instructors' experiences were offered spontaneously during discussions of evaluation scales. These relevant qualitative data from across the four studies were compiled for the current analysis. The methodologies of the four studies and the resulting qualitative data sets from which the relevant data were extracted are described as follows.

Study 1: In-depth interviews were held with 19 experienced field instructors who were asked to provide descriptions of exemplary, average, and problematic students they had taught in the field practicum (Bogo et al., 2006). Spontaneous comments made by field instructors during these interviews about educating and evaluating students with problematic behaviors were subjected to grounded theory data analysis. An iterative process involved the research team in reviewing the open coding reports, engaging in selective coding, and developing a theoretical understanding, which was grounded in the themes that emerged.

Study 2: From the 57 descriptions of students collected in the study above, 20 realistic student vignettes were created to represent a range of student competence. Ten experienced field instructors were asked, first independently and then in one of two small groups, to divide these vignettes into as many categories as they felt necessary to reflect various levels of student performance. Two recorders (one for each group) captured the content and process of the groups as members discussed their rationale for ranking students (Bogo et al., 2004). Spontaneous comments made by field instructors about what they imagined it would be like to teach and to evaluate these fictitious students were used for this analysis.

Study 3: A Practice-Based Evaluation Tool, grounded in the concepts and language used by field instructors during the first two studies, was created. This tool consisted of six dimensions of competence described in detail along five levels of student competence. Forty-three experienced field instructors were asked to recall their most recent student and to evaluate the student first using the school's current competency-based tool, then using the new practice-based evaluation tool (Regehr et al., in review). Following completion of the tools, focus groups of approximately 10 instructors each were held, where participants were asked for their opinions about the two tools, about giving feedback to students (especially negative feedback), and about evaluation of student competence in general. One recorder captured the content of the discussions. Following the focus groups, the recorder and group facilitator reviewed the written notes to check for accuracy and comprehensiveness. These notes were subjected to grounded theory analysis following the procedure described above.

Study 4: The 20 realistic student vignettes, as described in Study 2, were provided to 28 experienced instructors who were asked to recall their most recent student and select the vignettes that were "most similar" to their student. They were then asked to rate the same student using both the practice-based evaluation tool and the school's current competency-based evaluation tool described above (Regehr et al., in review). Following completion of the evaluations, focus groups ranging from 6 to 10 participants were held where participants' opinions about the various evaluation methods were elicited and recorded. As well, participants discussed their experiences of giving feedback to students, especially negative feedback, and their views on evaluation of student competence in general. Methods of recording, data checking, and analysis were the same as described in Study 3.

In summary, 100 field instructors participated in these studies with 19 instructors providing data in individual interviews and 81 instructors providing data in 9 focus groups of 5 to 10 participants.

The researchers reviewed the relevant data from these four studies and, in the analysis, used a grounded theory iterative approach building on the themes emerging from each study. Each successive study provided an opportunity to challenge the team's interpretations through engagement with groups of field instructors who had not participated in the earlier phases of the research program, in order to assess transferability and confirmability (Cresswell, 1998; Erlandson, Harris, Skipper, & Allen, 1993).

FINDINGS

Discussions with the field instructors across these four studies revealed six recurrent and interconnected themes. Each of these themes will be discussed separately in the following sections, and a model of how these various considerations combine to represent field instructors' experiences and constructions of giving negative feedback will be offered in the discussion section.

Posture Towards Evaluation

Evaluating students presents a range of issues for social work field instructors. When field instructors in these studies were expected to evaluate their students' performance and rank or categorize it on a continuum, they reported conflict between the need to determine skill levels and their deeply held professional values, such as being nonjudgmental, using a strengths perspective, individualizing the person one is working with, and understanding behaviors in context. While they acknowledged that as social workers they must make judgments "in the real world of practice," the role of facilitating learning is far more appealing to them than the role of judging student performance.

Field instructors in this group of studies were asked to provide feedback on a series of tools for evaluation. Given their commitment to being nonjudgmental and focusing on strengths, field instructors were very sensitive to the language used in various evaluation tools, preferring what they perceived as the neutral and specific behaviors found on competency-based inventories. They were critical of tools that used what they perceived to be value-laden terms (such as unfocused, authoritarian with clients, inflexible regarding intervention planning) or referred to personal qualities (initiative, warmth, sensitivity), despite their acknowledgment that these factors were often more important dimensions of practice than some of the concrete behavioral skills. They wanted to "individualize the student" and preferred tools that provided a framework and a means for them to "describe the attributes and process of learning and development" of the particular student rather than tools that required them to grade, rank, categorize, or rate students. They also recognized the time challenge, with one participant stating explicitly, "Whatever evaluation tool we use we will complain about the time it takes, even though evaluation is important."

Student Response to Feedback and Evaluation

Giving feedback is not a problem for instructors when the student responds in a thoughtful manner or accepts it, works with it, and uses it in subsequent work with clients. Instructors spoke about their gratification when they "could see the student using the feedback in the next interview." Giving feedback becomes difficult, however, when the student does not accept it. The instructors described a range of student reactions including arguing, becoming defensive, attacking the instructor's teaching style, and becoming silent and avoidant. Three types of circumstances were identified by instructors as limiting students' acceptance of feedback: (1) where students had difficulty understanding the role of social work and the nature of practice and hence could not accurately assess their behaviors or skills; (2) where students had worked before entering the educational program, believed themselves to be competent, and were not open to a new view of their skill level; and (3) where students' personality style was such that problematic behaviors were a pervasive part of their interactions with clients, colleagues from related professions, or both.

When students did not use the instructors' feedback productively, the focus in the practicum changed from developing practice competence to concerns about the possibility of the student failing. Some students became fearful and cautious, and their struggles in learning were exacerbated. A downward and deteriorating cycle ensued, with negative feedback producing more anxiety and concerns for students, which in turn interfered with their ability to learn and progress.

The Relationship as a Context for Feedback and Evaluation

Social work field instructors in these studies discussed giving both positive and negative feedback to students as similar to giving feedback to clients. They highlighted the importance of the relationship as the context where feedback and information are provided that could produce growth, development, or change: "You have to be open and honest from the beginning and not shy away from correcting behavior and skills. In establishing an open and honest relationship you earn the right to give open and honest feedback." They underscored the importance of giving feedback in a nonjudgmental way in practice and when working with students: for example, "I try not to only be critical but ask how could you have done better?"

Using social work values and adult education principles, the instructors encouraged student participation and collaboration in all aspects of field education, including setting learning objectives and evaluating learning. When expected to provide a numerical ranking for students on a rating scale, however, they reported that students pressured them for rankings at the high end of the scale. Interpersonal dynamics, differences in interpreting the meaning of the numbers on the scale, and time constraints left instructors feeling burdened and pressured to provide higher ratings.

As a consequence of the intensity in the dyadic tutorial model of social work field instruction, the instructors commented on how giving feedback, especially negative feedback, is difficult. Hence, as one instructor stated while other focus group participants nodded in unison, "giving negative feedback to the student is so difficult . . . it feels so personal." They noted this was especially so when aspects of the student's personality or personal style were at issue, for example, relationship abilities or degree of initiative in learning and practice. In these situations, students frequently were reported to have difficulty accepting feedback. Field instructors reflected that when feedback was not accepted, not only was learning and change impeded but also an acrimonious process developed in the relationship with the student. Field instructors used strong terms to describe the atmosphere in their subsequent sessions and in the relationship, such as "becoming tense," "very heavy, intense," "emotional," and "like me against the student."

The Practicum Setting as an Influence on Feedback and Evaluation

The social work practicum generally takes place in organizations where students join instructors on multi-professional teams. Instructors reported being caught between organizational needs and students' needs. On the one hand, they needed to preserve longstanding inter-professional relationships and the organization's positive perceptions of social workers' contributions. These perceptions were challenged when colleagues were critical of problematic student behaviors and impatient with the instructors, perceiving them as inappropriately defending the student. On the other hand, instructors wanted to be fair and ensure the student had every opportunity to learn and progress. A time-consuming balancing act ensued: "I had to spend inordinate amounts of time managing the fall-out from the student's behavior in the setting."

Similar to the instructors' concern about individualizing students' approaches to learning and progress was their perception that evaluation takes place within the context of a particular organizational setting. They were concerned that dynamics in their setting affected opportunities for student learning. Even though the instructors might rate a student highly, they were concerned that their rating would be interpreted to mean that the student could function in other settings, a prediction they were not comfortable making.

The Responsibility of the School of Social Work

A general theme emerged that can best be labeled "Where is the school?" While faculty field liaisons were praised on an individual basis as supportive and involved, instructors voiced concern about the school supporting their judgments.
