
Australian Clinical Legal Education

8

Clinical assessment of students’ work

Introduction

Australian clinics only occasionally spend time discussing the issue of student assessment. Clinical program leaders too often appear to be preoccupied with other, more pressing challenges: finding suitable clinicians, dealing with academic colleagues’ misgivings about the cost of clinics or their pedagogical legitimacy and, especially, simply finding the time to look outside their own law school and reflect on ‘what could be’, as opposed to ‘what is’. But when student assessment comes up in conversation or at conferences, the issues are seen to be significant and always in need of further thought.

Clinical assessment takes place against the background of general student assessment of law courses and, in this larger agenda, there is an unfortunate focus on competition as opposed to collaboration. That focus drives other debates, such as whether to grade students’ performance in some law schools, episodic law school pressure to apply moderating algorithms to clinical results, and the quest for ever more precise descriptors of varying clinical performance levels. Although not all law schools are determined to apply a ‘grading curve’ (which operates to smooth out students’ results to fit a predetermined expectation of high, medium and poor academic performance), clinicians are predictably resistant to that concept when it rears its head. On the other hand, pressure for better grade definition and better methods of self-assessment of performance is not contentious at all among law schools with grading regimes and, for those clinics where grading is in place, there is every reason to continually refine and improve those regimes.1

Until recently, Australian law schools had no national set of agreed learning outcomes with which to measure their students’ performance in any area of values, legal knowledge or skills, let alone those that are more specific to clinical legal education. The 2010 arrival of the Threshold Learning Outcomes (TLOs) for Law2 rectified this omission. It is now feasible and cost-effective for clinicians to confidently assert learning outcomes for their programs that are consistent with the TLOs, and to design assessment indicators that relate closely to those outcomes. What is particularly interesting is the active-voice language used by the TLOs: they ask legal educators to define learning outcomes in terms such as ‘demonstrate’ and ‘be able to’, phrases that are well suited to the day-to-day scrutiny that supervisors bring to students’ activities inside clinics. This qualitative language also sensibly allows for the possibility of grading while avoiding any insistence on metric measurement of ‘demonstration’ or ‘ability’.

The architecture set by the Australian Qualifications Framework (AQF),3 which requires of LLB graduates a Level 7 achievement of ‘broad and coherent knowledge and skills for professional work’,4 is also essentially consistent with many current assessment practices in Australian clinics, as we discuss later in this chapter.

This chapter begins with a short discussion of the results of the regional reporting process in the Best Practices project described in Chapter 1. Our research shows that while assessment practices are quite diverse around Australia, with some being very sophisticated, much clinical assessment tends to be basic, intuitive and generalised rather than developed systematically from the learning objectives of the particular course. There is little explicit pedagogy in assessment, and too few law schools have internally coherent assessment routines for their clinical courses. The strongest divergence of opinion occurred in relation to whether to grade students’ performance beyond pass/fail, and we discuss the arguments for and against both approaches. Our research led to a series of recommended best practices for clinical assessment. We discuss the main themes of these best practices below. They lead into a wider discussion of the international and Australian literature about assessment practices for different types of clinics. The chapter concludes with a discussion of several underlying themes that emerge from our consideration of this important aspect of clinical method.

Australian clinicians’ views on assessment: The contributions of our survey to best practices

In considering the debates about clinical assessment and their proper place in developing the best practices, it was important to survey what happens in Australian clinical programs and what Australian clinicians think about assessment issues. As the following section makes clear, there is little consistency or reflection on assessment pedagogy, and even less awareness of that gap.

All of the outcomes contained in the TLOs5 are well suited to various types of clinical experience and live client clinics can achieve all of them, a reality that only a few highly innovative law schools have fully exploited.6 However, survey respondents did not explicitly refer to these high-level outcomes, even though many were likely to be achieving some or all of them in practice.

Respondents were asked for their opinions about seven discrete areas involving assessment of students: levels of sensitivity to clients and communication; ethics and ethics awareness; intellectual grasp of substantive law/practical implementation; drafting, negotiation and advocacy skills; self-organisational ability; sociolegal awareness; and, finally, their comprehension of law reform processes.

The edited responses7 are instructive chiefly because they show respondents’ fairly limited ability to articulate learning outcomes, as opposed to describing techniques and approaches to the mechanics of assessment. In quite a number of areas, respondent clinicians across the country said they did not attempt assessment in the designated area of enquiry. Clinical components of doctrinal courses are not listed in the table; they were identified for assessment purposes only in relation to sensitivity to client communication and ethics awareness, and for both criteria assessment occurred only through student reflection, for example, in a reflective journal.

Some minor differences were observable between regions. However, very often, clinicians appear to rely on their intuition in deciding if a student is achieving in a particular area and do not think it necessary to articulate the basis on which they exercised that intuition. These clinicians may, of course, have had explicit internal criteria for measuring different areas of achievement, but they did not see the need to be too precise in their responses. Only in a few cases were possible assessment standards articulated in a way that showed an awareness of the need to measure something according to expressed criteria, even though the survey questions asked for details of both techniques used and opinions as to appropriate assessment standards.

Approaches to assessment criteria in different types of clinical experience

Edited responses from all regions

Clinical programs in all regions of Australia are likely, in varying degrees, to be addressing quite appropriate learning outcomes and attempting to conscientiously assess their achievement or otherwise. However, most clinician respondents did not say that they recognised the critical need to directly connect their own assessment regime to those learning outcomes. On the contrary, respondents identified a range of disparate practices that they thought were relevant to measuring different learning outcomes.

For example, in relation to assessing students’ perceived levels of sensitivity to clients and the effectiveness of their client communication within in-house clinics, respondents referred to a diverse group of techniques and concepts:

establish relationship with students and through this assess against standards of the supervisor

[observe] the way students talk to clients

[use] reflective journaling

[note their] instinctive reaction

[note] the way the student communicates with supervisor – rely on the supervisor to pick up on that

if you cannot see the student in with the client you rely on how they are talking about them

we assess their reflection [on the clinical process only – see Chapter 7 of this book] and what they learn from it themselves, which is probably more valuable

[a] teacher can have a view about how to deal with clients but students’ views could be equally valid

[conduct] grid and case conferences.8

In an externship context, respondents had a very different and perhaps less sophisticated set of approaches to the same assessment need, as reflected in these comments:

[consider] feedback from solicitors

sometimes informed by client feedback

[the] supervisor rates the students’ communication skills

[the] academic supervisor does not assess these qualities. A way to do so would be to measure whether a student listens to the client, [noting] whether they responded to the client’s questions, [and] whether they showed empathy.9

The general state of awareness of the need for explicit assessment criteria for each identified learning outcome appears highly variable, ranging from the sophisticated, in highly organised clinics, to minimalist or non-existent in those with less history, less funding and fewer connections to the law school. The following main themes of Best Practices relating to clinical assessment are a part of the remedy for those deficiencies.

Preliminary statement

Clinical legal education courses offered by law schools can and should be assessed. This can be done in many ways including, where appropriate: overall clinic performance and performance of specific tasks within particular clients’ cases; essays on points of law arising in clinic cases; reflective journals; the quality of court advocacy on behalf of clients; observation of students’ performance in common simulated scenarios based on prior cases; the quality of law reform submissions; and vivas based on the content of much of this previously submitted work.10 Clinics can support students to achieve deep and active learning through the timely provision of feedback to them. Clinical assessment is most helpful when provided in a constructive manner, in close proximity to the actions of the students.

After considering the results of our survey, we determined that best practice requires the alignment of assessment tasks with identified learning outcomes, and the use of both formative (developmental) and summative (concluding) assessment. Considering the strength of views on the issue, we also concluded that assessment could be conducted on either a graded or a pass/fail basis, provided that, whichever approach is adopted, detailed summative and written feedback is given. We also thought it important to avoid standardising algorithms and to ensure that final mark moderation occurs through peer supervisor discussion.

Literature on assessment purposes and techniques

Our consideration of the link between best practice and what actually happens in clinics has also been influenced by the writing of many scholars in Australia and overseas. Their views, summarised in the following section, have allowed us to be confident that recommended best practices for assessment are internally consistent, pedagogically sound, and reasonably achievable.

In this section, we discuss scholarship dealing with several important issues in clinical assessment. Scholars’ general concerns around assessment issues are numerous and varied. The best place to begin, as we discussed in Chapter 4, is with the impact of Stuckey’s proper insistence on aligning assessment with learning outcomes. This is followed by the debate about grading versus pass/fail, then by discussion of the pressure to standardise clinical assessment, formative and summative assessment in clinics, and how and what to assess in clinical performance.

Each of these issues has implications for best practice, although not all are on law schools’ ‘must discuss’ list.

A well-established United States clinician, Anthony Amsterdam, sums up the distinction between ‘conventional’ or academic teaching and clinical teaching in this way: ‘The academic teacher seeks to enrich understanding of the general by deriving abstract principles from the particular; the clinician seeks to enrich understanding of the general by refining a capacity to discern the full context of the particular.’11

This distinction is commendable, although today many academic teachers and clinicians would say they use both approaches. Clinicians often help their students to generalise from their clients’ cases, just as conventional teachers increasingly look for and provide a ‘real world’ context in explaining particular principles. But for both conventional teachers and clinicians, refining students’ capacity to discern ‘the full context’ is no small task, particularly when it comes to assessing the depth of their understanding of the real world. Clinical teaching may well make it easier to investigate more ‘depth’ in issues and cases than is possible in classrooms with case reports, but the assessment of that depth of knowledge, in all the colour and shade of context, is complex. Though many have reflected on assessment, our research shows that clinicians themselves are uncertain about what can be clinically assessed, how best to do it and, in particular, whether graded assessments are legitimate in a clinical setting. There are few who can cut through that haze, but one of those is Roy Stuckey.

Best practices in United States legal education: Grading beyond pass/fail

Although Stuckey has focused on the United States’ approach to legal education,12 his often critical observations are highly relevant to other systems of legal education.

Stuckey is dismissive of grading in the context of first-year United States students’ law courses,13 and is also clear about the deficiencies of much clinical grading in the United States:

In many in-house clinics and externships, grades are based mostly on the subjective opinion of one teacher who supervises the students’ work. Grades in these courses tend to reflect an appraisal of students’ overall performance as lawyers, not necessarily what they learned or how their abilities developed during the course. When written criteria are given to students, they tend to be checklists that cover the entire spectrum of lawyering activities without any descriptions of different levels of proficiency.

Virtually no experiential education courses give written tests or otherwise try to find out if students are acquiring the knowledge and understandings that the courses purport to teach. Items that could be clearly subjected to more objective testing include students’ understanding of theories of practice or particular aspects of law, procedure, ethics and professionalism. A student’s understanding of many aspects of law practice as well as their lifelong learning skills could also be assessed, for example, by asking them to analyze recordings or transcripts of lawyers’ performances. Serious efforts to assess student learning in experiential learning courses are not being made on any large scale.14

In Australian clinics, it is common for students to have several supervisors for different aspects of their clinical experience. So it is not possible to apply United States practice to Australia uncritically, but the warning about one-dimensional and limited assessment practices is still relevant. Discussion about the dimensions of assessment can become very energetic. In Australia there is little contest around whether to assess students at all (decided, it seems by default, in the affirmative), but there has been a longstanding debate (and divergent practice) about whether students’ performance should be graded beyond the initial classifications of fail or pass. Simon Rice has written perhaps the most impassioned and articulate article in Australian clinical legal education, in which he has argued for pass/fail grades only,15 and that approach continues at the clinical courses run through Kingsford Legal Centre and some of the other clinical courses in the University of New South Wales (UNSW) Faculty of Law. But, to date, only a few other law schools have followed this course of action and, with the passage of time and the demands of students for competitive advantage over their peers, it is unlikely that pass/fail assessment will gain the allegiance of a majority of law schools. Clinicians need support for their programs from their more ‘conventional’ teaching colleagues, and a decision to move to pass/fail assessment might risk that support.

Rice’s views are, however, influential, and led us to take an open position on the merits of grading clinical performance. He regards assessment as important, but not grades:

On recognising effort, teachers will often want to acknowledge a student’s efforts, or to confirm a student’s lack of effort, and would feel frustrated if not able to. This does not, of itself, lead to a subject being graded. Grading is only one, and not a necessary, means of a teacher’s expressing encouragement or concern. Grading is a simple and simplistic mechanism. I suspect that it is attractive to teachers precisely because it is unspecific and impersonal.16

Rice does not deny the importance that students themselves place on grading,17 but considers that clinics offer their own attraction and students do not require grades in order to enrol in a clinic.18 He asserts that what is needed and is sufficient in clinical assessment is a calculation as to whether student awareness has been achieved or not. If students reach adequate awareness, then they should pass:

my learning objective of the study of justice, for which I choose clinical method … This learning objective, I concede, can be measured. In fact it might usefully be measured if the goal is a student’s attainment of an awareness they did previously not have. What might be best is a before and after snapshot of understandings and awareness, to confirm the occurrence of change, and hence the achievement of the learning goal. But that is to assess, not to grade.19

The difficulty in this formulation is that students’ awareness is layered, multifaceted and context-rich. It may be possible to say that a student has reached sufficient awareness to pass a clinical course, but that judgment does not rule out the existence of lower and higher levels of awareness. There is debate in some areas of experiential legal education as to whether it is possible to go beyond pass/fail assessment. For example, in relation to the acquisition of skills, practical legal training (PLT) providers have commented that pass/fail is all that can be asserted. Their argument is that, in a PLT environment at least, ‘you can’t grade practical training at say 85% because the 15% is the risk zone and you can’t advise clients with a specified % of risk. The advice is either competent or not – [practising lawyers] must service the client and the client’s needs’.20

It must be emphasised, however, that sufficiency and insufficiency are also, logically understood, themselves grades. To assess anything as adequate (a pass) or inadequate (a fail) involves determining one of two grades. And if it is necessary to make that choice, then is this not grading?

But this issue of students’ awareness or confusion is Rice’s derivative, not primary, point. He is very clear, echoing Stuckey and many other legal educators, that learning outcomes must be reflected in assessment—and his preferred primary clinical learning outcome is the achievement of a level of awareness of justice and injustice.21 From that perspective, Rice is making an argument for assessment-without-grading since, if clinical legal education is concerned to focus on justice and process (‘from instrumentation to empowerment’), then it must avoid a vision of law and lawyering—including competitive grades—that still dominates conventional classroom instruction.22

However, it is difficult to see anything offensive in recognising that awareness involves shades of grey rather than only black or white. The real world of justice and injustice is not one of black and white—grey is everywhere. For example, it is difficult to point out to a student exactly how their comprehension of the justice process is mixed or in what precise way their understanding of the effect of poverty on client recidivism is patchy, but these are common situations where students’ awareness may be adequate but not superior. Setting out where improvements are desirable is a formative responsibility of clinical supervision and it is perhaps a bit churlish not to recognise, at the end of the semester, students’ differing progress towards higher states of awareness.

Rice makes another important point: it should be enough for adequate clinical achievement that a student is in effect ‘on the move’, since when can any of us be said to have sufficiently ‘arrived’?

The language of the learning goal is of process, not result, of moving, not of having arrived. The goal is not the attainment of a measurable degree of knowledge of theories of justice, it is the students’ [degree of] internalising of the fact of power, their sense that they are becoming a part of a system whose currency is power, their awareness of their place in law, and their potential as lawyers.23

In this quotation, the bracketed ‘degree of’ has been added to pose the question of quantum. Rice is content to say that such ‘internalising’ is a binary state. In practice, it is doubtful whether students can identify such a neat state, though all experienced clinicians are required to decide if they think that sufficient student internalising has occurred. The point is that, if capable of deciding whether an initial ‘pass grade’ of internalising has been achieved, clinicians can go on to decide if deeper internalising has also occurred, and award higher grades. Put this way, goals can address both process and results. And if that is possible, and appropriate propositional criteria are developed that reflect learning outcomes accommodating deeper levels of awareness or internalising, then why should that not be fostered?

Fundamentally, Rice is not convinced that clinicians should grade beyond pass/fail, even if they can do so at a technical level, because the necessary level of supervisor intrusion is essentially immoral:

Grading cannot respect the internal and personal nature of the learning we are bringing to the students. The clinical experience makes demands of their emotional intelligence and they will respond to it in different ways and to different degrees. Because there is difference does not mean the difference should be measured. It is simply difference. It is not better or worse.24

There may be no satisfactory answer to this charge of intrusion and, if so, group agnosticism on the merit of grading beyond pass/fail is appropriate. But clinical supervisors within the one clinic must adopt the same approach and, ideally, they will support that approach intellectually and emotionally.

International practice is also relevant. Hyams has surveyed such practices and ultimately supports grading beyond pass/fail:

It has also been argued that clinics are intended to be safe environments for students to experiment, satisfy curiosity and explore their own values, assumptions and motivations. [citation omitted] Grading students may interfere with the non-judgmental environment, [citation omitted] inhibiting students’ desire to explore and test themselves for fear of ‘getting it wrong’ and consequently losing marks. Further, it may be an additional source of stress and preoccupation for students in an already stressful environment. [citation omitted]

Alternatively, grading may have the opposite effect on students – it can have a motivational effect and lead to a higher level of professionalism. Grades also provide the opportunity to acknowledge the time, effort and labour that students contribute to their clinical work. Finally, there is always the ‘external’ issue of the academic credibility of the clinic. Grading makes a statement to both the students and the faculty that clinic has as much academic rigour as other ‘black letter law’ units and students will be subjected to the same exacting regime as their other units of study. [citation omitted]

Brustin and Chavkin’s rigorous investigation led them to conclude that there are ‘tangible benefits’ to grade students in clinical courses which, they believed, may improve the pedagogical process and augment service delivery to clients. [citation omitted]25

Since other academics tend to have a simplistic and sometimes sceptical (perhaps cynical) view of clinic assessment, defensible assessment has become an important symbol of clinic credibility within the wider law school. That political dimension ought not to be forgotten.

As Stuckey reminds us, however, the decision as to whether higher grades are appropriate must, in the end, come back to a clinic’s agreed learning outcomes.26 And he is notable for his insistence on defining outcomes well in advance of any student commencement in a clinic. This pre-definition task includes being very clear about the minutiae of the criteria to be used to measure adequate and higher levels of achievement, not just to limit the potential for vaguely defined grading but fundamentally to make self-learning possible and empowering:

We can improve the quality of our assessments by following the approach used in other disciplines of developing and disclosing criteria-referenced assessments. Criteria-referenced assessments rely on detailed, explicit criteria that identify the abilities students should be demonstrating (for example, applying and distinguishing cases) and the bases on which the instructor will distinguish among excellent, good, competent, or incompetent performances [citation omitted] … The use of criteria minimizes the risk of unreliability in assigning grades.27

Stuckey might prefer that clinical assessment were pass/fail only, and his arguments make it clear that this is not just because the dominant objective of United States legal education is to prepare students for a career in private legal practice where formal grades of law school achievement become professionally less important than word-of-mouth reputation. But he realises that grading is what happens in law schools and proposes ways and means to improve its reliability and validity:

The use of clear criteria helps students understand what is expected of them as well as why they receive the grades they receive. Even more importantly, it increases the reliability of the teacher’s assessment by tethering the assessment to explicit criteria rather than the instructor’s gestalt sense of the correct answer or performance. The criteria should be explained to students long before the students undergo an assessment. This enhances learning and encourages students to become reflective, empowered, self-regulated learners.28

Formative and summative assessment in clinics

Much is now made of the distinction between formative and summative assessment in all education. Legal education is no exception. But in clinical contexts the distinction may be less important to the extent that formative and summative assessment can blend into each other, except for the purpose of developing detailed criteria for assessment. A United Kingdom legal educator observes that:

The difference between formative and summative assessment is often an area of concern for law teachers. The essence of formative assessment is that undertaking the assessment constitutes a learning experience in its own right. Writing an essay or undertaking a class presentation, for example, can be valuable formative activities as a means of enhancing substantive knowledge as well as for developing research, communication, intellectual and organisational skills. Formative assessment is not often included in the formal grading of work, and indeed many believe that it should not be.

In contrast, summative assessment is not traditionally regarded as having any intrinsic learning value. It is usually undertaken at the end of a period of learning in order to generate a grade that reflects the student’s performance. The traditional unseen end of module examination is often presented as a typical form of summative assessment.29

Clinical assessment ‘events’ tend to be more diverse and more frequent than assessments in conventional law teaching. In live client clinics they range from the fairly mechanical examination of file maintenance standards (that is, the degree to which client instructions are comprehensibly and accurately recorded, the comprehensiveness, legibility and detail of file notes, the evidence of relevant legal research, the grammatical quality of letters, briefs and written advocacy), to more specific measures associated with the quality of client interviewing and representation (for example, client and fellow supervisor feedback, observation of test interviews, observations of interpersonal skills, portfolios of written case reports and the outcomes of hearings) and, finally (as we discussed in more detail in Chapter 7), to supervisors’ overall judgments about the quality of the process of students’ self-reflection in their learning journals. As we strongly emphasised in Chapter 7, assessment of reflection for this limited purpose is justified. Less tangible qualities, such as clinic attendance, participation, improvement and effort, are also important in these final judgments. Each of these categories of assessment should be considered for both formative and summative purposes.30

In externships, some or all of these criteria are also available and, in simulated clinical experiences, it is also possible to standardise formative assessments with strictly comparable case scenarios and narrowly defined instructions to students as to expected performances.

In most cases, clinicians wish to assess both students’ developmental learning process and the work they actually create, but it is important for all the above reasons to be clear about the distinction. Different measures can be better for different objectives: journals are popular for assessing learning development, and case outcomes obviously allow some judgments about overall performance, but the interaction between these methods is also instructive and can allow assessment of the capacity to reflect. For example, a student who obtains a reduced penalty in a lower court criminal case by declining to remind a magistrate of a known prior conviction might well claim a ‘successful’ outcome, but if their journal entry on the same case shows no reflection on the implicit deception of the court process (that is, they do not appear to consider whether it is appropriate or justified to rely on the silence of the prosecution in such cases), then it might be considered that their understanding of the reflection process is itself underdeveloped. Comparing the apparent insights of different assessment approaches improves the definition and precision of each individual measure.

Student formation can be assessed through some forms of feedback, provided clinicians are clear with students in advance as to what learning outcomes are at stake and how they will be assessed.31 Feedback is discussed carefully in Chapter 6 in relation to supervision, but it is important to recognise that it cannot realistically be offered or accepted for assessment purposes unless these outcomes are clear to everyone at the start of a clinical experience.

Similarly, feedback works best as an assessment tool when accompanied and supported by students undertaking a variety of self-assessment exercises,32 because self-assessment often allows both supervisor and student to quickly get to the heart of persistent gaps between desired outcomes and actual achievements. These exercises are most useful when they contain detailed opportunities not only to discuss a particular case file outcome, but also to talk about how the result was achieved (for example, the process used to research the law in relation to that case, as well as the case result).33

After grading

Pressure to standardise clinical assessment

It is now common practice in university assessment regimes to standardise results so that the relative performance of a particular cohort of students can be distributed along a curve that rises and then falls (a ‘bell curve’, or a set of grading bands), with relatively few ‘fails’, a considerable number of modest ‘passes’ and ‘credits’ (the peak of the bell, or the most populated bands) and relatively few very high grades. The standardising process is intended to smooth out anomalous high and low results that can be attributed to assessment error.

Standardising is achieved by applying an algorithm (an equation) to a set of results and modifying each result to a greater or lesser extent to fit the institutional expectation as to how many students in an average cohort should fail, pass, pass very well and achieve distinction.
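To make the mechanics concrete, the following short Python sketch illustrates the kind of rescaling algorithm described above. It is a hypothetical illustration only: the z-score approach and the target mean and standard deviation are assumptions chosen for demonstration, not any particular law school’s documented moderation formula.

import statistics

def standardise(raw_marks, target_mean=65.0, target_sd=10.0):
    """Rescale a cohort's raw marks to fit an institutional expectation
    about the spread of results.

    Hypothetical sketch: the target mean/SD and the z-score method are
    illustrative assumptions, not a documented law school algorithm.
    """
    cohort_mean = statistics.mean(raw_marks)
    cohort_sd = statistics.stdev(raw_marks)
    # Every raw mark is shifted and stretched by the same formula so that
    # the cohort as a whole matches the target distribution.
    return [
        round(target_mean + target_sd * (mark - cohort_mean) / cohort_sd, 1)
        for mark in raw_marks
    ]

The essential point is that every student’s mark is moved by the same formula, regardless of how accurately any individual mark was originally assessed.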

The exact dimensions of each bell curve are very much the result of a policy decision by the law school and to that extent are artificial. However, they still represent well-intentioned attempts to limit inherent inaccuracies in conventional assessment of particular courses, particularly when that assessment is restricted to relatively few and crude measures of performance where teachers and students have comparatively little personal interaction.

On occasion, law schools can decide on a one-size-fits-all approach and, where the assessment regime is greater than pass/fail, apply general course algorithms to clinical courses. This is not a good idea, for several reasons. First, the algorithms applied to standardise assessment are commonly based on a mathematical premise that there will be a minimum number of students in each cohort, usually at least 50 and preferably many more. This is an application of the general statistical truth that the bigger the sample, the more reliable the analysis. If the cohort is too small, then the mathematics of the algorithm will demand too big an alteration in the marks of both very poorly performing and very strongly performing students. In other words, the ends of the bell curve will be distorted so that, instead of a bell shape, the gradient can tend to look much more like a rectangle, with the possibility of fewer fails and fewer high marks. Since most clinical courses tend to have many fewer than 50 students in any one cohort, an algorithm can result in unfair final assessment.
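Continuing the hypothetical sketch above, applying such a formula to a small clinical cohort with closely clustered, high raw marks shows how sharply forced standardisation can alter individual results; the marks below are invented purely for illustration, and the scale of the reductions foreshadows the raw-versus-standardised gap discussed below.

# Ten students with closely clustered, high raw marks, as is common where
# supervision is intensive and students are self-selecting (invented data).
raw = [70, 72, 74, 75, 76, 78, 80, 82, 85, 88]
print(standardise(raw))
# -> [51.1, 54.6, 58.0, 59.8, 61.5, 65.0, 68.5, 72.0, 77.2, 82.4]
# Most students lose more than 10 marks, and the lowest raw mark is dragged
# towards a typical fail threshold (assuming a pass mark of 50), even though
# every student may have met the clinic's own assessment criteria.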

Secondly, clinical method is a premier method of learning and teaching. It is intensive, with frequent one-on-one teacher/student interaction. Much clinical work by nature engages students’ hearts and minds in the problems of their clients, triggering a personal desire to perform. Typically, this personal element translates into significantly higher performance, and the wider community notices both the students and their law school. Law schools increasingly see good clinics as important for their overall reputation and expect that their students will achieve a great deal in a relatively short period of time. They invest in that expectation by providing a high staff–student ratio, and they expect their clinical students to work, and excel, in highly collaborative professional environments. In that preparation-for-work team culture, another essential bell curve premise (that of highly individualised performance) is misplaced. For this reason, a clinical bell curve embodies a contradiction.

Thirdly, whether or not the particular algorithm applied in a clinical course is unfair or misconceived, clinical students’ complaints about the substantial differences between their ‘raw’ mark and their lower standardised mark (differences that can amount to 10 to 12 marks) can quickly snowball into systemic criticisms of the course. Since clinical courses often contain highly motivated, self-selecting students who receive much personal and highly targeted formative assessment through close supervision, the opportunities for student improvement and performance success are substantial. In a real sense, clinics with close supervision and mentoring arrangements could be said to be engaged in continuing assessment. High raw marks are common, and no law school can easily justify a substantial reduction in marks while still being perceived by students as competent and caring about their experiences.

Fourthly, clinical courses—as is the case with many electives—are courses where students do better because they are choosing what to study. A bell curve does not recognise this.

Fifthly, and more fundamentally, clinical assessment is perhaps the most thorough and personal process that a law student will ever encounter. It is profoundly formative, personal and individual and contains no conceptual assessment gap requiring the generalised ‘rescue remedy’ of an algorithm. Clinical assessment tends to be accurate because, on average, each student is well known to their teachers.

Strengthening formation—recognising metacognition

While student reflection contributes to the wider concept of formation, the assessment of that formation requires some specific discussion, particularly if clinicians are to help students shift their focus during a clinical semester from producing a specific activity (for example, a brief to counsel, advocacy letter or written negotiation strategy) towards the process of their own current and future learning.

The objective here is to assess the degree or otherwise of students’ growing understanding of their most effective learning process—that is, their ‘metacognition’ of how they learn best now and how they will learn best once in paid employment. Essentially, this is a reflective activity (see Chapter 7). The understanding and embedding of metacognitive awareness is emphasised by both Stuckey’s Best Practices34 and the Carnegie Report35 as critically important to revitalising legal education in general. It is highly significant for legal education as a whole that clinical methods are tailor-made to achieve the best in students’ metacognition.36 Hyams observes that:

Self-reflection is a large part of the focus of clinical pedagogy in the US and is a key aspect of the teaching in various US clinics … The skill of self-reflection is often implicit in clinic work and is used by clinicians to assist students with their metacognitive abilities. By asking a student: ‘How would you go about finding the resolution to this dispute? What might be the appropriate approach?’ and ‘How would you do this differently next time?’, we are achieving a dual purpose: 1. modelling a lawyering practice which is careful and reflective, and 2. providing tools for improving metacognition (that is, problem solving) skills.37

Formative assessment is the best way, and possibly the only cost-effective way, to tackle that objective within clinics. Niedwiecki states it simply: ‘Essentially, the goal of formative assessment should be to move legal education away from a focus on an end product—a memorandum, motion, negotiation, oral argument, etc.—to the underlying process of developing these products.’38

Metacognition is not difficult to grasp. Many students instinctively understand what is meant by it once they have examples in front of them. It includes basic areas of self-knowledge, for example, knowing what sort of physical environment (quiet/noisy, light/dark, close to others/separate from others) is best for an individual lawyer when trying to comprehend new written material. It also covers more cerebral issues such as visual versus text-based learning preferences and knowing when to revise and how to self-test one’s own comprehension. Niedwiecki provides this description:

Essentially, metacognition is the ability to regulate and control one’s learning. There are many definitions of metacognition, but … put simply, it is the process of ‘thinking about thinking’ and the ability to self-regulate one’s learning with the goal of transferring learned skills to new situations. There are many metacognitive skills that everyone employs in the learning process: monitoring one’s reading comprehension, evaluating one’s process of learning, understanding the influence of outside stimuli on one’s learning, and knowing when one lacks motivation, just to name a few.

… Metacognition also can be described as the internal voice people hear when they are engaged in the learning process – the voice that will tell them what they have to do to accomplish a task, what they already know, what they do not know, how to match their previous learning to the new situation, when they do not understand what they are reading or learning, and how to evaluate their learning. It is this internal reflection and conscious control of the learning process that goes to the heart of metacognition.39

Conclusion

In the current stringent financial climate, conscious decisions to link each clinic’s assessment regime to its learning outcomes can only strengthen graduates’ experience and hence the clinic’s reputation. In most law schools it should not be difficult to strengthen and realign any clinical assessment regimes that do not approach best practice. But assessment of students’ performance and development is not the only dimension of clinical assessment.

It will also be necessary in time to assess clinical programs as a whole. Evans and Hyams have already jumped the boundary fence to some degree and made a case for periodic review and assessment of each clinic.40 The potential positive flow-on effect to the particular law school is already well established. Law schools’ investment in coming to grips with assessment pedagogy and then applying it consistently, not just to students’ efforts but to their entire programs, is therefore an investment in the reputation and viability of the law school itself.


1 Victoria Murray and Tamsin Nelson, ‘Assessment – Are Grade Descriptors The Way Forward?’ (2009) 14 International Journal of Clinical Legal Education 59. See also Ann Marie Cavazos, ‘The Journey Toward Excellence in Clinical Legal Education: Developing, Utilizing and Evaluating Methodologies for Determining and Assessing the Effectiveness of Student Learning Outcomes’ (2010–11) 40 Southwestern Law Review 1.

2 See Council of Australian Law Deans, Learning and Teaching Academic Standards Statement, December 2010, Threshold Learning Outcomes for the LLB degree, at perma.cc/BY6N-6SRF.

3 The Australian Qualifications Framework (AQF) is a broad, all-sector set of standards that all education and training providers are required to meet. See www.aqf.edu.au/.

4 See AQF qualification levels at perma.cc/8CWE-RF4Z.

5 See footnote 2.

6 For example, Newcastle University and the University of New South Wales (UNSW) in Australia, and Northumbria University at Newcastle-Upon-Tyne in the United Kingdom.

7 Full responses are available at www.monash.edu/law/about-us/legal/olt-project.

8 These techniques and concepts are edited and paraphrased from recorded responses to the regional surveys. See footnote 7.

9 See footnote 7.

10 See, generally, R Grimes and J Gibbons, ‘Assessing experiential learning – us, them and the others’ (2016) 23(1) International Journal of Clinical Legal Education 107–36.

11 AA Amsterdam, ‘Telling Stories and Stones about Them’ (1994) 1 Clinical Law Review 9, 39.

12 Roy Stuckey and others, Best Practices in Legal Education: A Vision and a Road Map (2007) Clinical Legal Education Association. Stuckey was the principal author, but not the only contributor to this influential work. See also Roy Stuckey, ‘Can We Assess What We Purport to Teach in Clinical Law Courses’ (2006) 9 International Journal of Clinical Legal Education 177 (cited hereafter as Stuckey (2006)).

13 Roy Stuckey and others, cited at footnote 12, 236.

14 Roy Stuckey and others, cited at footnote 12, 238–39.

15 Simon Rice, ‘Assessing – But Not Grading – Clinical Legal Education’ (2007) Working Paper No 2007–16, Macquarie University; available at SSRN: perma.cc/QR7X-7KQL.

16 Simon Rice, cited at footnote 15.

17 Stacy Brustin and David Chavkin, ‘Testing the Grades: Evaluating Grading Models in Clinical Legal Education’ (1997) 3 Clinical Law Review 299, 316; Simon Rice, cited at footnote 15, 2: ‘In 1991, in a Kingsford Legal Centre Student Survey almost 60% of graduates of the clinic preferred pass/fail to graded assessment.’

18 Simon Rice, cited at footnote 15, 2: ‘The Brustin and Chavkin research, at 313, showed that “the majority of students would have registered for clinic regardless of whether performance was graded on a pass fail basis”’, referring to Stacy Brustin and David Chavkin, cited at footnote 17, 312–13.

19 Simon Rice, cited at footnote 15, 9.

20 Comment made at ALTC Project Stakeholder Meeting, Melbourne, December 2011. However, PLT providers would concede that other areas of experiential legal education, e.g. reflective journalling, could be graded.

21 As required in Threshold Learning Outcomes 1(c) and 2(c). See Council of Australian Law Deans, Learning and Teaching Academic Standards Statement, December 2010, Threshold Learning Outcomes for the LLB degree, at perma.cc/BY6N-6SRF.

22 Simon Rice, cited at footnote 15, 7–8.

23 Simon Rice, cited at footnote 15, 10.

24 Simon Rice, cited at footnote 15, 13.

25 Ross Hyams, ‘Student assessment in the clinical environment – what can we learn from the US experience?’ (2006) 9 International Journal of Clinical Legal Education 77, 88.

26 Stuckey (2006), cited at footnote 12, 13, citing Judith Wegner, ‘Thinking Like a Lawyer About Law School Assessment’ (Draft 2003, 55; unpublished manuscript on file with Roy Stuckey).

27 Roy Stuckey and others, cited at footnote 12, 244.

28 Roy Stuckey and others, cited at footnote 12, 245. Stuckey refers to and approves of Sophie Sparrow, ‘Describing the Ball: Improve Teaching by Using Rubrics – Explicit Grading Criteria’ (2004) Michigan State Law Review 1, 28–29. See also, generally, Adrian Evans and Clark Cunningham, ‘Speciality Certification as an Incentive for Increased Professionalism: Lessons from Other Disciplines and Countries’ (2003) 54(4) South Carolina Law Review 987–1009.

29 Rob East, cited without further information in JP Ogilvy with Karen Czapanskiy, Clinical Legal Education: An Annotated Bibliography (2001), at digitalcommons.law.umaryland.edu/fac_pubs/268.

30 Useful discussions of assessment issues appear in, e.g. Hugh Brayne, Nigel Duncan and Richard Grimes, Clinical Legal Education: Active Learning in Your Law School (1998) Blackstone Press; Jerry R Foxhoven, ‘Beyond Grading: Assessing Student Readiness to Practice Law’ (2009) 16 Clinical Law Review 335; Karen Barton, Clark D Cunningham, Gregory Todd Jones and Paul Maharg, ‘Valuing What Clients Think: Standardized Clients and the Assessment of Communicative Competence’ (2006–07) 13 Clinical Law Review 1.

31 David J Nicol and Debra Macfarlane-Dick, ‘Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice’ (2006) 31 Studies in Higher Education 199, 200.

32 Anthony Niedwiecki, ‘Teaching for Lifelong Learning: Improving the Metacognitive Skills of Law Students through More Effective Formative Assessment Techniques’ (2012) 40 Capital University Law Review 149, 187–90.

33 Anthony Niedwiecki, cited at footnote 32, 181.

34 Roy Stuckey and others, cited at footnote 12, 192.

35 William Sullivan, Anne Colby, Judith Welch Wegner, Lloyd Bond and Lee S Shulman, Educating Lawyers: Preparation for the Profession of Law (2007) Jossey Bass (the Carnegie Report), 107.

36 Ross Hyams, cited at footnote 25, 83.

37 Ross Hyams, cited at footnote 25, 83.

38 Anthony Niedwiecki, cited at footnote 32, 152 (emphasis in original).

39 Anthony Niedwiecki, cited at footnote 32, 155–57.

40 See Adrian Evans and Ross Hyams, ‘Independent Evaluations of Clinical Legal Education Programs: Appropriate Objectives and Processes in an Australian Setting’ (2008) 17 Griffith Law Review 52. See also Adrian Evans, ‘Normative Attractions to Law and their Recipe for Accountability and Self-Assessment of Justice Education’ in Frank Bloch (ed), The Global Clinical Movement: Educating Lawyers for Social Justice (2011) Oxford University Press, Chapter 24, which provides a possible metric for a law school to self-assess its effectiveness in delivering justice (including clinical legal) education.

