Critical Perspectives on the Scholarship of Assessment and Learning in Law

Chapter 1

Of tails and dogs: Standards, standardisation and innovation in assessment

Paul Maharg and Julian Webb

Introduction: Policy, standards, innovation

The title of the conference from which some of the chapters in this book spring was ‘50 Years of Assessment in Legal Education’. The conference was an opportunity to look back, but also to look forward and think about how our legacy was formed in the last half century, and what of it we wanted to carry forward and shape differently in the future. In this chapter, we shall begin by giving a brief snapshot of legal education reform movements currently taking place in the Common Law world. We shall take one example of a recent consultation project in England and Wales, namely the Legal Education and Training Review (LETR), and analyse the project’s view on assessment.1 We shall then consider some of the hegemonic values and practices in assessment and why they can make change difficult to achieve. That change can take place, though, is evidenced by the assessment practices outlined in this book. It is also evidenced in other disciplines and other jurisdictions, and we shall consider some examples of that before ending with some examples of more radical assessment practices.

Before we embark on this, we should make our methodological stances clear. From the outset it should be said that, in general, the evaluation of student knowledge and skill across any form of boundary – a single classroom, an institution, a jurisdiction, a country, one profession against another, one world region against another – is highly problematic. In a comparison of Scottish and English school inspectorate regimes and practices, for instance, Clarke and Ozga point out the discourse and performative problems inherent in any evaluation of educational practice:

Gathering performance data, conducting audits and carrying out inspections involve different devices and techniques; construct different relationships and generate different forms of knowledge (and power). Such modes are combined in particular governance architectures or assemblages (including complexly overlapping and intersecting jurisdictional spaces: the local, the (multi-)national and the European, for example).2

PISA, the OECD’s Programme for International Student Assessment, is perhaps the best-known example of an attempt to evaluate educational outcomes across national boundaries. The tools of comparison operate in highly complex and differentiated contexts: countries have different systems of education, different points of assessment, different forms of assessment, different learning and teaching content and cultures and therefore different assessment practices and outcomes. In addition, education is often viewed as comprising sets of practices integral to the nation-state: how we educate is part of how we view our identity, the values we think we espouse, the political, historical and cultural embodiment of formation. Evaluation and comparison of results is therefore less a process of scientific measurement of results and more a reflection upon why differences exist and what they tell us about different educational systems. If, for instance, government inspection in England is seen as core to school audit and drives school attainment, how did Finland manage to attain a position high in the PISA rankings without any inspection regime at all?3

In spite of these difficulties there is much to be gained by promoting policy dialogue among OECD countries, and between OECD and non-OECD countries. Dialogue, though, is always value-laden, and in the process of creating that dialogue the OECD has become a player in the game, espousing the values that are created by the form of evaluation it employs in its highly complex evaluation programs. Such an evaluation sets out to be a ‘non-curriculum-based measure of comparative educational performance of students at the end of compulsory schooling in literacy, mathematics, science and problem-solving’.4 It seeks to be free of curriculum content, arguing that if there is to be a comparative element it must be in the practical application of knowledge in real-world tasks.5 In the end, though, as commentators have pointed out, the testing regime inevitably operates within a policy framework, which exerts pressure upon national governments as they seek to improve PISA ratings. PISA is thus only the start of a process of realignment of local and national educational systems to conform to the construction of education as defined by OECD policy.6

We hold that it is possible to learn much about assessment practices in legal education by comparing our practices in the legal education classroom, the profession and the jurisdiction with those beyond – with other Common Law jurisdictions worldwide, with medical education learning groups, or with historians, accountants and other professions. But we need to be aware of the values and grounds of our assumptions and our approaches in doing so, and our reasons for attempting such a comparison. For us here, a core theme is that the adoption of standards and standardisation in assessment, in many respects a welcome approach, also has consequences and outcomes that we should be aware of when we analyse the effects of such standards.

Legal education reform in the Common Law world

From even a superficial reading of the history of the last century of legal education, it is clear that reform has been a central feature of the landscape, enacted as change within institutions, the establishment of new institutions, or regulation imposed from outside the institution.7 Linked occasionally to significant moments of change in the history of either the universities or the professions, outside regulation was at first occasional and relatively slow, picking up speed in the 1970s. In recent decades it has accelerated in pace and intensity.8 In the last decade alone we can cite at least nine such movements, including the LETR report, discussed further below.9

In 2006–09, the Law Society of Scotland set aside a small-scale review of the primary program in professional training in favour of a national review of the entire legal educational process, from day one of law school through to the point of qualification after traineeship (with consideration also given to Continuing Professional Development, CPD).10 In Canada, in 2007, the Federation of the Law Societies of Canada (FLSC), like the Law Society of Scotland, carried out two years of national consultation relating to criteria for approving Common Law degrees for the purpose of entry into bar admission programs in Canada. As part of this process, the FLSC Task Force report for the first time laid out a set of competences for the degree, together with input standards regarding program design and resources, length of courses, staffing, facilities, information technology and law library, as well as specifying approval, compliance and reporting processes.11

Activity in the USA also began with the 2007 Carnegie Report, which, though it had no regulatory force, provided an impetus for change that gained considerable momentum following the onset of the global financial crisis (GFC). The Carnegie Report was highly critical of legal education’s failure to adequately develop either practice skills or the ethical and social dimensions of professionalism. The report also argued that law schools have lagged behind other professional schools in the ways they assess learning and provide feedback that improves learning outcomes.12

The GFC resulted in a significant downturn in the number of positions for young lawyers, and subsequently in the number of students entering law schools – a situation that is still a serious issue for US law schools. In its wake, a growing number of law schools began (and are continuing) independently to take ameliorative measures, with many implementing significant cuts in class size,13 as well as taking steps to reform the curriculum, often in line with Carnegie’s preferences for a more experiential curriculum. In the midst of these changes, the American Bar Association (ABA) launched, in 2013, a new legal education task force, which took little over a year to report on the perceived crisis in US law schools. In its report the task force left many of the critical questions about the cost of legal education unresolved. However, it did recommend reducing the burden of regulation imposed by the ABA Standards. These were identified as both a cause of high costs and a brake on innovation. The task force also broadly followed the Carnegie Report in emphasising the need for law schools to develop more practice-related curricula, and endorsed a move to more outcomes-based education.14

Concurrently with US developments, the Canadian Bar Association began the first comprehensive study of the state of the Canadian legal market, the Legal Futures Initiative, which culminated in a 2014 report, Futures: Transforming the Delivery of Legal Services in Canada.15 Significantly, the report and the initiative went hand-in-hand with another, the Equal Justice Initiative.16 Both of these reports emphasised the need for innovation in legal services, and the role of legal education and training in contributing to both a more innovative and a fairer legal services market. The Futures Report in particular called for more flexible models of education and training, and greater emphasis on innovation in legal education, with an eye both to reducing the cost of training and to better preparing students for a professional environment in which a much broader set of capabilities is now seen as critical, including, for example, emotional intelligence, digital and financial literacy, risk and project management, and marketing skills. Innovation in legal education was also a theme of the Reaching Equal Justice report, with law schools encouraged to develop more clinical education programs, and to involve themselves in legal incubator projects.17

In the midst of these other initiatives, the three leading regulators of professional education in England and Wales, ILEx Professional Standards (now CILEx Regulation, the regulatory body for legal executives), the Bar Standards Board (BSB) and the Solicitors Regulation Authority (SRA) commenced the lengthy process of reviewing professional legal education in what eventually became known as the Legal Education and Training Review (2011–13).18 The context for the review included the effects of liberalisation of the legal services market, implemented by the Legal Services Act 2007. Phase 1, a consultation over the current situation and future alternatives that also included a substantial literature review, was completed in 2013; and the SRA and BSB are currently involved in Phase 2 with proposals including a Solicitors Qualifying Exam (SQE) (of which more below), and continuing debate over the need for a ‘qualifying’ law degree.

On the other side of the world, the Law Admissions Consultative Committee, a committee of the Law Council of Australia, in 2014 signalled its own intention to review legal educational processes and standards in a (proposed) Limited Review of Academic Requirements. In their initial report (completed in 2015) they noted the great variety of standards, codes and outcomes populating the regulatory space in Australia, and cited the LETR report as follows:

the [LETR] report notes the lack of an overall and coherent legal education system as such. That being so, and in order to avoid a tournament of regulators as to who will regulate whom, the regulators are encouraged to consider greater collaboration … The report also identifies a number of over-arching issues for the regulators, designed to promote common learning outcomes and consistency.19

Most recently, the Standing Committee on Legal Education in Hong Kong has instituted a ‘comprehensive’ review of legal education, which reported in 2018, in the wake of a growing debate on the need for a common entry examination for solicitors at the end of vocational training.20

The increased activity globally in the regulation of legal education is indicative of heightened anxiety about scope, quality and standards, in the context of both a rapidly changing legal services market and a growing, global crisis in access to justice. Conventional legal knowledge and skills, while still very important, are no longer seen as enough. The need for greater practice-readiness is a recurrent theme, as is the need for a capacity for innovative thinking, ‘business solutions’ and also enhanced ethicality. This call for a wider range of competences is being matched by a general shift to more outcomes-based education and regulation. Strikingly, however, many of these reports say little of substance about the impact of these changes on assessment practices as such. Such neglect is hardly new; assessment is a critical part of the culture of learning in law schools, yet we still have very limited empirical evidence of its impact. Does it give us useful measures of student learning? What is it that we are actually measuring in law schools? Should there be standardised measures of attainment across law schools? Is it possible or even desirable to do a PISA for global legal education? The silence around assessment indicates uncertainty about the outcomes of evaluation of law school activities. Do they accurately reflect student learning, and could law school evaluations be better calibrated for the variety of stakeholders interested in such results?

LETR and assessment

LETR tried to answer at least some of these questions in the field of Legal Services Education and Training (LSET). One of our main concerns focused on the absence of ‘assurance of a consistent quality of outcomes and standards of assessment, particularly for those professions where an element of education or training is delivered by a range of semi-autonomous providers’.21 We saw this as one of a constellation of related concerns:

The key weaknesses in the system are: its reliance on relatively shallow, vague or narrow conceptions of competence; too great a reliance on initial qualification as a foundation of continuing competence; insufficient clarity and consistency around standards at points of entry; the absence, in general, of robust mechanisms for standardising assessment and a lack of coherence as regards transfer and exemption between regulated titles.22

As a result, Recommendation 2 stated:

Such guidance [i.e. that ‘learning outcome statements should be prescribed for the knowledge, skills and attributes expected of a competent member of each of the regulated professions’, and that the statements should be supported by ‘additional standards and guidance’] should require education and training providers to have appropriate methods in place for setting standards in assessment to ensure that students or trainees have achieved the outcomes prescribed.23

However, our recommendation needs to be tempered with the understanding that, here as elsewhere in legal educational research, the evidence base is weak, and consists largely of:

  1. small-scale qualitative studies
  2. under-defined or undefined success criteria
  3. few longitudinal studies or follow-ups (thus open to recency effects and other biases)
  4. few systematic attempts at replication or meta-analysis.

In Chapter Four of the report we outlined the move in a number of jurisdictions and professions to outcomes-based education and training. In medical education, we noted the growing recognition of two concerns. First, that ‘effective medical education must be more than a scientific education’, and that among the widening base of assessable outcomes were the doctor’s ‘capacity to understand and respond to the clinical, ethical, personal and social dimensions of illness and disease (Callahan 1998; Harden et al. 1999)’. Second, we noted that medical education has in recent years focused ‘more on the doctor’s accountability to and partnership with patients and the wider profession (Frank and Danoff, 2007; Stern et al. 2010; General Medical Council, 2009)’. In the process, we observed that competences themselves had altered, becoming more ‘complex, dynamic, developmental and context-dependent (Epstein 2002; [Frenk et al 2010])’.24

The implications for learning and assessment in legal education in England and Wales were considerable. Drawing on the range of data we had gathered on LSET within LETR we gave a broad outline of knowledge and skills gaps: the variability in the development of research skills and digital literacy; oral communications skills; commercial and social awareness; skills in the domains of the affective, the moral and in ‘habits of mind’.25 Many of these gaps were also problematic for assessment practices: as one contributor to the consultation pointed out, ‘they do not lend themselves to assessment through the conventional means of assessment regarded as the norm by the regulators’.26 We also highlighted the lack (relative to medicine) of robust techniques for standardising and validating assessment tools and outcomes;27 concerns regarding the practice validity of at least some assessments on vocational courses;28 and the debate more generally about centralised assessment.29 Unfortunately, in retrospect, we made no final recommendation with regard to the latter, but our views were plain from earlier sections of the report.

It is neither the place nor the purpose of this chapter to offer a detailed evaluation of the regulatory responses to LETR. Nonetheless, some observations can relevantly be made. First, the regulatory response, thus far, has been largely disappointing. Contrary to the report’s recommendations, there has been little coordination or attempt to set baseline standards of competence across the regulated occupations. As we noted in the report, ‘“Ultimately, all standards are policy decisions” … consequently the critical first question is not so much what the standard is but how it is derived’.30 The SRA’s work on day-one outcomes and standards raises real concerns in this regard. It has, at least arguably, resulted in little more than a repackaging of existing knowledge areas. There is little evidence of (consumer) risk-based thinking, and insufficient attention to many of the wider occupational capabilities that the report (and other projects, such as the Canadian Futures Initiative) highlights. At a minimum there is an argument that the outcomes and associated standards are thus both critically over- and under-inclusive.

Second, work so far on the proposed centralised Solicitors Qualifying Examination also fails to reassure us that the critical risks in such a process are being adequately addressed. The SRA has designed a separate two-part assessment of knowledge and skills, along the lines of the Qualified Lawyers Transfer Scheme (QLTS). The modularised assessment of knowledge must be completed first, and is likely to be assessed via computer-based objective testing across a range of knowledge areas,31 plus a skills assessment in legal research and writing. The second part will involve standardised practical exercises akin to the objective structured clinical examinations (OSCEs) used by medical schools and in the current QLTS. Whilst rather more detail has emerged over the two consultation processes and recent implementation papers, much remains to be developed for the deadline of 2021. Hence, our observations here are, perforce, general. We also accept, consistent with our discussion below, that there is much to commend in the move, in Stage 2 SQE, to a more realistic skills-based and client-centred form of assessment, though even here decisions on critical details such as the intensity, timing and task specificity of the assessments have the potential to make or mar the process. The greater concern of this chapter is with Stage 1’s potential systemic risks and individual consequences for learning and assessment. Key issues include:

  • The SQE as gatekeeper: The doubling-up of assessment and over-inclusiveness of the SQE 1 ‘curriculum’, noted above, creates a real risk that the SQE will increase opportunities to fail (and hence deny access to the profession) on grounds that are, at best, poorly correlated to actual professional competence, let alone future capability.
  • The SQE as built-in obsolescence: the extent to which the knowledge requirements, in drawing heavily on established regulation, also deliver a framework of knowledge and skills more suited to the 1990s than the 2020s remains a matter of some debate.32
  • The SQE as professional tail that wags the academic dog: the SRA proposals assume that there will need to be some specific preparation for the SQE, which law schools may or may not integrate into their curriculum. If institutions choose to integrate, rather than ignore the SQE, or bolt on a (substantial) ‘crammer’ preparation course (akin to the US Bar preparation courses), the impact of the SQE on undergraduate curricula and assessment practices will be profound. This is not least because the SQE includes substantial subject matter currently taught at the vocational stage. Even where schools do not integrate the SQE, it will likely have an attentional impact on student attitudes and behaviours.
  • The SQE as a drag on innovation and diversification of intellectual approaches: it follows that an unintended (or perhaps, from a regulatory perspective, unimportant) consequence of the SQE may be to reduce the breadth of degree courses, by focusing time and attention far more on SQE ‘basics’. It may also increase reluctance, and even reduce capacity, to innovate in teaching and assessment, particularly in areas covered by ‘the test’.
  • The SQE as market changer: it also follows that the effects of the SQE could be radical in terms of things that have nothing to do with professional competence – for example, directly influencing marketing and recruitment in law schools; generating new quality indicators (e.g. attempts to rank law schools by SQE pass rates); and enabling both primary and secondary markets in SQE preparation courses (with consequent impacts on access and diversity).33

Challenging hegemonies

Assessment, as LETR acknowledges, exists within a frame of existing presumptions about what knowledge and skill is and does, how it relates, how it is relevant to legal education, and how it is enacted in the classroom. Frequently these presumptions, because they are often unquestioned, harden into a hegemonic way of teaching law. This has consequences for assessment, not least because the way that law is learned creates assumptions about forms of assessment. We can illustrate this in Table 1:

Table 1: Learning and assessment patterns

  1. If learning is teacher-focused, then assessment may be teaching-centred, not learner-centred.
  2. If learning follows a transmission model of education, then assessment may focus only on what is supposed to have arrived and/or been delivered.
  3. If learning focuses only on the individual, then assessment may be individual and alienating, making in-depth collaborative peer-review or self-review difficult to bring about.
  4. If learning consists of monolithic and doctrinal legal content, then assessment may lack interdisciplinarity, with little assessment of skills, values and attitudes as well as knowledge.
  5. If learning is constructed as taking place in either academy or professional practice programs, then assessment may be problematic, because the content and forms of academic assessments cannot transfer well to professional learning and the formation of identity, and transfer from academic to practice programs is awkward.

On point 5 in the table, such hegemony restricts the contexts of learning and assessment. Ever since the work of Godden and Baddeley we have known that context can be a powerful determinant of learning and memory.34 Where class-restricted learning is the dominant mode, though, meaningful assessment of learning in knowledge, skills and values rarely takes place in anything but another version of the classroom, and there is little space in the curriculum for situated learning. The literature on this in healthcare is overwhelmingly persuasive.35

In the following case studies, we give examples of alternative modes of assessment that are applicable to undergraduate and postgraduate (both academic and vocational) education, and which give alternatives to the situations outlined in Table 1.

Adapting from other disciplines – the case of client-centred assessment

With the exception of clinical legal education, one of the striking features of legal education is the almost complete absence from it of those whose lives are affected by the law and justice systems studied in law school. Law is frequently taught as if it were a corpse: dissected, analysed, used to explain the effects of policy, rule-making and social consequence. But rarely do we hear from those whose lives are affected by legal decision-making. The Simulated Client Initiative (SCI) is one attempt to change that situation.36 It involves training lay people as ‘simulated clients’ (SCs) to do two things well: to simulate the narrative that a client brings to a law office, and to assess the client-facing skills of the lawyer. It is based upon a substantial literature from the medical field, where simulated patients are used extensively in the training and education of doctors, both in primary education and in the ongoing assessment of medical professionals’ skills and patient-facing attitudes.37

The heuristic is, of course, used in many other fields, most of them in health studies and medicine. In those disciplines there is a body of literature demonstrating the inutility of prior systems of assessment, and the move to create new, fairer, more valid and more reliable forms of assessment. Thus, in one major study spanning three years of examinations and the analysis of 10,000 students, the National Board of Medical Examiners in the USA found that the correlation between two examiners’ evaluations of a candidate in a single oral assessment of performance with a patient (a fairly standard form of assessment of knowledge and skill) was low: less than 0.25.38 The results of such studies led to the development of assessments such as standardised or simulated patients (SPs) and objective structured clinical examinations (OSCEs).
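
To put that figure in context (a gloss on the statistic, not the study’s own presentation): for a Pearson correlation r between two examiners’ marks, the proportion of variance in one examiner’s marks that is predictable from the other’s is r², so an r below 0.25 implies a shared variance below about six per cent:

$$ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}, \qquad r^2 < 0.25^2 \approx 0.06 $$

For most practical purposes, then, the two examiners were marking independently of one another.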

The literature on the development of these forms of assessment is large and growing – not just primary studies, but systematic reviews and meta-reviews as well. Thus, one review of the literature found that feedback by SPs was important for students;39 in another, students appreciated the use of both SPs and real patients, and for different reasons.40 In one typical study of the use of SPs in physical therapy, ‘the use of an SP and a series of well-designed evaluation instruments were found to possess a high degree of validity and reliability for measuring clinical performance’.41 In another, on the use of ‘virtual patients’, Consorti et al. tested for ‘clinical reasoning’ and found that the ‘pooled ES [effect size] for studies addressing communication skills and ethical reasoning was lower than for clinical reasoning outcome’.42 We shall return to this below. In general these methods are now used in high-stakes competency examinations for medical and health-related licensure in many countries.
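
For readers less familiar with meta-analytic reporting, a pooled effect size of this kind is conventionally an inverse-variance-weighted average of per-study standardised mean differences; the sketch below gives the usual definitions, not Consorti et al.’s specific notation:

$$ d_j = \frac{\bar{X}_{T,j} - \bar{X}_{C,j}}{s_{\mathrm{pooled},j}}, \qquad \bar{d} = \frac{\sum_j w_j d_j}{\sum_j w_j}, \quad w_j = \frac{1}{\widehat{\mathrm{Var}}(d_j)} $$

where $\bar{X}_{T,j}$ and $\bar{X}_{C,j}$ are the treatment and control group means in study $j$. A lower pooled ES for communication skills and ethical reasoning than for clinical reasoning therefore means that, averaged across studies, virtual patients shifted the former outcomes less than the latter.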

The SCI began with the publication of a study in 2006 that demonstrated, through correlational statistical analysis, that the use of simulated clients (SCs) was a reliable and valid method of assessing client interviewing skills.43 We asked the following questions:

  1. Was our current system of teaching and assessing interviewing skills sufficiently reliable and valid?
  2. Could the standardised patient method be translated successfully to the legal domain?
  3. Was the method of standardised client training and assessment cost-effective?
  4. Was the method of standardised client training and assessment more reliable, valid and cost-effective than the current system?

It was clear from our research that our then-current system of teaching and assessing interviewing skills was low on reliability and validity. The results of the pilot proved that the SP method could be translated successfully to legal studies, that SC training and assessment was cost-effective, and that it was more reliable, valid and cost-effective than the then-current system of using students, actors and tutors to educate in and assess interviewing skills (effectively a variant on the practices still current in many law schools).44

What is significantly different about the assessment is that salience is given to the client’s experience of the interview, and most of the grade is given by the client. Not all aspects of client interviewing, of course, can be assessed by clients, but much of it can. The assessment is also highly flexible and can be embedded alongside other assessments of skills, knowledge and values, particularly in OSCEs. Above all it is rigorous. The SRA has adopted this approach extensively in its Qualified Lawyers Transfer Scheme (QLTS), the assessment of the skills and knowledge of lawyers qualified in other jurisdictions who wish to practise in England. As Fry, Crewe and Wakeford observed of their evaluation of the QLTS methodology,

Overall the test quality is remarkably good for such a new set of assessment procedures and challenging targets for a new high stakes assessment have largely been met.

They found, too, that the assessment in the QLTS proved to be both valid and reliable:45

Assessment by standardised clients proved to be very reliable, with the six standardised client assessments conducted for each candidate by a total of 45 different actors having an alpha coefficient of 0.81 and SEm of 5.07% in OSCE #2.46
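
For readers outside psychometrics, the two statistics in that passage relate as follows (the standard definitions, not anything specific to the QLTS report). Cronbach’s alpha estimates the internal-consistency reliability of the composite score across the k assessments, and the standard error of measurement (SEm) converts reliability into an uncertainty band around an individual candidate’s mark:

$$ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right), \qquad \mathrm{SEm} = \sigma_X \sqrt{1 - \alpha} $$

where $\sigma^2_{Y_i}$ is the variance of the $i$-th assessment and $\sigma^2_X$ the variance of the total score. On those definitions, an SEm of 5.07 percentage points implies a 95 per cent confidence band of roughly ±10 percentage points (1.96 × 5.07) around a candidate’s observed score.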

The SCI holds much significance for legal education generally. Among other points, it exposes the cognitive poverty of much conventional law school assessment; it makes prominent the ethics of the client encounter; and it demonstrates that legal education as a discipline has much to learn from forms of assessment in other disciplines.

Learning from other jurisdictions – the future of digital simulation assessment

Digital simulations come in many forms, but all have in common a number of basic features: they simulate forms of legal process, they engage students as persons within a role-play, and they use digital technologies to create the ‘realia’ of a simulated transaction. As a result, such simulations can be used for both formative and summative assessment, and are highly flexible. We can, for instance, assess:

  • professionalism and ethical performance
  • skilled performance to benchmarked levels
  • substantive knowledge of law
  • procedural knowledge
  • many other categories of assessable experience.

Underpinning this range of assessment activity is a model of learning from simulation that supports the diversity of aims – transactional learning.47 This model is multi-level. At its most superficial it describes the learning that students draw from immersion in disciplinary and professional transactions, whatever they may be. At a deeper level, it is created by the alignment and oscillation between teaching practice and student performance.48 At an even deeper level of educational philosophy, it references John Dewey’s anti-epistemology of knowledge, where thought itself becomes existential, fused with the act of enquiry and its ineluctable context. Learning is a transaction: ‘not the acquisition of knowledge about the world … but the acquisition, coordination and practice of habits, impulses and dispositions towards action in the world’.49

Rendering this dispositive model into practical guidelines for sim learning, Maharg drew up a model of learning in and from sims with seven key characteristics:

  • active learning
  • through performance in authentic transactions
  • involving reflection in & on learning
  • deep collaborative learning
  • holistic or process learning
  • with relevant professional assessment
  • that includes ethical standards.50

One example of a sim environment is the SIMPLE Project, which created a case management application that could be adapted to other forms of knowledge representation; for example, maps, communications, etc.51 Within the simulated environment, assessment was highly flexible and adaptive to the form of transaction and learning outcome. In order of sophistication, it could include the following forms:

  1. Discrete tasks; for example, drafting, letter-writing, research
  2. Whole transactional file + performative skill; for example, advocacy or negotiation
  3. Whole transactional file + specific tasks, where students are required to complete the entire transaction, but only certain files or nodal points in the transaction are assessed
  4. Whole transactional file + specific tasks + performative skill – as in point 3 above, but with specific skills added; for example, collaboration with other students, interviewing witnesses, or legal research.

If we take the simplest of these forms of assessment, namely the first, we can see how adaptive the assessment could be in terms of what might be called the topography of an assessment task (a brief illustrative sketch follows the list below). A designer could:

  1. Set the context of the task for students in granular detail. Or not: the designer could let students figure that out for themselves, which could be part of the assessment task.
  2. Set the task itself. The question here is how much detail is included – for example, is the task supported with templates, guidelines, commented examples?
  3. Design feed-forward, without doing the task for students.
  4. Deadline a task precisely, within a timeline of other tasks, or leave the task’s completion date to students to organise for themselves.
  5. Decide whether students, on completion of the task, send it to staff in role or out of role.
  6. Decide whether staff, on completion of the assessment, send feedback to students in role or out of role.
  7. Debrief with students, either in role or out of role.
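
To make this design space concrete, the sketch below shows how the decisions above might be captured as a task configuration. It is illustrative only: the class, field and example names are our assumptions for the purpose of illustration, not part of the SIMPLE platform or any other sim environment.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class Mode(Enum):
    """Whether a communication happens inside or outside the simulation's fiction."""
    IN_ROLE = "in role"
    OUT_OF_ROLE = "out of role"

@dataclass
class SimTaskDesign:
    """Design decisions for one discrete assessed task in a simulated transaction.

    Mirrors the seven designer choices listed above; all names are hypothetical.
    """
    name: str                                          # e.g. a drafting or research task
    context_brief: Optional[str] = None                # granular briefing, or None so students work the context out themselves
    supports: list[str] = field(default_factory=list)  # templates, guidelines, commented examples
    feed_forward: Optional[str] = None                 # guidance that helps without doing the task for students
    deadline: Optional[date] = None                    # fixed date in the transaction timeline, or None if student-organised
    submission: Mode = Mode.IN_ROLE                    # students send completed work to staff in or out of role
    feedback: Mode = Mode.IN_ROLE                      # staff return feedback in or out of role
    debrief: Mode = Mode.OUT_OF_ROLE                   # closing discussion in or out of role

# Example: a lightly scaffolded drafting task with an open completion date.
letter_task = SimTaskDesign(
    name="initial letter to opposing agents",
    context_brief=None,  # students must infer the context from the case file
    supports=["style guide", "commented example letter"],
    feed_forward="Checklist of points a well-formed letter of claim should cover.",
    deadline=None,       # completion date left to the student firm to organise
)
```

Moving up the scale of sophistication in the numbered forms above would then mean composing several such tasks into a whole transactional file, with only some nodal points flagged for assessment.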

The key issue is that a member of staff becomes a designer of assessment, and as with any decision made by any designer, these decisions have inescapable characteristics that are both functional and aesthetic. Both are present at every level, and the interaction between the two becomes richer and more complex the further one moves up the scale of sophistication.

So far we have considered the design of a sim task as assessment, but the guided construction of learning around the task is essential too. In sims, whether carried out individually or in a group such as a ‘virtual firm’, such support enhances the assessment. Barton and Westwood described how coaching could be developed within a Practice Management module that was used to support student learning and assessment in virtual firms.52 This was elaborated in other jurisdictions, notably at the University of New Hampshire School of Law and The Australian National University School of Legal Practice, where variants of the model developed at Strathclyde were used to develop professional identity, support disruptive pedagogy and enhance student wellbeing.53

There are of course complexities to this approach to assessment. It is naive simply to outsource human behaviour to technology and expect no change in that behaviour, for technology always changes human behaviour, sometimes in profound and hidden ways. A microwave oven changes how we cook, how we arrange our time, what we eat, our health and bodily functions. We set store by reputation scores on online sites such as eBay and Amazon; and their applications nudge us via likes and dislikes into forms of behaviour that corporate app designers manipulate for profit.54 The literature on knowledge management systems in law firms is a good example: systems designed and implemented from a materialist perspective of human relations and firm profit often produce unexpected outcomes, both in relations within the firm and between fee-earners and clients.55

Thus the digital environment changes student behaviour, but not necessarily as teachers and designers may wish.56 This extends to forms of thinking about the law, and forms of education. As Leith pointed out some time ago, expert systems in the 1980s were popular with lawyers at least in part because lawyers had been brought up on simplistic rule-oriented views of law’s reasoning.57 The failure of a deep AI to develop at this stage in legal technology was due partly to a lack of hardware capacity and partly to a pre-internet lack of applications development; but it also stemmed from a faulty model of jurisprudence applied to law, one that had little basis in social need and almost no model of social development. In developing legal education applications, in the domain of simulation and elsewhere, we need to remember this early failure, and be aware of the meta-model of jurisprudential thinking that emerges from our educational interventions.

Final words

Through this chapter we have sought to demonstrate two things: how the relationship between standards, assessment and competence is problematic, and how it remains radically underdetermined in much legal academic and regulatory practice.

The failure of regulators and teachers to engage with best academic practice and innovation, in particular, is a continuing problem, as is manifest in some aspects of the reforms that have followed from the LETR process. We have expressed reservations about the current drive towards centralisation of assessment in England and Wales, and about the risk that it will impose an unhelpful hegemony of old forms of legal knowledge and praxis over both legal education and the provision of legal services. We worry that conventional assumptions about forms of assessment are not being sufficiently challenged in these processes, and have sought to highlight both the potential for new thinking and new assessment practices, and the need to be aware of unintended consequences in implementing new and old assessment techniques.

As regulators in jurisdictions such as Hong Kong and Australia take up the reformist baton, we acknowledge the difficulty of their task. Assessment is a powerful tail, with the potential to send the legal education dog in some very unhelpful directions. It must not be relegated to the usual afterthought, and yet (as the work undertaken in LETR shows) existing research on assessment in law is often lacking in rigour and replicability, and it is a major task to gather and interpret lessons for law from beyond the discipline. The work of engagement, synthesis and reflection in workshops, conferences and series such as this is, however, an important start.

References

Alter A, ‘How Technology Gets Us Hooked’ The Guardian (London, 28 February 2017) <www.theguardian.com/technology/2017/feb/28/how-technology-gets-us-hooked> accessed 3 January 2017.

Barton K and others, ‘Valuing What Clients Think: Standardized Clients and the Assessment of Communicative Competence’ (2006) 13 Clinical Law Review 1.

Barton K, Garvey J and Maharg P, ‘“You Are Here”: Learning Law, Practice and Professionalism in the Academy’ in Zenon Bankowski, Maksymilian Del Mar and Paul Maharg (eds), The Arts and the Legal Academy: Beyond Text in Legal Education (Ashgate Publishing 2012).

Barton K and Westwood F, ‘From Student to Trainee Practitioner – A Study of Team Working as a Learning Experience’ (2006) 2006 Web Journal of Current Legal Issues <www.bailii.org/uk/other/journals/WebJCLI/2006/issue3/barton-westwood3.html>.

Bernstein B and Solomon J, ‘“Pedagogy, Identity and the Construction of a Theory of Symbolic Control”: Basil Bernstein Questioned by Joseph Solomon’ (1999) 20 British Journal of Sociology of Education 265. doi.org/10.1080/01425699995443.

Bieber T and Martens K, ‘The OECD PISA Study as a Soft Power in Education? Lessons from Switzerland and the US’ (2011) 46 European Journal of Education 101. doi.org/10.1111/j.1465-3435.2010.01462.x.

Bokken L and others, ‘Feedback by Simulated Patients in Undergraduate Medical Education: A Systematic Review of the Literature’ (2009) 43 Medical Education 202. doi.org/10.1111/j.1365-2923.2008.03268.x.

——, ‘Students’ Views on the Use of Real Patients and Simulated Patients in Undergraduate Medical Education’ (2009) 84 Academic Medicine 958.

Briscoe F, Brivot M and Tsai W, ‘Don’t Talk to Strangers? Technology-Enabled Relational Strategies and Value Creation’ (2015) 2015 Academy of Management Annual Meeting Proceedings 146. doi.org/10.5465/AMBPP.2015.31.

Callahan D, ‘AMEE Guide No. 14: Outcomes-Based Education: Preface – Medical Education and the Goals of Medicine’ (1998) 20 Medical Teacher 85. doi.org/10.1080/01421599881147.

Canadian Bar Association, Reaching Equal Justice Report: An Invitation to Envision and Act <www.cba.org/CBAMediaLibrary/cba_na/images/Equal%20Justice%20-%20Microsite/PDFs/EqualJusticeFinalReport-eng.pdf> accessed 18 January 2018.

——, Futures: Transforming the Delivery of Legal Services in Canada (2014) <www.cba.org/CBA-Legal-Futures-Initiative/Reports/Futures-Transforming-the-Delivery-of-Legal-Service> accessed 18 January 2018.

——, ‘Equal Justice Initiative’ (No date) <www.cba.org/CBA-Equal-Justice/Equal-Justice-Initiative> accessed 19 January 2018.

Clarke J and Ozga J, ‘Governing by Inspection? Comparing School Inspection in Scotland and England’ (2011) Paper for Social Policy Association Conference, University of Lincoln, 4–6 July <https://pdfs.semanticscholar.org/91f0/4b01104075d7f27e76df3770c7ebb99afc0d.pdf> accessed 12 July 2019.

Consorti F and others, ‘Efficacy of Virtual Patients in Medical Education: A Meta-Analysis of Randomized Studies’ (2012) 59 Computers & Education 1001. doi.org/10.1016/j.compedu.2012.04.017.

Davies M, ‘Changes to the Training of English and Welsh Lawyers: Implications for the Future of University Law Schools’ (2018) 52 The Law Teacher 100.

Epstein RM, ‘Defining and Assessing Professional Competence’ (2002) 287 JAMA: The Journal of the American Medical Association 226. doi.org/10.1001/jama.287.2.226.

Federation of Law Societies of Canada, ‘Common Law Degree Implementation Committee, Final Report’ (Federation of Law Societies of Canada 2011) <http://docs.flsc.ca/Implementation-Report-ECC-Aug-2011-R.pdf> accessed 18 January 2018.

Frank JR and Danoff D, ‘The CanMEDS Initiative: Implementing an Outcomes-Based Framework of Physician Competencies’ (2007) 29 Medical Teacher 642. doi.org/10.1080/01421590701746983.

Frank JR and others, ‘Toward a Definition of Competency-Based Education in Medicine: A Systematic Review of Published Definitions’ (2010) 32 Medical Teacher 631. doi.org/10.3109/0142159X.2010.500898.

Frenk J and Chen L, ‘Health Professionals for a New Century: Transforming Education to Strengthen Health Systems in an Interdependent World’ (2010) 376 The Lancet 1923. doi.org/10.1016/S0140-6736(10)61854-5.

Fry E, Crewe J and Wakeford R, ‘The Qualified Lawyers Transfer Scheme: Innovative Assessment Methodology and Practice in a High Stakes Professional Exam’ (2012) 46 The Law Teacher 132. doi.org/10.1080/03069400.2012.681174.

General Medical Council, ‘Tomorrow’s Doctors: Outcomes and Standards for Undergraduate Medical Education’ (GMC, 2009) <www.ub.edu/medicina_unitateducaciomedica/documentos/TomorrowsDoctors_2009.pdf>.

George S, Haque MS and Oyebode F, ‘Standard Setting: Comparison of Two Methods’ (2006) 6 BMC Medical Education. doi.org/10.1186/1472-6920-6-46.

Gerkman A and others, ‘Ahead of the Curve. Turning Law Students into Lawyers. A Study of the Daniel Webster Scholar Honors Program at the University of New Hampshire School of Law’ (Institute for the Advancement of the American Legal System 2015).

Godden DR and Baddeley AD, ‘Context-Dependent Memory in Two Natural Environments: On Land and Underwater’ (1975) 66 British Journal of Psychology 325. doi.org/10.1111/j.2044-8295.1975.tb01468.x.

Grek S, ‘OECD as a Site of Coproduction: European Education Governance and the New Politics of “Policy Mobilization”’ (2014) 8 Critical Policy Studies 266. doi.org/10.1080/19460171.2013.862503.

Harden RM, Crosby JR and Davis MH, ‘AMEE Guide No. 14: Outcome-Based Education: Part 1 – An Introduction to Outcome-Based Education’ (1999) 21 Medical Teacher 7. doi.org/10.1080/01421599979969.

Hubbard JP and others, ‘An Objective Evaluation of Clinical Competence – New Technics Used by the National Board of Medical Examiners’ (1965) 272 The New England Journal of Medicine 1321. doi.org/10.1056/NEJM196506242722505.

James C and Koo J, ‘The EU Law “Core” Module: Surviving the Perfect Storm of Brexit and the SQE’ (2018) 52 The Law Teacher 68.

Ladyshewsky R and others, ‘Evaluating Clinical Performance in Physical Therapy with Simulated Patients’ (2000) 14 Journal of Physical Therapy Education 31. doi.org/10.1097/00001416-200001000-00008.

Leith P, ‘Legal Expertise and Legal Expert Systems’ (1986) 2 International Review of Law, Computers & Technology 1.

Maharg P, ‘The Gordian Knot: Regulatory Relationship and Legal Education’ (2017) 4 Asian Journal of Legal Education 79. doi.org/10.1177/2322005817700185.

——, Transforming Legal Education: Learning and Teaching the Law in the Early Twenty-First Century (Ashgate Publishing 2007).

Peets AD and Ayas NT, ‘Simulation in Pulmonary and Critical Care Medicine’ in Adam I Levine and others (eds), The Comprehensive Textbook of Healthcare Simulation (Springer Science+Business Media New York 2013). doi.org/10.1007/978-1-4614-5993-4_37.

Rethans J-J and others, ‘Unannounced Standardised Patients in Real Practice: A Systematic Literature Review’ (2007) 41 Medical Education 537. doi.org/10.1111/j.1365-2929.2006.02689.x.

Ruessler M and others, ‘Simulation Training Improves Ability to Manage Medical Emergencies’ (2010) 27 Emergency Medicine Journal 734. doi.org/10.1136/emj.2009.074518.

Shepard RT, Report and Recommendations. American Bar Association. Task Force on the Future of Legal Education (American Bar Association 2014).

Sielk M and others, ‘Do Standardised Patients Lose Their Confidence in Primary Medical Care? Personal Experiences of Standardised Patients with GPs’ (2006) 56 The British Journal of General Practice 802.

Standing Committee on Legal Education and Training, Comprehensive Review of Legal Education and Training in Hong Kong: Final Report of the Consultants (April 2018) <www.sclet.gov.hk/eng/pub.htm> accessed 14 June 2018.

Sullivan WM and others, Educating Lawyers: Preparation for the Profession of Law (Jossey-Bass 2007) <http://archive.carnegiefoundation.org/pdfs/elibrary/elibrary_pdf_632.pdf>.

Webb J and others, ‘Setting Standards: The Future of Legal Services Education and Training Regulation in England and Wales’ (SRA, BSB, IPS 2013).


1 Julian Webb and others, ‘Setting Standards: The Future of Legal Services Education and Training Regulation in England and Wales’ (SRA, BSB, IPS 2013).

2 John Clarke and Jenny Ozga, ‘Governing by Inspection? Comparing School Inspection in Scotland and England’ (2011) Paper for Social Policy Association Conference, University of Lincoln, 4–6 July <https://pdfs.semanticscholar.org/91f0/4b01104075d7f27e76df3770c7ebb99afc0d.pdf> accessed 12 July 2019.

3 ibid 6.

4 Sotiria Grek, ‘OECD as a Site of Coproduction: European Education Governance and the New Politics of “Policy Mobilization”’ (2014) 8 Critical Policy Studies 266, 270.

5 See, for example, PISA 2018 Draft Analytical Frameworks at <www.oecd.org/pisa/data/PISA-2018-draft-frameworks.pdf>.

6 Tonia Bieber and Kerstin Martens, ‘The OECD PISA Study as a Soft Power in Education? Lessons from Switzerland and the US’ (2011) 46 European Journal of Education 101.

7 Examples of change within the universities include the founding in England of the University of London in the mid-19th century, which provided an alternative to the college-based system of university education based in Oxford and Cambridge. Also significant was the establishment in 1858 of the university’s external studies program, and the consequent uncoupling of its examinations from study at a particular institution. Within legal education itself, the profession in England, at first dominant in legal education, has gradually relinquished control over many areas of legal education to higher education. The manoeuvre warfare between the two camps continues to this day.

8 Webb and others (n 1) summarised this in their literature review, and brought up to date earlier analyses of the reform movement. Numerous articles confirm this.

9 It should be acknowledged that this account is extremely partial; it focuses primarily on the most developed, large, Common Law jurisdictions in the UK, North America and Australia; reform measures in India, South Africa (and other parts of the Anglophone subcontinent), or in smaller jurisdictions such as Singapore and New Zealand, have not been considered.

10 For a brief summary of the changes made to the program, see Paul Maharg, ‘The Gordian Knot: Regulatory Relationship and Legal Education’ (2017) 4 Asian Journal of Legal Education 79.

11 Federation of Law Societies of Canada, ‘Common Law Degree Implementation Committee, Final Report’ (Federation of Law Societies of Canada 2011) <http://docs.flsc.ca/Implementation-Report-ECC-Aug-2011-R.pdf> accessed 18 January 2018.

12 See William M Sullivan and others, Educating Lawyers: Preparation for the Profession of Law (Jossey-Bass 2007) <http://archive.carnegiefoundation.org/pdfs/elibrary/elibrary_pdf_632.pdf>.

13 Including institutions in the US ‘top 50’ law schools: for example, between 2011 and 2015, Michigan Law cut its first-year class by 26 per cent. See <www.bloomberg.com/news/articles/2016-01-26/the-best-law-schools-are-attracting-fewer-students>.

14 Randall T Shepard, Report and Recommendations. American Bar Association. Task Force on the Future of Legal Education (American Bar Association 2014).

15 Canadian Bar Association, Futures: Transforming the Delivery of Legal Services in Canada (2014) <www.cba.org/CBA-Legal-Futures-Initiative/Reports/Futures-Transforming-the-Delivery-of-Legal-Service> accessed 18 January 2018.

16 Canadian Bar Association, ‘Equal Justice Initiative’ (nd) <www.cba.org/CBA-Equal-Justice/Equal-Justice-Initiative> accessed 19 January 2018.

17 Canadian Bar Association, Reaching Equal Justice Report: An Invitation to Envision and Act <www.cba.org/CBAMediaLibrary/cba_na/images/Equal%20Justice%20-%20Microsite/PDFs/EqualJusticeFinalReport-eng.pdf> accessed 18 January 2018.

18 Webb and others (n 1).

19 ibid vii. The proposal for a ‘limited review’ received significant pushback from stakeholders, with the consequence that a separate ‘Assuring Professional Competence Committee’ (APCC) was established in late 2017 to undertake a more substantial (though not research-led) review: see the APCC landing page at <www.lawcouncil.asn.au/resources/law-admissions-consultative-committee/assuring-professional-competence-committee> accessed 14 June 2018.

20 See Standing Committee on Legal Education and Training, Comprehensive Review of Legal Education and Training in Hong Kong: Final Report of the Consultants (April 2018) <www.sclet.gov.hk/eng/pub.htm> accessed 14 June 2018. It should be noted that the Law Society and Bar Association continue to be the primary regulators of legal training in Hong Kong.

21 Webb and others (n 1) xii.

22 ibid xii–xiii.

23 ibid xiii (italics in original).

24 ibid 120. The references we cite are as follows, in order of citation: D Callahan, ‘AMEE Guide No. 14: Outcomes-Based Education: Preface – Medical Education and the Goals of Medicine’ (1998) 20 Medical Teacher 85; RM Harden, JR Crosby and MH Davis, ‘AMEE Guide No. 14: Outcome-Based Education: Part 1 – An Introduction to Outcome-Based Education’ (1999) 21 Medical Teacher 7; Jason R Frank and Deborah Danoff, ‘The CanMEDS Initiative: Implementing an Outcomes-Based Framework of Physician Competencies’ (2007) 29 Medical Teacher 642; Julio Frenk and Lincoln Chen, ‘Health Professionals for a New Century: Transforming Education to Strengthen Health Systems in an Interdependent World’ (2010) 376 The Lancet 1923; General Medical Council, ‘Tomorrow’s Doctors: Outcomes and Standards for Undergraduate Medical Education’ (GMC, 2009) <www.ub.edu/medicina_unitateducaciomedica/documentos/TomorrowsDoctors_2009.pdf>; RM Epstein, ‘Defining and Assessing Professional Competence’ (2002) 287 JAMA: The Journal of the American Medical Association 226; Jason R Frank and others, ‘Toward a Definition of Competency-Based Education in Medicine: A Systematic Review of Published Definitions’ (2010) 32 Medical Teacher 631.

25 Webb and others (n 1) 131–140.

26 ibid 140.

27 ibid 144, 212.

28 ibid 147–148.

29 ibid 148.

30 ibid 150, quoting S George, MS Haque and F Oyebode, ‘Standard Setting: Comparison of Two Methods’ (2006) 6 BMC Medical Education, doi.org/10.1186/1472-6920-6-46.

31 Stage 1 is intended to comprise six ‘functional knowledge assessments’ covering: (i) Principles of Professional Conduct, Public and Administrative Law, and the Legal Systems of England and Wales; (ii) Dispute Resolution in Contract or Tort; (iii) Property Law and Practice; (iv) Commercial and Corporate Law and Practice; (v) Wills and the Administration of Estates and Trusts; and (vi) Criminal Law and Practice. The emphasis on ethics and ‘practice’ in these areas, together with the addition of commercial and corporate law, radically distinguishes Stage 1 from the existing academic ‘core’.

32 See Cherry James and John Koo, ‘The EU Law “Core” Module: Surviving the Perfect Storm of Brexit and the SQE’ (2018) 52 The Law Teacher 68.

33 See Mark Davies, ‘Changes to the Training of English and Welsh Lawyers: Implications for the Future of University Law Schools’ (2018) 52 The Law Teacher 100.

34 DR Godden and AD Baddeley, ‘Context-Dependent Memory in Two Natural Environments: On Land and Underwater’ (1975) 66 British Journal of Psychology 325.

35 Adam D Peets and Najib T Ayas, ‘Simulation in Pulmonary and Critical Care Medicine’ in Adam I Levine and others (eds), The Comprehensive Textbook of Healthcare Simulation (Springer Science+Business Media New York 2013); Miriam Ruessler and others, ‘Simulation Training Improves Ability to Manage Medical Emergencies’ (2010) 27 Emergency Medicine Journal 734.

36 See The Simulated Client Initiative <http://zeugma.typepad.com/sci> accessed 18 January 2018.

37 For example, in one study on inter-doctor variation on managing headaches, SPs were used with real GPs, unannounced. In post-consultation discussion of their experiences the SPs were ‘very dissatisfied with the majority of GPs visited’, and their confidence in primary care was shaken by their experiences. See Martin Sielk and others, ‘Do Standardised Patients Lose Their Confidence in Primary Medical Care? Personal Experiences of Standardised Patients with GPs’ (2006) 56 The British Journal of General Practice 802. See also this meta-review: Jan-Joost Rethans and others, ‘Unannounced Standardised Patients in Real Practice: A Systematic Literature Review’ (2007) 41 Medical Education 537.

38 John P Hubbard and others, ‘An Objective Evaluation of Clinical Competence – New Technics Used by the National Board of Medical Examiners’ (1965) 272 The New England Journal of Medicine 1321.

39 Lonneke Bokken and others, ‘Feedback by Simulated Patients in Undergraduate Medical Education: A Systematic Review of the Literature’ (2009) 43 Medical Education 202.

40 Lonneke Bokken and others, ‘Students’ Views on the Use of Real Patients and Simulated Patients in Undergraduate Medical Education’ (2009) 84 Academic Medicine 958.

41 Richard Ladyshewsky and others, ‘Evaluating Clinical Performance in Physical Therapy with Simulated Patients’ (2000) 14 Journal of Physical Therapy Education 31.

42 Fabrizio Consorti and others, ‘Efficacy of Virtual Patients in Medical Education: A Meta-Analysis of Randomized Studies’ (2012) 59 Computers & Education 1001.

43 See Karen Barton and others, ‘Valuing What Clients Think: Standardized Clients and the Assessment of Communicative Competence’ (2006) 13 Clinical Law Review 1. Interestingly, while there was high correlation between tutors and SCs, there was little correlation between students’ self-assessment of their performances and either tutor or SC assessment; which showed us that there was considerable work to be done to improve student self-awareness of their own performance and skill level. This would not have become apparent, of course, had we not undertaken the study.

44 Currently (2019) SCs are used in Strathclyde Law School’s Diploma in Legal Professional Practice, the Signet Accreditation of the WS Society in Edinburgh, the University of New Hampshire’s Daniel Webster Scholars Program, Northumbria Law School’s LLB, Kwansei Gakuin University Law School, Osaka, the SRA’s Qualified Lawyers’ Transfer Scheme (QLTS), the Law Society of Ireland (CPD), Hong Kong University Faculty of Law and the Chinese University of Hong Kong PCLL programs, the University of Adelaide Law School’s LLB, Nottingham Law School in Nottingham Trent University, and Osgoode Hall Law School, Ontario.

45 This was also proven in the independent evaluation of the heuristic in the University of New Hampshire School of Law’s Daniel Webster Scholar Honors Program, which, if students complete it, constitutes an exemption from most of the New Hampshire Bar Exam. See Alli Gerkman and others, ‘Ahead of the Curve. Turning Law Students into Lawyers. A Study of the Daniel Webster Scholar Honors Program at the University of New Hampshire School of Law’ (Institute for the Advancement of the American Legal System 2015).

46 Eileen Fry, Jenny Crewe and Richard Wakeford, ‘The Qualified Lawyers Transfer Scheme: Innovative Assessment Methodology and Practice in a High Stakes Professional Exam’ (2012) 46 The Law Teacher 132.

47 P Maharg, Transforming Legal Education: Learning and Teaching the Law in the Early Twenty-First Century (Ashgate Publishing 2007).

48 Basil Bernstein and Joseph Solomon, ‘“Pedagogy, Identity and the Construction of a Theory of Symbolic Control”: Basil Bernstein Questioned by Joseph Solomon’ (1999) 20 British Journal of Sociology of Education 265.

49 Quoted in Maharg (n 47) 11.

50 ibid 175.

51 For information on the SIMPLE (SIMulated Professional Learning Environment), see <http://simplecommunity.org>.

52 Karen Barton and Fiona Westwood, ‘From Student to Trainee Practitioner – A Study of Team Working as a Learning Experience’ (2006) Web Journal of Current Legal Issues <www.bailii.org/uk/other/journals/WebJCLI/2006/issue3/barton-westwood3.html>.

53 The feasibility and cost of setting up such a structure of learning and assessment is addressed in Karen Barton, John Garvey and Paul Maharg, ‘“You Are Here”: Learning Law, Practice and Professionalism in the Academy’ in Zenon Bankowski, Maksymilian Del Mar and Paul Maharg (eds), The Arts and the Legal Academy: Beyond Text in Legal Education (Ashgate Publishing 2012).

54 For a graphic example, see the attempt by Rameet Chawla to change user behaviour around the like/dislike algorithm on Instagram. His app, called Lovematically, automatically ‘liked’ every picture that arrived in his feed. When he ran the app on his account as an individual experiment, Chawla discovered that his follower profile massively increased over a short period of time as others reciprocated with likes and followed him. What happened next, as described by the journalist Adam Alter, is a lesson in corporate control of user behaviour:

On Valentine’s Day 2014, Chawla allowed 5,000 Instagram users to download a beta version of the app. After only two hours, Instagram shut down Lovematically for violating the social network’s terms of use.

Chawla makes an interesting analogy:

‘I knew way before launching it that it would get shut down by Instagram,’ Chawla said. ‘Using drug terminology, you know, Instagram is the dealer and I’m the new guy in the market giving away the drug for free.’ Adam Alter, ‘How Technology Gets Us Hooked’ The Guardian (London, 28 February 2017) <www.theguardian.com/technology/2017/feb/28/how-technology-gets-us-hooked> accessed 3 January 2017.

55 Forrest Briscoe, Marion Brivot and Wenpin Tsai, ‘Don’t Talk to Strangers? Technology-Enabled Relational Strategies and Value Creation’ (2015) 2015 Academy of Management Annual Meeting Proceedings 146.

56 Maharg (n 47).

57 Philip Leith, ‘Legal Expertise and Legal Expert Systems’ (1986) 2 International Review of Law, Computers & Technology 1.

