Cost-effective computer tutorials
Alan L Tyree

Abstract

A new form of computer tutorial system was tried in International Law at the University of Sydney in the first semester of 1992. The form and theory of the tutorials are described, together with the outcome of the trial.

The Course

International Law is a compulsory course usually taken in the 4th year by Sydney University Combined Law students or in the 2nd year of the Graduate stream. It is a "two unit" course which means that it normally meets twice a week. Each meeting is two hours in length, and the course has traditionally been taught using lecture methods supplemented by eight one-hour tutorial periods. The total duration of the course is one semester.

The course covers both private and public international law. There has been an attempt in recent years to teach the two topics in an "integrated" fashion. The tutorials have generally not reflected this integrated approach. There have been four tutorials primarily devoted to private international law topics and four devoted to public international law.

The total enrollment in the course is approximately 260 students. There are three different groups, and in accordance with Faculty policy at the University of Sydney each group may be examined separately if the teachers so desire. The aim of this policy is to permit and encourage diversity of approach to subject matter.

In the first semester of 1992, two of the three groups were taught jointly. The teachers of these two groups agreed to a pilot study which would use the CRES computer examination system (see below) as a vehicle for computer based tutorials in the subject.

The Computer Tutorial Experiment

Students in these two groups were offered a choice of ordinary tutorials or computer based tutorials. They were given a comprehensive demonstration of the computer system, so that their choice would be an informed one.

Of the 154 students present at the time when the choices were made, 70 chose the computer tutorials. 166 students sat the first examination in the course, 73 of them having participated in the computer tutorials.

The computer tutorials were structured so as to parallel as closely as possible the existing tutorial structure. Each computer tutorial was on the same topic as the corresponding traditional tutorial, and the design of the computer tutorials attempted to ensure that the time required was approximately the same as for the corresponding traditional tutorial.

Why "Cost effective"?

CRES tutorials (described in detail below) do not follow the traditional style of computer based teaching (CBT). That traditional style might be called "pseudo-Socratic". A certain amount of information is provided to the student and a question (usually multiple choice) is then asked. On the basis of the response to the question, the system then presents the next unit of information.

The problem is that to be effective, the builder of the traditional system must anticipate all possible responses that the user might give. Not only that, but the builder must anticipate the reason for the response since the system must "branch" on the basis of the response.

This traditional approach to CBT is very demanding. Research in the area repeatedly shows that 100 - 400 hours of work is required to build a one hour tutorial. Further, the construction methods are such that modification is nearly as expensive as the original construction, a devastating prospect if we are considering building good tutorial systems for dynamic areas of law.

By contrast, a good CRES question requires about one and a half hours to build. A one hour tutorial requires seven or eight CRES questions. The tutorials are "modular" in the sense that a change in the law will usually require change only to a few questions in the tutorial, and these changes may often be minor ones which can be accommodated with a few minutes work.
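Using the figures quoted above, the authoring-cost advantage can be sketched with a little arithmetic (the hourly figures are from the text; the ratios are for illustration only):

```python
# Rough authoring-cost comparison for a one-hour tutorial,
# using the figures quoted in the text.

# Traditional CBT: 100-400 hours of work per one-hour tutorial.
cbt_low, cbt_high = 100, 400

# CRES: about 1.5 hours per question, 7-8 questions per tutorial.
cres_low = 1.5 * 7    # 10.5 hours
cres_high = 1.5 * 8   # 12.0 hours

print(f"Traditional CBT: {cbt_low}-{cbt_high} hours")
print(f"CRES tutorial:   {cres_low}-{cres_high} hours")
print(f"CRES is roughly {cbt_low / cres_high:.0f}x to {cbt_high / cres_low:.0f}x cheaper")
```

On these figures a CRES tutorial costs roughly an order of magnitude less to build than the cheapest traditional CBT tutorial.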

The Theory of the Tutorials

The tutorials are based on ideas developed by the educational theorist B S Bloom. He and his colleagues at the University of Chicago developed teaching methods which were the first to yield measurable improvements in student performance. At about the same time F S Keller was developing his "personalised system of instruction" which also showed dramatic improvements in student performance. The story begins in 1968.

Dubin & Taveggia 1968

In 1968, Dubin & Taveggia published the results of a study which made a detailed examination of 50 years of research into teaching methods. The methods studied and compared included lectures, various forms of discussion, lectures plus tutorials, and self-study programs. Their conclusions are disturbing:

"In the foregoing paragraphs we have reported the results of a reanalysis of the data from 91 comparative studies of college teaching technologies conducted between 1924 and 1965. These data demonstrate clearly and unequivocally that there is no measurable difference among truly distinctive methods of college instruction when evaluated by student performance on final examinations." Dubin, R and Taveggia, T, "The Teaching-Learning Paradox", Center for the Advanced Study of Educational Administration, University of Oregon, 1968, at p35.

Is Law Teaching Different?

Many law teachers seem to believe that the subject matter of law is so different from other subjects that the results of the Dubin and Taveggia study do not apply to the teaching of law. There is little experimental evidence, but what there is gives little comfort to this parochial attitude: Teich, P "Research on American Law Teaching: Is there a case against the case system?" 35 J Legal Education 167 (1986).

Mastery Learning Models

At about the time when the Dubin and Taveggia study appeared, a number of research papers began appearing which identified teaching methods which did consistently result in an improvement in student performance on final examinations. The methods are known collectively as "mastery learning" models. The salient characteristics of the method are that the students are given very precise information on what they are expected to learn and they are tested regularly to ascertain if they have in fact met the stated objectives.

Bloom and his associates at the University of Chicago have identified a number of factors which may be manipulated in various teaching models and have measured the "effect size" of successful manipulation of the variables. The interested reader should consult Bloom, BS, "The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring" (1984) Educational Researcher 4-16. The paper identifies mastery learning as the single most influential factor in improving student performance.

Keller's influential paper "Good-Bye Teacher…" (1968) Journal of Applied Behavior Analysis 79-89 appeared in the same year as Dubin and Taveggia. Keller's method marries mastery learning with reinforcement learning theory. The essential features of Keller's method are self-pacing, unit mastery and positive rewards for achievement. See Rawson and Tyree "Fred Keller Goes to Law School" (1990-91) 2 Legal Education Review 253 for a discussion of a law school implementation of the Keller Plan. See also Tyree and Rawson "Fred Keller Studies Intellectual Property", paper presented to the Intellectual Property Interest Group of this conference.

Both Bloom's mastery learning models and Keller's method show significant increases in student performance when measured by student performance in the final examination. In both cases, the increase in performance is independent of student ability in the sense that good students and poor students both improve by about the same percentages. For both methods, the advantage over students taught by "ordinary" methods increases with time: Kulik, Kulik and Cohen, "A Meta-Analysis of Outcome Studies of Keller's Personalized System of Instruction" (1979) 34 American Psychologist 307; Bloom, BS, "The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring" (1984) Educational Researcher 4-16.

Why does it work?

It is not magic. Student performance improves as a direct result of a change in the behaviour of both teacher and student. For the teacher, the requirement of expressing clearly defined objectives results in better planning and more focussed teaching. For the student, the objectives clarify the task and the tests guard against the lack of self-assessment skills. The feedback from the tests assists the student to remedy deficiencies early before being trapped by "incremental ignorance".

We did not have the time or the resources to implement either a full scale mastery learning or a Keller Plan course in International Law. However, we hypothesized that providing regular testing would achieve some of the benefits of those methods. Rather than impose the testing model on all students, we believed that it would be viable and useful to offer it as an alternative to the existing tutorial program. This was done for both pedagogical and practical reasons. The pedagogical reason is that we believe that students learn in different ways, and that a single teaching method should not be imposed on all students unless there is very good reason for doing so.

The practical reason for making the system optional relates to Faculty politics. The Keller Plan experiments have not been warmly received by some students and staff, even though the overwhelming response from the participants in the Keller Plan courses has been positive. Some of the reasons for this are discussed below, but we reasoned that there could be little objection to offering the CRES tutorials as an option. There would be a valuable side benefit in that we could compare the performance of the "computer" students with the "human" students.

How should it be done?

In order to understand the details of the CRES tutorials, it is valuable to make a short digression into the theory of educational testing.

Taxonomy of testing

Formative and summative assessment

"Formative assessment" is assessment which is used for the sole purpose of assisting the student to determine areas of weakness. It is not part of the final assessment mark. The theory is that students lack the critical ability to assess their own knowledge. See Boud and Tyree "Self and Peer Assessment in Education: a preliminary study in Law" (1980) 15 Journal of the Society of Public Teachers of Law 65-74; Rawson and Tyree, "Self and Peer Assessment in Legal Education" (1989) 1 Legal Education Review 135.

"Summative assessment" is the assessment at the end of the course, or at the end of a segment of the course, which is intended to be a measure of performance. It is the assessment which is used to determine the final course mark.

Norm referenced and criterion referenced assessment

Norm referenced testing is designed to discriminate between students. Most examinations in higher education are norm referenced.

Criterion referenced testing is designed to check that a student has obtained a certain level of development in an area of knowledge. Criterion referenced testing is most suitable for formative testing.

See Heywood, J Assessment in Higher Education, 2nd Edition, Wiley, New York, 1989 for a general discussion of norm and criterion referenced testing.

Types of questions

Multiple choice

Most law teachers believe that MCQs cannot test the subtleties of the law. This belief is not justified in the light of educational testing literature: see Heywood, J Assessment in Higher Education, 2nd Edition, Wiley, New York, 1989; Ebel RL and Frisbie, DA Essentials of Educational Measurement, 4th Edition, Prentice-Hall, Englewood Cliffs, 1986. However, the construction of good MCQs is very labour intensive, so they are not likely to be used widely even if the prejudice against them could be overcome.

Short answer

The form of these questions is familiar. We are not generally accustomed to making them criterion referenced, but the knack is not difficult to learn. They are generally very much easier to write than MCQs, and they form a good basis for formative testing.

Computer administered short answer questions

While it is obvious that a computer can offer advantages in the administration of MCQs, it is less clear how the computer can assist with short answer questions. This section describes the CRES and SAGES systems.

What is a CRES question?

CRES is an acronym for Critical Review Examination System. CRES was developed as part of the examination system for the Keller Plan Project at the University of Sydney Law School.

The Keller Plan is a "test intensive" teaching method. A typical plan will require a student to pass 15-20 half-hour tests, but since there is no penalty for failure in the Keller Plan, each test must have three or four versions. In our early experiments, we used multiple choice questions, but these proved unsatisfactory for several reasons, not the least being the labour required to construct effective questions.

While the Keller Plan was being used in Technology Law, other developments were taking place. In 1991, Christine Chinkin and Alan Tyree tried using short answer questions as a form of tutorial in International Law. The method used was primitive: the student would answer the question, then the screen would split and the student had the opportunity to compare his or her answer with a model answer prepared by the teachers. In spite of the simplicity of the method the student feedback was generally positive, and we concluded that the tutorial form was worth pursuing.

The success of the Chinkin/Tyree experiment encouraged us to consider further the use of short answer style questions in the Keller Plan. The problem, of course, was to find some way of marking the questions since there is no presently known method of programming a computer to understand free form language. The answer was simple and effective: CRES essentially requires a student to mark his or her own answer. The key is to ask the student a number of multiple choice questions not about the primary subject matter, but about the answer given by the student to a short answer question.

The CRES system presents the student with a question which requires a short answer, usually eight to ten lines. When the student feels that a suitable answer has been written, the answer is "submitted". The answer is then displayed at the top of the screen and a series of questions, the Critical Review questions, is asked. The Critical Review questions are similar to notes that would be given to a marker. They should identify all of the relevant issues that the student should have discussed.

Following the Critical Review the student is told whether the question is passed or failed. In either case, tutorial feedback is provided. Particularly in the case of a failed question, this feedback should identify the essential issues of the question and indicate which material should be studied further.
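The cycle just described can be sketched in outline. The sketch below is hypothetical (the names, the pass mark and the sample content are invented for illustration; the actual CRES system runs under UNIX and is not reproduced here):

```python
# Sketch of one CRES question cycle as described above.
# All names and data are hypothetical, not the actual CRES code.

def run_cres_question(short_answer, review_responses, pass_mark, feedback):
    """One CRES cycle: the student writes a short answer, then marks it
    against the Critical Review items (True = the answer covered that
    issue).  Returns pass/fail plus the tutorial feedback for that
    outcome."""
    issues_covered = sum(1 for covered in review_responses if covered)
    passed = issues_covered >= pass_mark
    return passed, feedback["pass" if passed else "fail"]

# Hypothetical example: four Critical Review items, three needed to pass.
passed, note = run_cres_question(
    short_answer="Jurisdiction depends on service within the forum...",
    review_responses=[True, True, False, True],
    pass_mark=3,
    feedback={"pass": "Well done; compare your answer with the model.",
              "fail": "Revisit the material on jurisdiction."},
)
print(passed, note)
```

The important design point is visible in the sketch: the computer never tries to understand the free-form answer; it only tallies the student's own Critical Review responses.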

This takes longer to describe than it does to demonstrate. Appendix A contains a sample CRES question. This question will be demonstrated to the Interest Group during the presentation.

CRES is presently running under UNIX on "keller@sulaw.law.su.oz.au", the Faculty SUN Sparc II. We have demonstration systems running on PCs under MS-DOS and expect to have a "production" system available soon. We hope to be able to deliver the system eventually on bottom-of-the-line machines.

SAGES (Short Answer General Examination System)

The CRES system is adequate to administer purely formative assessment. The student can "cheat", but the only person injured is the student user. If CRES is to be used in summative assessment then some safeguard is necessary.

We originally developed the CRES system for use with Keller Plan courses which combine formative and summative assessment. The temptation to cheat on the CRES system is contained by using SAGES, an AI program which uses a collection of pass/fail models to classify a new answer. SAGES is used as a "watchdog" in the Keller Plan courses. It marks every question. In the event that the student marked CRES result differs from the SAGES result, the question is referred automatically to the teacher for adjudication.
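The watchdog protocol is simple enough to state as code. The sketch below shows only the adjudication rule described above (the interface is hypothetical; the SAGES classifier itself is an AI program and is not reproduced here):

```python
# Sketch of the SAGES "watchdog" adjudication rule described above.
# Hypothetical interface; the SAGES classifier itself is not shown.

def adjudicate(cres_self_mark, sages_mark):
    """Compare the student's CRES self-mark with the SAGES
    classification.  If they agree, the result stands; if they
    differ, the question is referred to the teacher."""
    if cres_self_mark == sages_mark:
        return cres_self_mark          # marks agree: result stands
    return "REFER_TO_TEACHER"          # marks differ: teacher decides

print(adjudicate("pass", "pass"))
print(adjudicate("pass", "fail"))
```

Because disagreement always escalates to a human, a student who "cheats" on the self-marking gains nothing: the inflated self-mark simply triggers referral.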

SAGES is currently operating under UNIX. There is no immediate likelihood of it being incorporated into the PC version of the system.

Results in International Law

The International Law CRES tutorial system consists of eight CRES examinations. Each examination contains between six and ten CRES questions. Students may use the system at any time when the Faculty Computer Laboratory is supervised. During the course of the experiment, the Laboratory was supervised four hours a day, five days a week.

Private International Law was examined separately in the two sections which participated in the CRES experiments. The examination was conducted immediately before the Easter recess and after the students had completed four tutorial sessions.

The CRES students did slightly better in the examination, although the difference was not statistically significant. The average mark for the "standard" students was 63.39 with a standard deviation of 12.65. The average mark for the CRES students was 65.36 with a standard deviation of 11.70.
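The comparison can be reproduced approximately from the reported figures. The group sizes used below (93 standard, 73 CRES) are inferred from the enrolment figures given earlier and are an assumption for this sketch:

```python
import math

# Reported examination results (means and standard deviations from
# the text; group sizes are inferred and are an assumption here).
m_std, sd_std, n_std = 63.39, 12.65, 93     # "standard" tutorial group
m_cres, sd_cres, n_cres = 65.36, 11.70, 73  # CRES group

# Welch's two-sample t statistic (unequal variances assumed).
se = math.sqrt(sd_std**2 / n_std + sd_cres**2 / n_cres)
t = (m_cres - m_std) / se
print(f"t = {t:.2f}")   # about 1.04, well short of the roughly 1.97
                        # needed for significance at the 5% level
```

On these assumed group sizes the two-point difference in means is comfortably within sampling error, consistent with the conclusion stated above.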

The results of the final examination were not available at the time of writing. A student evaluation of the CRES system is planned, but again the results are not available at the time of writing.

These results need to be treated with a great deal of care. Both the "human" and the CRES tutorials were optional, and detailed participation records were not kept. A student was included in the CRES group if he or she had used the system at least once.

Even assuming that the groups are statistically indistinguishable, it must be remembered that both groups are self-selected. There is no warrant for jumping to the conclusion that CRES tutorials are as good for everyone.

What kind of student selected the CRES tutorials and why? We do not know at this time. It may be that the student evaluation will throw some light on this. Informal discussions with the CRES students provided some clues: "timetable freedom"; "Some students dominate the normal tutorial"; "I am able to sit in an ordinary tutorial without doing any work"; "Comfortable with computers"; "Just thought I would like to try something different". Colleagues opposed to the idea of computer assisted learning have other explanations: "did not like to get involved"; "poor verbal skills"; "too lazy to prepare for tutorials".

Consequences

We judge the CRES tutorials to be a success. There is a certain amount of Faculty suspicion and some outright opposition. There is also some opposition from the student politicians which appears to be based on a fear that we are planning on replacing human tutors with computers.

Much of the suspicion and opposition appears to be based on the informal comparison of computer tutorials (of any kind) with human tutorials. We are prepared to accept that human tutors are better than computers, although we note that there is not the slightest shred of evidence that this is so if the tutorial groups are of the size that we have at the University of Sydney.

In our view, the argument based on this comparison simply misses the point. The choice is not between human tutors and computers. It is between computer tutorials and no tutorials at all. It is this latter situation which is the norm in most of our classes at the University of Sydney.

What our efforts have shown is that the computer CRES tutorials are useful and that we can afford them. This means that we can now proceed to introduce them in classes where we presently cannot afford to provide tutorials in any other form. In those classes where we do provide human tutorials, we will encourage CRES tutorials to be offered as an option to students who prefer them. Such an option has the dual advantage of offering students a greater choice and reducing the size of the "human" tutorials.

Acknowledgements

The Keller Plan Project and the development of the CRES and SAGES software have been supported by generous grants from the Law Foundation of New South Wales. The Keller Plan Project is managed by Alan Tyree and Shirley Rawson.

The International Law tutorial option is partially funded by a Law School/Law Foundation grant to the following applicants: Hutchinson, Musgrave, Opeskin, Rawson, Rothwell, Skapinker, Tyree
