Hlavac, J. et al. Intake Tests for a Short Interpreter-Training Course: Design, Implementation, Feedback

Hlavac, J., et al. (2012). Intake Tests for a Short Interpreter-Training Course: Design, Implementation, Feedback. International Journal of Interpreter Education, 4(1).

This article describes an assessment test designed and implemented as a pre-training admission process for a short (30-hour) community interpreter training program in Victoria, Australia. The training was intended for speakers of “new and emerging languages” (p. 22) and was offered in two cities, to two groups of students. The trainers were not involved in the design or administration of the test. At the end of the training program, both trainers and trainees were asked to evaluate the intake test by completing a survey; three of the five trainers and 16 of the 25 trainees did so. The authors’ focus was on judging the authenticity of the test, that is, whether “the relationship between test contents and the elicitation of skill performance during the test are those skills that were the focus of post-test training.”

The test under discussion was seen as very important to the success of the training, as the sponsoring agency set very few restrictions on who might apply to participate. The training program did not expect potential trainees to have prior experience in interpreting or translation, or a minimum level of prior education. The test did not directly evaluate the testees’ capabilities in their language other than English (LOTE), but it did assess English skills (primarily listening and speaking, but also basic competence in reading and writing). The test was designed to evaluate testees’ functional capacity, that is, their ability to competently perform tasks in context.

The authors give a brief overview of previous publications on entrance tests for community interpreter training, of which there are, so far, not very many. This section (section 2) provides useful references for those wishing to read further on the subject, as well as a discussion of the different competencies that different authors see as relevant. The authors of this study also note the difficulty of language-proficiency testing for languages that have only recently been standardized or codified.

In sections 3 and 4, the authors outline the ten areas of focus of the test, the testers’ credentials, the methodology, and the content of the test. Of particular interest is the detailed explanation of the test’s content and delivery. The authors provide many examples of test questions and explain the rationale behind the items and their presentation. Some questions were asked and answered orally; others were responded to in writing. A series of questions is designed to elicit information about language skills (speaking, understanding, reading, and writing); these are presented in a variety of ways, which allows the testers to make inferences about abilities in the LOTE. Other sections ask about education, employment, interest in and knowledge or experience of interpreting, and motivation. The test includes basic exercises in reading for understanding and writing, and more extensive exercises in listening, note-taking, and memory. I very much appreciated the explanations of the reasons both for including the various questions and for the design of the items. The authors comment on the relationship between each item and related interpreting skills and competencies.

The final sections report on the criteria used to select participants from the group of testees and on the post-course evaluation survey completed by trainers and trainees. They also review the content and methodology of the training course.

The post-course survey asked trainers and trainees to rate the importance of various parameters (such as education, language skills, knowledge of terminology, and motivation) for admission to the training course. I felt that the section on conclusions and findings stopped short of its potential: the authors report their findings but do not state to what extent they felt the test met the stated benchmark of authenticity. They do report on which skills and knowledge their sample of trainers felt were more and less important for trainees to have before beginning the course, but it is important to keep in mind that this was a very small sample taken from one very specific context. The conclusion section would also benefit from suggestions for further research.

This is a very interesting and thought-provoking study that gave me a great deal to think about. I teach in a language-neutral setting, and we are struggling with how to screen potential students before entrance to the program. The authors’ examples of ways to elicit information about proficiency in a LOTE that the trainer/tester does not command were extremely helpful to me.
