Investigating the effects of reducing linguistic complexity on EAL student comprehension in first-year undergraduate assessments

https://doi.org/10.1016/j.jeap.2019.100804

Abstract

Academic writing across disciplines is often linguistically complex, characterized by abstract ideas densely packed into nominal groups (Biber & Gray, 2010; Halliday & Martin, 1993; McCabe & Gallagher, 2008), along with infrequent lexis and content requiring specific cultural knowledge. This linguistic complexity presents a significant comprehension challenge and widens the performance gap between English as an additional language (EAL) students and their non-EAL peers (Abedi & Gándara, 2006). This study presents the outcomes of a collaborative project between Psychology, Sociology, and EAP instructors teaching within a pathway program at a Canadian university that combines first-year university courses with language-linked EAP courses. One key outcome of this collaboration has been greater awareness of the comprehension challenges that assessments pose for students, particularly multiple choice question (MCQ) exams. To investigate the effects of linguistic complexity, the research team analyzed whether unpacking MCQs, by reducing the linguistic complexity of test questions, improves comprehension for EAL students. Our findings indicate that EAL students are more likely to score higher on unpacked assessment questions, highlighting the importance of reducing the complexity of assessment language to provide linguistic space for novice students to demonstrate their knowledge of disciplinary content.

Introduction

The transition into first-year university can be challenging, with students adjusting to greater academic demands, managing their own time, forming social support networks, and generally taking more responsibility for their learning. This transition may be particularly challenging for first-year international students, especially English as an additional language (EAL) students (Andrade, 2006; Hyland, 2004). Much of this challenge stems from the intense comprehension demands of academic language. Research has shown that academic writing across disciplines is linguistically complex (Biber & Gray, 2010; McCabe & Gallagher, 2008; Staples et al., 2016). Previous studies in this area have mainly analyzed clausal complexity, with a focus on the measurement of T-units: an independent clause together with any dependent clauses attached to it (Ellis & Yuan, 2004; Vyatkina, 2012). However, recent research has shifted to phrasal complexity, based on evidence that academic writing is characterized by nominal group density rather than clausal complexity (Ansarifar, Shahriari, & Pishghadam, 2018). This work places high importance on nominalization (Fang et al., 2006; Halliday & Martin, 1993) and shows how abstract ideas are densely packed into nominal groups, making meaning less explicit for the reader (Biber & Gray, 2010; McCabe & Gallagher, 2008; Staples et al., 2016).

This linguistic complexity in academic discourse not only presents a significant comprehension challenge but also widens the performance gap between EAL students and their non-EAL peers on assessments (Abedi & Gándara, 2006; Abedi & Lord, 2001). As student success in academia requires an ability to understand and make meaning within specific disciplinary contextual practices, instructors face the challenge of making highly abstract written concepts intelligible to novice students (Moore, 2007). However, most studies of clausal and phrasal complexity have focused on student writing, specifically the development of complexity in students' own texts, rather than on their comprehension of complex language.

In response to this challenge, a growing number of higher education institutions have attempted to support EAL students through a wide variety of language-supported educational programs. This support ranges from a "study skills" approach, generally centred around short workshops and courses often delivered by university support centres, to a more "academic socialization" approach guided by an explicit instructional focus on teaching learners the key features of the relevant genres and text types they will encounter (Hyland, 2006), often in the form of full-semester university courses. This socialization may take the form of EAP courses that seek "to prepare students for a wide variety of target situations" or English for specific purposes offerings, which "tend to be designed in consultation with discipline specialists and are informed by a genre analysis of relevant assessment tasks" (Storch, Morton, & Thompson, 2016, p. 479).

This study was conducted within an innovative credit-bearing pathway program at a university in Western Canada that combines first-year university courses in various disciplines relevant to students' programs with language-linked EAP courses. Within this program, EAP instructors work closely with instructors across disciplines to deliver custom-designed language courses that provide ongoing language support relevant to the specific courses and disciplines students study. The EAP courses are guided by a Systemic Functional Linguistics (SFL) framework. Motivated by the influence and pedagogical applicability of SFL and Halliday's functional grammar in educational contexts, the courses focus on register to help students with the challenges posed by disciplinary discourse (Moore, 2007). The application of SFL theory, along with the close collaboration between instructors in the program, aims to help students identify, comprehend, and adopt disciplinary practices that allow them to acquire and demonstrate their knowledge of course content in the various disciplines they study (Ferreira & Zappa-Hollman, 2019).

One key outcome of these collaborations has been a greater awareness of the comprehension challenges that assessments pose for students, particularly in the case of multiple choice question (MCQ) exams. Psychology and Sociology instructors teaching within the program have reported that the EAL students from our transitions program require additional time to complete MCQ exams, ask more clarifying questions, and score lower on such exams compared to students admitted directly into first-year UBC programs. These observations of differences in student performance are supported by previous research suggesting that language use in assessments above the proficiency level of learners can pose a cognitive burden and lead to lower student scores, specifically as a result of the linguistic complexity of test items (Abedi et al., 2004; Abedi & Lord, 2001; Parkes & Zimmaro, 2016). This research also suggests that reducing the linguistic complexity of test items improves student performance, helping to decrease the performance gap between EAL and non-EAL students. In their evaluation of linguistic complexity in MCQs assessing elementary school students' mathematics knowledge, Abedi and Lord (2001) found that student performance improved when the complexity of the language used in the MCQ items was reduced, especially in the case of EAL and lower-performing students.

To reduce this cognitive burden in MCQ exams, Parkes and Zimmaro (2016) call for language use "appropriate to your students' academic level and position relative to the profession" (p. 24), since an assessment of content knowledge is unfair if its reading level is beyond the students. Despite the call for increased language-based accommodations on assessments for EAL students (Rivera, Stansfield, Scialdone, & Sharkey, 2000), Abedi et al. (2004) found that only a small percentage of test accommodations included reductions in linguistic complexity. Expanding on previous research, the present study reports the outcome of a collaborative project between the disciplinary instructors teaching first-year Psychology and Sociology courses and the EAP instructors teaching within the university program. Given the prominence of the MCQ in higher education assessment (Haladyna, Downing, & Rodriguez, 2002) and the previously discussed research connecting a performance gap in assessments to complex language use, the research team analyzed whether unpacking MCQs by reducing the linguistic complexity of test questions improves student performance on test items.

Matruglio, Maton, and Martin's (2013) work on semantic gravity shows that instructors are often already well practiced at helping students unpack disciplinary content. Instructors frequently help students unpack key course concepts by providing concrete examples and explanations of abstract, discipline-specific concepts through more colloquial classroom language, which tends to be more easily applied and understood. In contrast to the academic written discourse students are expected to engage with, language use around key concepts in the classroom is typically identified and unpacked to aid understanding and learning. Given this natural attention to more unpacked forms of language in the classroom, along with the relative lack of emphasis on reducing the linguistic complexity of written assessments (Abedi et al., 2004), the present study rewrites MCQs in order to make the meaning of each question more accessible to all students, so that they can better understand the question and thus better demonstrate their content knowledge.

This intervention must be understood within the context of our program, particularly with the academic language support in mind. The foremost goal of this study is to determine the effect of this intervention on student test performance; however, the broader goal of our program is to produce learners who are capable not only of comprehending complex academic discourse but also of producing it themselves. In the next section, we review the three main linguistic interventions used in this study: reducing nominal group complexity, substituting infrequent vocabulary, and making cultural references more explicit. We also discuss how this broader goal of our program is addressed in the academic language courses.

Clear linguistic differences exist between spoken and written language (Biber et al., 2011; Halliday, 1987). Halliday (1987) states that spoken language is generally characterized by higher clausal complexity, whereas writing relies on higher lexical density, communicating messages through more frequent use of longer nominal groups. Whereas spoken language tends to be organized around the verbal group, resulting in greater clausal complexity, Halliday (2007) explains that the nominal group, which "construes reality as entities (objects, including institutional and abstract objects and their quantities, qualities and types)", is the central focus of written language (p. 109). This focus, particularly in the case of academic writing, results in a more abstract representation of ideas through more sophisticated use of nominal groups (McCabe & Gallagher, 2008).
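As a rough illustration of lexical density, the following sketch (an illustration we provide here, not part of the study's method) computes one common operationalization: the proportion of content words among running words. The stopword list is a small, hypothetical stand-in for a full inventory of grammatical items; a real analysis would use a fuller list or a part-of-speech tagger.

```python
import re

# Small, illustrative stand-in for a full inventory of grammatical (function)
# words; invented for this sketch.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "in", "on", "at", "to", "and", "or", "but", "so",
    "then", "is", "are", "was", "were", "be", "been", "do", "did", "it", "we",
    "you", "they", "this", "that", "what", "about", "with", "as", "by", "for",
}

def lexical_density(text: str) -> float:
    """Proportion of running words that are content (lexical) words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens) if tokens else 0.0

spoken = "So what we did was we looked at it and then we talked about it."
written = "Subsequent analysis of the data revealed significant performance differences."
print(f"spoken:  {lexical_density(spoken):.2f}")   # ~0.13
print(f"written: {lexical_density(written):.2f}")  # ~0.78
```

The contrast mirrors Halliday's observation: the written sentence packs most of its meaning into content words inside nominal groups, while the spoken one distributes meaning across clauses built largely of grammatical items.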

When analyzing nominal groups using functional grammar from an SFL perspective, Halliday and Matthiessen (2014) direct attention to how a head noun (Thing) can be pre- and post-modified. The pre-modification structure of a nominal group can consist of Deictics (determiners), Numeratives (numerals), Epithets (most often adjectives), and Classifiers (most often nouns or adjectives), while Qualifiers, such as prepositional phrases, embedded clauses, and embedded non-finite clauses, can serve as post-modification of the head noun, packing additional information into the nominal group. Fig. 1 shows how each of these elements can be distinguished within a densely packed nominal group.
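In the same spirit as Fig. 1, the sketch below decomposes a hypothetical densely packed nominal group into these functional elements. The example phrase is invented for illustration and is not drawn from the study's materials.

```python
# Functional decomposition of a hypothetical densely packed nominal group,
# following Halliday & Matthiessen's (2014) labels.
analysis = {
    "Deictic":    "those",                        # determiner
    "Numerative": "two",                          # numeral
    "Epithet":    "challenging",                  # adjective expressing a quality
    "Classifier": "introductory psychology",      # nouns sub-classifying the Thing
    "Thing":      "concepts",                     # head noun
    "Qualifier":  "covered in the first lecture", # post-modifying non-finite clause
}

phrase = " ".join(analysis.values())
print(phrase)  # the reassembled nominal group
for element, realization in analysis.items():
    print(f"{element:>10}: {realization}")
```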

As readers progress from primary, elementary, and high school through undergraduate study toward expertise in a specific discipline, they encounter increasingly dense nominal groups, which place progressively greater demands on comprehension (Thompson, 2014). This complexity in academic texts, a result of high lexical density and nominalization, poses significant reading comprehension challenges for EAL students as meaning becomes difficult to unpack (Ventola, 1996).

Along with the difficulty of unpacking dense nominal groups, international EAL students face challenges deciphering infrequent lexis and content requiring specific cultural knowledge. The greater amount of unfamiliar vocabulary in academic texts increases reading comprehension difficulties, which may result in an inability to identify and understand the main ideas and significant details of a text (Hu & Nation, 2000). Hu and Nation suggest that learners must understand as much as 98% of a written text's words in order to adequately comprehend it. In their review of the vocabulary literature, Hacking and Tschirner (2017) explain that it is common for learners to progress from higher- to lower-frequency words as their reading comprehension improves, citing "robust correlations between vocabulary knowledge and reading proficiency" (p. 513) and concluding that "authentic literary texts are thus mainly beyond the reach of all but the most advanced students" (p. 515). In response to this challenge, Hacking and Tschirner note recent efforts to select vocabulary appropriate to student proficiency levels in order to increase comprehension and advance reading proficiency. Given that the program's minimum entry requirement is 70 on the Test of English as a Foreign Language (TOEFL) or an overall 5.5 on the International English Language Testing System (IELTS), these EAL students are still in the earlier stages of developing their academic vocabulary.1
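Hu and Nation's 98% threshold can be operationalized as a simple coverage calculation. The following sketch (with a hypothetical known-word set; in practice this would come from a frequency list matched to the learner's level) estimates what proportion of a text's running words a given learner can be assumed to know:

```python
import re

def lexical_coverage(text: str, known_words: set[str]) -> float:
    """Proportion of running word tokens in `text` found in `known_words`."""
    tokens = re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())
    if not tokens:
        return 0.0
    return sum(t in known_words for t in tokens) / len(tokens)

# Hypothetical known-word set; a real one would hold thousands of word families.
known_words = {"the", "study", "of", "language", "is", "central", "to", "education"}
text = "The longitudinal study of language acquisition is central to education."
coverage = lexical_coverage(text, known_words)
print(f"coverage: {coverage:.0%}")
if coverage < 0.98:  # Hu & Nation's (2000) adequate-comprehension threshold
    print("Text likely exceeds this reader's comfortable comprehension level.")
```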

In addition, instructors often use cultural references as "hooks" to exemplify and situate course concepts in hopes of intriguing students; for international students, however, these cultural references can be an additional barrier to decode and interpret (Lee, 1997). These references often take the form of infrequent, context-sensitive vocabulary, compounding the challenge of understanding the lexical items themselves with that of comprehending the surrounding cultural context, which can greatly hinder a learner's ability to make meaning (Hsu & Yang, 2013). Duff and Zappa-Hollman (2012) emphasize the significant cultural and linguistic background knowledge required to understand popular culture references in the language classroom and the importance of instructors being aware of the comprehension challenges these pose for EAL learners and newcomers. Furthermore, international students often do not recognize that their confusion stems from a lack of contextual knowledge, instead attributing it to their language skills (Andrade, 2006), which may prevent them from asking for clarification about cultural references.

To investigate the effects of linguistic complexity on MCQ test performance for EAL students, five complex MCQs were identified and unpacked in midterm and final tests over two university terms and three first-year courses: two Psychology courses and one Sociology course. Three criteria were identified as contributing to the complexity of MCQs: nominal group complexity, vocabulary frequency as determined through the Corpus of Contemporary American English (COCA), and cultural knowledge specific to North America. This cultural knowledge was identified by two EAP instructors in the program with 7–10 years of experience teaching EAL learners in higher education institutions. Once the complex MCQs were identified and unpacked, students in the Psychology and Sociology courses received both complex (original) and unpacked (revised) versions of MCQs on each of their exams throughout the two terms. Our goal was to determine whether such unpacking affects comprehension, and therefore performance, on unpacked test items. This study thus addresses the following research question:

To what extent does unpacking multiple choice questions support EAL students’ ability to comprehend and demonstrate their knowledge?
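As one concrete way to apply the vocabulary-frequency criterion described above, the sketch below flags words in an MCQ stem that fall outside a given frequency-rank cutoff. The file path, file format, and 5,000-rank cutoff are all assumptions for illustration: COCA frequency data must be obtained separately under its own license, and the study does not specify a numeric threshold.

```python
import csv
import re

def load_rank_table(path: str) -> dict[str, int]:
    """Load a word -> frequency-rank table from a two-column CSV (word,rank).
    The path and format are hypothetical; frequency data derived from COCA
    must be obtained separately."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["word"].lower(): int(row["rank"]) for row in csv.DictReader(f)}

def flag_infrequent_words(stem: str, ranks: dict[str, int], cutoff: int = 5000) -> list[str]:
    """Return words in an MCQ stem rarer than the cutoff rank, or absent from
    the table entirely (treated as infrequent)."""
    tokens = set(re.findall(r"[a-z]+", stem.lower()))
    return sorted(t for t in tokens if ranks.get(t, cutoff + 1) > cutoff)

# Hypothetical usage:
# ranks = load_rank_table("coca_ranks.csv")
# flag_infrequent_words("Which ameliorative policy best exemplifies anomie?", ranks)
```

Flagged words would then be candidates for substitution with higher-frequency alternatives during unpacking, alongside the nominal group and cultural knowledge criteria, which require human judgment.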


Context & participants

In order to answer this research question, we collected data from a group of first-year EAL international students completing an Arts or Management degree who attended a specialized pathway program at the University of British Columbia in Western Canada. Students attending this program are academically strong and meet the competitive scholastic requirements for entry to the university. They do not, however, meet the linguistic requirements for direct admission, typically scoring a minimum of 70 on the TOEFL or an overall 5.5 on the IELTS.

Results and discussion

Overall, results confirm our initial hypothesis that unpacking increases students' ability to demonstrate their knowledge by answering more MCQs correctly. On average, students were 8% more likely to answer an unpacked question correctly than its complex counterpart. These findings show that these EAL students are more likely to score higher on assessment questions that have been unpacked by EAP instructors to reduce linguistic complexity, indicating that unpacking improves comprehension of test items.
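The excerpt above reports an average 8% advantage on unpacked items. One plausible way to examine such a contrast at the student level is a paired comparison of per-student accuracy on the two item versions; the sketch below uses invented numbers and is not necessarily the analysis the authors used.

```python
import numpy as np
from scipy import stats

# Invented per-student accuracy values, for illustration only: proportion
# correct on the complex (original) and unpacked (revised) MCQ versions.
complex_scores  = np.array([0.55, 0.60, 0.48, 0.70, 0.62])
unpacked_scores = np.array([0.63, 0.66, 0.58, 0.75, 0.71])

# Paired comparison of each student's accuracy across the two item versions.
result = stats.ttest_rel(unpacked_scores, complex_scores)
mean_gain = (unpacked_scores - complex_scores).mean()
print(f"Mean gain on unpacked items: {mean_gain:.1%} "
      f"(t = {result.statistic:.2f}, p = {result.pvalue:.4f})")
```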

Conclusions and implications

Students' increased performance on unpacked MCQs highlights the importance of unpacking the dense academic language used in assessments, providing linguistic space for novice students to demonstrate their knowledge of disciplinary content. This study has several pedagogical implications for educators in both EAP and non-EAP contexts, as well as settings in which instructors collaborate across these disciplines. Two overarching questions arise from these findings.

Funding

This work was supported by the SoTL Seed program funded by the Institute for the Scholarship of Teaching and Learning (ISoTL) and the Centre for Teaching, Learning and Technology (CTLT) at the University of British Columbia.

Declaration of competing interest

None.


References (37)

  • D. Biber et al. (2011). Should we use characteristics of conversation to measure grammatical complexity in L2 writing development? TESOL Quarterly.
  • P. Duff et al. (2012). Using popular culture in language teaching.
  • R. Ellis et al. (2004). The effects of planning on fluency, complexity, and accuracy in second language narrative writing. Studies in Second Language Acquisition.
  • Z. Fang et al. (2006). Understanding the language demands of schooling: Nouns in academic registers. Journal of Literacy Research.
  • A.A. Ferreira et al. (2019). Disciplinary registers in a first-year program: A view from the context of curriculum.
  • J.F. Hacking et al. (2017). The contribution of vocabulary knowledge to reading proficiency: The case of College Russian. Foreign Language Annals.
  • T.M. Haladyna et al. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education.
  • M.A.K. Halliday (1987). Spoken and written modes of meaning.

Daniel Riccardi teaches EAP at the University of British Columbia, Vantage College, and received the 2019 Vantage One Teaching award. He holds an MA in Language and Literacies Education from the Ontario Institute for Studies in Education, University of Toronto, and has taught EAP in South Korea, Chile and Canada.

Jennifer Lightfoot teaches EAP at the University of British Columbia, Vantage College, holds an MA in English Language Teaching and Applied Linguistics from King's College London, and has taught within Canada and the UK.

Mark Lam is a Lecturer in the Department of Psychology at the University of British Columbia. His research interests include the scholarship of teaching and learning, and health psychology.

Katherine Lyon is an Instructor in the Department of Sociology at the University of British Columbia. She is a recipient of the Vantage One Teaching Award and the SAGE Teaching Innovations and Professional Development Award. Her research interests include experiential pedagogies, accessible assessment, and teaching with augmented reality.

Nathan Roberson is a PhD student in Measurement, Evaluation, and Research Methodology at the University of British Columbia. His research interests include measurement, econometrics, immigration, quality of life, and bilingual education. He holds an MA in International Relations from the Central European University in Budapest, Hungary.

Simon Lolliot is an Instructor in the Department of Psychology at the University of British Columbia. His research interests include diversity, intergroup contact, belongingness, and how these relate to success (pedagogical and otherwise) at university, as well as the scholarship of teaching and learning and educational leadership.
