USING MOBILE-BASED FORMATIVE ASSESSMENT IN ESL/EFL SPEAKING

Article History: Received January 2021; Revised January 2021; Published January 2021

With the widespread use of smartphones in and outside the classroom, mobile-based teaching and learning is drawing much attention and is now extensively practised across the globe. Using smartphones for assessment purposes is a more recent phenomenon, and researchers are still examining what processes mobile-based assessment tools may involve and what outcomes and challenges they present for teachers and students in terms of learning/teaching performance, motivation and attitudes. There is a good number of research studies on the use of Mobile Assisted Language Learning (MALL) or Mobile Learning (ML) in EFL or ESL classrooms, but little is known about mobile-based language assessment, especially mobile-based formative assessment (MBFA). Hence, this study attempts to shed light on MBFA by reviewing the recent literature on it and on its effective utilization in developing ESL/EFL speaking skills. This paper uses a qualitative research method that relies exclusively on the relevant secondary references/works available on the topic. The literature revealed that MBFA practices in ESL/EFL speaking classes are effective to a certain extent and that some tools and procedures appear more effective than others, depending on the design principles and strategies used by teachers or app developers.


INTRODUCTION
Since their invention in 2007, smartphones have taken over many roles of traditional Personal Computers (PCs) and laptops. Advanced technologies such as 5G networks, touch screens, photo and video options, code-reading capabilities, voice and image recognition, MP3/MP4 players, screen sharing, GPS, SMS, MMS, email, the internet, and mobile applications like Google Translate, YouTube, Facebook, and Web 2.0 resources have helped smartphones overcome the limitations PCs and laptops had in the late 1990s and early 2000s. Moreover, portability and affordability have made mobile devices more popular and given them an upper hand over PCs and laptops. Smartphones now appear more useful than PCs and laptops and are widely used as a popular educational tool in modern-day classrooms. Many research studies demonstrate that mobile-based teaching and learning is becoming immensely popular throughout the world: Wang and Smith's (2013) study on using mobile phones to teach reading and grammar; Li and Hegelheimer's (2013) research on "Grammar Clinics," a web-based mobile application used as an autonomous learning tool to help learners improve their English writing through self-editing; and Kim, Rueckert, Kim, and Seo's (2013) study on students' engagement in learning activities outside the classroom through their community of practice all speak to the extensive application of smartphones.
Besides the above uses, smartphones have also been used for assessment purposes in recent years. Tarighat and Khodabakhsh (2016) investigated the feasibility of WhatsApp on smartphones as a summative speaking assessment tool and found that students had mixed attitudes towards mobile-based summative assessment. Samaie, Mansurri, Nejad and Qaracholloo (2016) conducted a study with 30 English learners in Iran and found that most of the learners showed a negative attitude towards using WhatsApp on their smartphones as a self-assessment and peer-assessment tool. Laborda et al. (2014) carried out a dynamic assessment with a group of learners and proposed a powerful low-cost mobile-based assessment tool for the language paper of the college entrance exam. It is thus evident that mobile-based assessment is quite recent, and researchers are still examining the feasibility and challenges of mobile-based assessment tools and their impacts on student learning performance, motivation and teacher/learner attitudes.
This study reviews current research on formative assessment, or assessment for learning (AforL), and attempts to show how smartphone-based formative assessment is being practised in English as a Second Language (ESL) or English as a Foreign Language (EFL) classrooms. There has been a considerable amount of research on how Mobile Assisted Language Learning (MALL) or Mobile Learning (ML) is used in EFL or ESL classrooms, but little is known about mobile-based language assessment, especially mobile-based formative assessment (MBFA). The most comprehensive review of mobile-based assessment was carried out by Nikou and Economides (2018). The authors studied 43 articles on mobile-based assessment published in research journals from January 2009 to February 2018 and found that the majority of mobile-based formative assessment studies were conducted with elementary students and on STEM (science, technology, engineering and mathematics) subjects. Hence, this study aims to review the recent literature on MBFA in developing ESL/EFL speaking skills. This paper thus investigates the following three research questions: 1) how does MBFA support the contemporary view of formative assessment? 2) what are some current practices of MBFA in ESL/EFL speaking classes? and 3) what are the effective design principles for MBFA in ESL/EFL speaking?
Based on current mobile-based assessment practices and research on L2 formative assessment, this paper recommends some strategies that ESL/EFL teachers can use to make MBFA more effective. As mobile-based assessment is a new arena in ESL/EFL assessment, only the literature bearing on mobile-based ESL/EFL formative speaking assessment was selected for study and review. This paper is divided into four sections. The first section gives a brief overview of formative assessment and mobile-based formative assessment. The second examines the relevant literature on MBFA in ESL/EFL speaking classes. The third describes the affordances and limitations of MBFA in ESL/EFL speaking classes. The last section recommends strategies that teachers can use in designing MBFA for their ESL/EFL speaking classes.

RESEARCH METHOD
Using a qualitative research method, this review article draws on already available materials, looks for dominant themes and recurring ideas, and tries to identify new research directions. The goal of qualitative research is to uncover emerging themes, patterns, concepts, insights, and understandings from the existing literature (Patton, 2002). This research thus reinforces support for prevalent theories, adds to the current literature on the research topic, and provides an overview of the current state of the field.
Based on secondary references/works, this study includes relevant articles and books in order to re-analyze, interpret, or review the available data in this area. As this is a small-scale study, not all materials available on the subject are included. Instead, we sought literature that fits the topic and followed a particular set of inclusion and exclusion criteria when selecting research materials for review. Although studies on mobile-based formative assessment in ESL/EFL speaking are scarce at any level of education, this review focused only on studies of university-level adult ESL/EFL speaking classrooms. For the theoretical background, peer-reviewed journal articles and books published during the last few decades were reviewed. This paper used the York University library database, Google Scholar, and online peer-reviewed journals as data sources. To gather data, this study searched for articles with keywords such as 'MALL-based formative assessment', 'the efficacy of MALL-based assessment', 'formative assessment in MALL-based teaching/learning', 'mobile formative assessment tools', 'perceptions of MBFA', 'formative assessment' and 'formative assessment in speaking'. To find additional supporting information, the reference lists of selected articles, both conceptual and empirical, were meticulously examined for answering the research questions. All findings and relevant information were recorded with specific references, and a general synthesis of the arguments was then drawn to look for coherence among concepts and themes.

RESEARCH FINDINGS AND DISCUSSION
Formative assessment, generally contrasted with summative assessment, uses assessment to guide teachers and learners in deciding what the next step in the teaching/learning process should be (Green, 2014, p. 14). It is often used interchangeably with assessment for learning (AforL) (Lee, 2007, p. 200). Growing Success (2010), a document published by the Ministry of Education, Ontario, Canada, which also serves as a guideline for assessment, evaluation, and reporting in Ontario schools, defines AforL as the "process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go, and how best to get there" (p. 31). It occurs before and during instruction to let teachers determine students' readiness and interest, their knowledge and skills, and their preferred learning strategies.
Teachers gather information through observation, conversation, or students' work to determine how a learner is progressing towards the lesson objectives. They typically provide descriptive feedback and coaching for improvement, scaffold instruction, adjust learning approaches, and differentiate instruction. Growing Success (2010) also outlines a framework for AforL that includes processes and strategies to support student learning. The processes are: knowing where the students are going, where they are now, and what needs to be done to help them learn. In AforL, a teacher first uses designed tasks, questions, observations, and group or pair work to elicit information about student learning, and uses these data to adjust instruction. Second, based on the collected data, the teacher provides descriptive feedback to reduce the gap between students' current position and the learning goals.
The feedback motivates students to produce their best work and also teaches them how to self-assess, peer-assess and group-assess by providing models to follow. Finally, self, peer and group assessments help students identify their individual needs and the actions they need to take as short- and long-term individual goals. The goal of AforL is "to move each student from guided practice to independent practice, based on the student's readiness" (Growing Success, 2010, p. 35). The strategies used in AforL consist of identifying learning objectives, using effective prior-learning activities to elicit information about student learning, giving feedback that supports learners' needs, engaging students as learning resources for one another, and developing individual goal setting (pp. 32-34).
Green (2014) argues for two types of formative assessment approaches: interventionist and interactionist. In the former, the teacher gives the learners a pretest and sees how the learners perform. In the latter, the teacher participates in the assessment by giving hints and clues. Chen and Chen (2009) proposed a Personalized E-learning System (PELS) with a teacher formative assessment module for assessing learner performance and providing learner feedback. The PELS automatically collects data for individual learners during the formative assessment process. The following is a simplified description of the PELS steps. First, learners log in to the system through individual devices such as mobile phones, tablets or laptops. Second, the learners perform personalized tasks based on the course materials; all records are stored in the database. Third, the teacher can assess learning performance during the learning process: he/she can check student attendance, responses to teacher questions, and time spent on learning, and make comments for individual learners. Fourth, the formative assessment and feedback are analyzed. Fifth, based on the analysis, inferences are made about the learning performance of individual learners. Sixth, the teacher conveys the evaluation results to the individual learners to assist in learning reflection and the adjustment of learning strategies. Finally, based on the results, the teacher can adjust his/her teaching strategies. PELS thus gathers the following key factors: a) reading rates, b) total time spent, c) learner ability assessed by PELS, d) correct response rate, e) time spent on individual tasks, and f) final grade. The teachers' Personal Digital Assistant (PDA), which is embedded in the PELS, provides a) attendance, b) accumulated score on questions and answers, c) degree of concentration, and d) accumulated score on teacher comments (pp. 259-260).
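The record-keeping at the heart of these PELS steps can be sketched in a few lines of code. The following is a minimal illustrative model only, not Chen and Chen's (2009) actual implementation; all class, field and method names are assumptions made for this sketch.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a PELS-style per-learner log. It covers two of the
# key factors named above: correct response rate and total time spent.

@dataclass
class LearnerRecord:
    learner_id: str
    attended: bool = False
    # Each entry: (task name, whether the response was correct, seconds spent)
    responses: list = field(default_factory=list)
    teacher_comments: list = field(default_factory=list)

    def log_response(self, task: str, correct: bool, seconds: float) -> None:
        """Store one personalized-task attempt (step 2: records kept in the database)."""
        self.responses.append((task, correct, seconds))

    def correct_response_rate(self) -> float:
        """Key factor (d): share of correct responses across logged tasks."""
        if not self.responses:
            return 0.0
        return sum(1 for _, ok, _ in self.responses if ok) / len(self.responses)

    def total_time_spent(self) -> float:
        """Key factor (b): total seconds spent on tasks."""
        return sum(seconds for _, _, seconds in self.responses)

# Step 3: the teacher reviews a learner's progress during the course.
rec = LearnerRecord("s01", attended=True)
rec.log_response("describe_photo", True, 95.0)
rec.log_response("retell_story", False, 140.0)
print(rec.correct_response_rate())  # 0.5
print(rec.total_time_spent())       # 235.0
```

A record of this shape would let the teacher make the step-five inferences (performance per learner) and the step-six feedback decisions from the same stored data.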
Although the study was designed for another discipline, it is evident that the PELS steps follow the same processes and strategies prescribed in Growing Success (2010) and can apply to any ESL/EFL classroom. To determine the effectiveness of the PELS tool, Chen and Chen (2009) conducted a study with two third-grade classes in Taipei. The pretest and posttest results showed that the learners' abilities in the subject area increased with the proposed assessment tools. The researchers were also satisfied with the tool's interface design and functions, though they noted that the accuracy of the gathered learning-portfolio data needed to be improved.
Previous research demonstrates that formative assessment can be used successfully in web-based learning. Orsmond, Merry, and Callaghan (2004) show how web-based formative assessment is useful for peer and self-assessment and can contribute to overall learning. Buchanan (2001) shows how web-based immediate feedback helps learners analyze their own needs. Wang (2010) concludes that the practice, immediate feedback, and revision options of web-based assessment tools allow learners to learn better.

Effectiveness and Limitations of Mobile-based Formative Speaking Assessment
This section deals primarily with the tools and strategies used in mobile-based formative assessment, highlighting both the effectiveness and the limitations of MBFA in ESL/EFL speaking classes. Tarighat and Khodabakhsh (2016) used a mobile-based assessment in their EFL speaking classroom. To assess speaking in a general English language course, they asked the students to respond to specific tasks through WhatsApp. Each student was asked to record a two-minute response to a pair of photos on WhatsApp and share it with the designated group. The students were not given a score; instead, descriptive feedback in the form of comments was given on grammar, vocabulary, pronunciation and overall performance. Scores were assigned once all students had submitted their responses. The majority of the participants stated that they could hear their own voice, take part in an English learning activity outside the classroom, comment on other learners' speech, spend time speaking solo for one or two full minutes, and find opportunities for extra practice at home without stress (p. 412). However, the researchers mentioned that some students cheated on the task: instead of speaking spontaneously on their phones, they read a prewritten response. Some students also complained that the tasks required them to record repeatedly and were time-consuming. A few students thought that the mobile-based task worked well as homework but not as assessment.
Samaie, Nejad and Qaracholloo (2016) explored the efficacy of WhatsApp as a formative assessment tool with 30 Iranian EFL students. Each participant had to record a five-minute response on a given topic, share the recorded audio on WhatsApp with the group members, and then self-assess and peer-assess based on the rubric provided. The rubric included linguistic features such as grammar, accent, pronunciation, vocabulary range and fluency, content, topic relevance and cohesion (p. 116). After the assessment, participants were interviewed about the tasks. The findings showed that mobile-based assessment had negative effects on the participants' attitudes. The participants suggested that merely assigning grades or scores did not help them learn, and demanded a proficient assessor who could provide feedback on their work. In other words, students argued that "a single grade assigned by a peer of the same proficiency level could not lead to pedagogical benefits" (p. 120). The students' negative responses echo the theoretical underpinning of formative assessment: that it should be accompanied by descriptive feedback so that learners can understand their strengths and needs. The study, however, did not follow this basic formative assessment strategy. It also demonstrates that learners could not learn much because there was no interaction and negotiation of meaning among learners (the interactionist approach). Students also thought that the tasks were not authentic, as learners do not usually respond to one another through audio recordings. The authors concluded that any tool used for formative assessment of language skills should be based on communicative pedagogical features.
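The point that a bare grade is not formative can be made concrete in code: a rubric entry that refuses any criterion score unless it is paired with descriptive feedback. The criterion names below follow Samaie et al.'s rubric, but the data structure, the 1-5 scale and the validation logic are this paper's illustrative assumptions, not the study's actual instrument.

```python
# Hypothetical rubric entry for mobile-based peer assessment of speaking.
# Every criterion requires both a score and a written comment, so that a
# "single grade" can never be submitted on its own.

CRITERIA = ("grammar", "accent", "pronunciation", "vocabulary_and_fluency",
            "content", "topic_relevance", "cohesion")

def assess(scores: dict, comments: dict) -> dict:
    """Validate a rubric entry: each criterion needs a 1-5 score AND a comment."""
    for criterion in CRITERIA:
        if criterion not in scores or not (1 <= scores[criterion] <= 5):
            raise ValueError(f"missing or out-of-range score for {criterion}")
        if not comments.get(criterion, "").strip():
            raise ValueError(f"descriptive feedback required for {criterion}")
    overall = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    return {"overall": round(overall, 2), "scores": scores, "comments": comments}

# Usage: a peer rates a recording 4 on every criterion except grammar (3),
# with a short comment on each.
scores = {c: 4 for c in CRITERIA}
scores["grammar"] = 3
comments = {c: f"Specific feedback on {c}." for c in CRITERIA}
result = assess(scores, comments)
print(result["overall"])  # 3.86
```

Submitting scores without comments raises an error, which is precisely the constraint the participants in the study said was missing from grade-only peer assessment.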
Gromik (2012) studied the effect of using the video recording features of mobile phones with EFL students in Japan. For 14 weeks, all participants had to record themselves speaking English on their phones for about 30 seconds and email their performance to the teacher for feedback on their grammar and pronunciation. The data collected over the 14 weeks showed a "46% improvement in word production and a 37% increase in words uttered per second" (p. 226). The participants explained that viewing others' videos and receiving feedback on their own helped them improve, and most mentioned that they felt motivated and enjoyed working on the project (p. 227). Gromik based the project on the constructivist approach, in which learners negotiate and co-create meaning; activities such as viewing others' videos and receiving feedback support the basic process of the interactionist approach to formative assessment. Hu and Gallagher (2013) reported on a semester-long, technology-aided formative assessment project in an undergraduate biology course. Students took 16 self-assessment tests over 11 weeks. The assessment included vocabulary items and quizzes with unlimited attempts throughout the semester, accessible through phones or computers. Students were also encouraged to produce videos of the surrounding test environment so that the designers could learn in which situations students were using the apps or devices. The collected data show that students took the tests in their free time, in their usual study locations or while travelling (p. 555), and that responses to the quizzes rose over the semester. Moreover, the use of the self-assessment tool showed "a significant correlation between the frequency with which a student used the quizzes and his/her performance in the final QMP4 test" (p. 556).
The literature above suggests that teachers and researchers use a variety of mobile-based formative speaking assessment tools and strategies, and that some work better than others. Most of the tools, however, followed the basic formative assessment processes and strategies. For example, Tarighat and Khodabakhsh (2016) used a designed task (two-minute speaking on WhatsApp), descriptive feedback (teacher comments on aspects of speaking), self-assessment (seeing one's individual score), and peer assessment (commenting on other students' speech), thereby eliciting information about students' needs and taking action accordingly. The assessment tool also motivated students to move from guided practice (speaking solo on WhatsApp) to independent practice (extra practice at home), to give and receive feedback from teachers and peers, and consequently to engage as learning resources for one another and develop individual learning goals (Growing Success, 2010, pp. 32-34). Although WhatsApp appears to be an effective tool for most students, some found it time-consuming and unsuitable for assessment. Other researchers used different tools but followed the basic AforL framework to varying degrees and found that, despite some limitations, mobile-based formative speaking assessment is effective to a certain degree: it encourages learners to participate asynchronously, motivates them, and contributes to overall learning.

RECOMMENDATIONS
It is evident from the above discussion that teachers and learners have mixed experiences with mobile-based formative speaking assessment. Some research studies report applying mobile-based formative assessment quite effectively and successfully, while others do not. Therefore, this section examines why some formative assessment designs are more successful than others. Hu and Gallagher (2013) claim to have successfully implemented their MBFA tool and mention several design principles, described in simplified form below. First, learners should have open access to the assessment tool, meaning that they can use it as many times as they want over a period of time. The authors claim that open access requires learners to take "responsibility for managing their own time and learning" (p. 554). However, they found that some learners chose not to access the tool at all, and therefore suggest that some part of the formative assessment be included in the summative assessment, which is usually graded at the end of the term. Second, mobile-based assessment quizzes should be short and should limit the use of long descriptions or pictures; since students use their phones while travelling, they lose interest if a test includes items that are long or difficult to read. Third, the tasks should be designed to provide opportunities for descriptive feedback and revision. This encourages self-learning, self-testing and continuous learning (as opposed to the typical end-of-semester cramming), thereby providing greater opportunities for developing conceptual integration across material from the entire course (p. 554). Samaie, Nejad and Qaracholloo (2018) suggest that when using apps like WhatsApp or other social media platforms in academic settings, "pre-established all-inclusive educational plans and policies" are required to implement and control the process of self- and peer-assessment.
Otherwise, they warn, "there would not be any guarantee for a successful assessment" (p. 123). Hwang and Chang (2011) claim that mobile-assisted learning strategies and tools need to be situated in a real-world environment so that learners can engage with authentic tasks.
In terms of a design framework for ESL/EFL formative speaking assessment, Chen and Chen's (2009) Personalized E-learning System (PELS) can be used to design any mobile-based speaking task. They recommended a flowchart for web-based formative assessment on page 260 of their paper, and their model follows the formative assessment processes, strategies and steps prescribed on page 28 of Growing Success (2010). It is therefore recommended that ESL/EFL teachers use this tool as a design framework in their ESL/EFL speaking classes. In addition, this paper puts forward five principles adapted from Stockwell and Hubbard (2013) for MBFA tools. First, teachers should consider the affordances and limitations of the activities, tasks and apps before designing the assessment; for example, whether a tool like WhatsApp is suitable for a speaking task or a writing task, and how students might react to the apps if the tools do not support their learning styles or strategies. Second, teachers should use short, scaffolded tasks to create a stress-free learning environment; for example, a pronunciation task may focus only on identifying contrastive features of sounds, while the next task focuses on producing those sounds. Third, they should provide accommodation for all levels of learners; for example, the teacher can give multiple options for responding (fill-in-the-blanks or multiple-choice questions) or allow students to participate synchronously or asynchronously. Fourth, teachers should learn the features of any specific app and then train students in how to use it inside or outside the classroom; they should also co-construct rules or policies for the use of the apps in that specific assessment. Lastly, they should maintain equity; that is, a teacher needs to make sure that all students have access to the internet and to the types of devices required to carry out the tasks or activities.

CONCLUSION
This review article endeavours to study the recent relevant literature on mobile-based formative assessment (MBFA) in ESL/EFL speaking classes. It demonstrates the current practices of MBFA in ESL/EFL speaking classes and reveals that these practices are effective to a certain extent; depending on the design principles and strategies used by teachers or app developers, some tools and strategies appear more effective than others. The paper also highlights some current design principles for MBFA in the ESL/EFL speaking classroom. The review thus concludes that, since MBFA is a new area in ESL/EFL research, further research is required to investigate the fundamental design principles for such tools in the language classroom. Future research may focus on MBFA in learning and teaching reading, writing and listening, in order to see how the design principles apply and to explore the strengths and limitations of such tools and strategies. Finally, we recommend that ESL/EFL teachers experiment with these tools in their classrooms and use those that suit their lesson purposes.