Melo, Monteza, Colson and Zhang (2022: 15) found that “students have a clear preference for supporting experiential learning-based assessments, such as lab activities and simulations or analysis of case studies, and a strong dislike for assessments that involve supervised and time-constrained assessments (e.g., proctored exams)”. They further indicate that students also value professional presentations and final projects, and that “most students would prefer assessments that are driving (develop different skills such as creative thinking) and realistic (develop skills transferable to the real world)” (p. 1). As they note, “How students perceive assessment methods is critical because it can affect their learning experience and academic achievements”.
Melo, et al. (2022: 12) provide, in Fig 3 below, an intuitive interpretation of their research results. The “results reveal that participating in lab practices and simulations (presented to participants as practical activities that support technical skills rehearsal) and analyzing and discussing case studies (subjects were told that knowledge of the subject, good writing, and coherence are evaluated) were the most and second most desirable options that contribute to student learning”. Professional presentations, involving the use of audio-visual materials to assess knowledge of a topic, are shown as the third most important form of assessment. A final project, although it demands more time and effort from students, is considered the fourth most important form of assessment. The fifth most important form is continuous quizzes consisting of open-ended questions.
Melo, et al. (2022: 1) observe that “previous literature has not analyzed, from the student perspective, what methods are preferred for evaluating performance in experiential learning”. They used the best-worst scaling approach in two online surveys involving 218 undergraduate students who were enrolled in experiential learning-based programs during the COVID-19 pandemic. They presented students with a description of several assessment formats and are confident that students understood the options. Table 1 below lists the assessment formats and Table 2 the attributes that were identified for the research.
In contrast to rating scales, such as Likert scales, or approve/disapprove questions, Melo, et al. (2022) used the best-worst scaling (BWS) approach to adequately research preferences for various assessments. Unlike conventional approaches, where respondents are asked to rank items from “most important” to “least important”, BWS requires respondents to make trade-offs among alternatives by repeatedly selecting the best and the worst option within a set. BWS further helps avoid bias effects, and making best-worst (BW) choices is deemed less cognitively demanding for participants than completing discrete choice experiments.
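To make the BWS idea concrete, the short Python sketch below computes a simple best-minus-worst count score from hypothetical choice-set responses. This is only an illustration under assumed data: the item names and choice sets are invented, and this counting summary is not presented here as the authors' actual estimation procedure, which may involve more elaborate modeling.

```python
# Illustrative BWS "count" scoring sketch (hypothetical data, not the
# authors' analysis). Each response records the items shown in one
# choice set, plus which item was picked as best and which as worst.
from collections import Counter

# Hypothetical responses: (items shown, chosen best, chosen worst)
choice_sets = [
    (["lab/simulation", "case study", "proctored exam"], "lab/simulation", "proctored exam"),
    (["case study", "final project", "proctored exam"], "case study", "proctored exam"),
    (["presentation", "lab/simulation", "quiz"], "lab/simulation", "quiz"),
]

best_counts = Counter()
worst_counts = Counter()
appearances = Counter()

for shown, best, worst in choice_sets:
    appearances.update(shown)   # how often each item was offered
    best_counts[best] += 1      # times chosen as best
    worst_counts[worst] += 1    # times chosen as worst

# Standardized BW score: (times best - times worst) / times shown.
# Higher scores indicate stronger relative preference.
for item in sorted(appearances):
    score = (best_counts[item] - worst_counts[item]) / appearances[item]
    print(f"{item:15s} BW score = {score:+.2f}")
```

The point of the sketch is simply that every response forces a trade-off within the set, so preference strength emerges from how often an item is picked as best relative to worst, rather than from where respondents place it on a rating scale.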
Melo, G., Monteza, D., Colson, G., & Zhang, Y. Y. (2022). How to assess? Student preferences for methods to assess experiential learning: A best-worst scaling approach. PLoS ONE, 17(10). https://doi.org/10.1371/journal.pone.0276745