Date | May 2022
Marks available | 22
Reference code | 22M.Paper 1.HL.TZ1.5
Level | HL only
Paper | Paper 1
Time zone | TZ1
Command term | Evaluate
Question number | 5
Adapted from | N/A
Question
Evaluate one method used to study the interaction between technology and cognitive processes.
Markscheme
Refer to the paper 1 section B assessment criteria when awarding marks.
The command term “evaluate” requires candidates to make an appraisal by weighing up the strengths and limitations of one method used to study the interaction between technology and cognitive processes.
Although the discussion of both strengths and limitations is required, it does not have to be evenly balanced to gain high marks.
Relevant methods may include, but are not limited to:
- experimental method
- correlational studies
- surveys.
Relevant studies may include, but are not limited to:
- Mueller and Oppenheimer’s (2014) experiment on the use of laptops versus paper in note-taking by college students
- Chou and Edge’s (2012) use of a survey in the study of the availability heuristic in thinking
- Rosen et al.’s (2013) correlational study on the influence of induced multi-tasking on cognitive processes
- Sparrow et al.’s (2011) experiments on transactive memory and digital amnesia.
Evaluation of the method may include, but is not limited to:
- the appropriateness of the methods for the aim
- issues of validity and reliability
- sample choice and size
- ease and cost of the procedure
- the generalizability of findings.
If a candidate evaluates more than one method, credit should be given only to the first evaluation. However, candidates may address other methods and be awarded marks for these as long as they are clearly used to evaluate the one main method addressed in the response.
If the candidate addresses only strengths or only limitations, the response should be awarded up to a maximum of [3] for criterion D: critical thinking. All remaining criteria should be awarded marks according to the best fit approach.
Examiners report
This was the essay candidates struggled with most, and it was clear that many who attempted it were ill-prepared. Stronger responses clearly described and explained in detail the key features of one research method used to study the interaction between technology and cognitive processes. Most of these candidates selected the experimental method and provided relevant studies that effectively demonstrated its use. Such responses also addressed the strengths and limitations of the method itself, so the essay remained clearly focused on the demands of the question as set rather than offering only a perfunctory evaluation of the supporting studies.
A number of candidates showed a complete misunderstanding of the demands of the question: it was clear that they either had not realized this question was from the higher level extension or were wholly unprepared. Several candidates focused on the use of brain scanning technology such as MRI instead of the required research method, so the response was completely off topic. In addition, these responses included research studies of no direct relevance to the question or the cognitive approach, such as Maguire (2000) and Draganski (2004). In the weaker responses that did focus on digital technology, evaluation of the selected research method was often generic, repetitive and lacking in explanation.