Principal Supervisor: Prof Manolis Mavrikis (IOE, UCL’s Faculty of Education and Society)
Co-Supervisor: Prof Tim J. Smith (Birkbeck, University of London)
Project Description
Executive function (EF) and self-regulation skills are critical predictors of school success (Blair & Razza, 2007), wellbeing, and general life outcomes (Moffitt et al., 2011). While there are clear indications of the benefits of training EF in early developmental stages (Blair, 2016) and of helping children think explicitly about their own learning (Hattie, 2012), schools and parents are not equipped to support learners in developing these fundamental skills. This is partly because little is known about the exact mechanisms involved in EF from a cognitive psychology perspective, about how to support self-reflection in young children from a pedagogical perspective (Larkin, 2010), and, critically, about how EF operates during real-world problem solving.
Neuromonitoring provides a methodological platform with the potential to offer both research insights into our collective understanding of EF and real-time support, through feedback, for reflection and self-regulation.
This project will design and empirically evaluate an intervention using real-time neuromonitoring and feedback that supports the training of executive function in a real-world scenario. This is possible thanks to the unique facilities of Birkbeck’s Wellcome-funded ToddlerLab Neuroimaging CAVE (Cave Automatic Virtual Environment), a world-first facility allowing children to interact with augmented virtual/real environments (e.g. AR/VR) whilst wearable technology tracks their movements (via motion capture), attention (via an eye tracker), and brain activity (via functional Near Infra-Red Spectroscopy; fNIRS).
By integrating techniques from the field of Artificial Intelligence in Education (AIED), such as User Modelling (UM), a computational architecture could be developed to enable the provision of real-time feedback (e.g. pre-emptive attention prompts) and the generation of reflection opportunities (e.g. recordings of critical moments) that have been shown to improve children’s EF. Previous work has demonstrated the potential of such approaches to support learning in real time, for example when students interact with digital problem-solving environments such as games and simulations (Grawemeyer, Mavrikis et al., 2016). Additionally, learning analytics (LA) and open learner modelling (OLM), while usually targeted at teachers (Mavrikis et al., 2019), have also been used as tools for reflection and metacognition to support self-regulation, even for primary school children (Bull & McKay, 2004; Jones et al., 2018). The challenge is to leverage the insights of learner modelling from screen-based interactions and apply them to realistic, everyday 3D scenarios.
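To give a concrete flavour of what such a computational architecture might involve, the minimal Python sketch below simulates a real-time feedback loop in which a simple learner model fuses (mock) eye-tracking and fNIRS signals and triggers an attention prompt when the estimated attention falls below a threshold. All names, weights and thresholds here are hypothetical illustrations under assumed, simplified sensor streams, not the project’s actual design.

```python
# Hypothetical sketch of a real-time feedback loop: the sensor readings,
# fusion weights and prompt logic are illustrative assumptions only.
import random
import time
from dataclasses import dataclass, field


@dataclass
class LearnerModel:
    """Rolling estimate of the child's attention/engagement state."""
    attention: float = 1.0              # 0 = fully off-task, 1 = fully on-task
    history: list = field(default_factory=list)

    def update(self, gaze_on_task: bool, fnirs_activation: float) -> None:
        # Blend the two signals into a single smoothed attention estimate.
        sample = 0.6 * float(gaze_on_task) + 0.4 * fnirs_activation
        self.attention = 0.8 * self.attention + 0.2 * sample
        self.history.append(sample)


def read_sensors() -> tuple[bool, float]:
    """Stand-in for the eye-tracker and fNIRS streams (random values here)."""
    gaze_on_task = random.random() > 0.3    # is gaze currently on the task object?
    fnirs_activation = random.random()      # normalised activation proxy, 0..1
    return gaze_on_task, fnirs_activation


def feedback_loop(model: LearnerModel, steps: int = 20, threshold: float = 0.5) -> None:
    """Poll the (simulated) sensors and issue a pre-emptive prompt when attention drops."""
    for step in range(steps):
        gaze, fnirs = read_sensors()
        model.update(gaze, fnirs)
        if model.attention < threshold:
            # In a real system this could trigger a visual/audio prompt in the CAVE
            # and flag the moment for later reflection with the child.
            print(f"step {step}: attention {model.attention:.2f} -> prompt + log reflection clip")
        time.sleep(0.05)  # placeholder for the sensor sampling interval


if __name__ == "__main__":
    feedback_loop(LearnerModel())
```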
Requirements
We are looking for a highly motivated candidate with strong quantitative, analytical and/or programming skills and a desire to make an impact at the intersection of education and developmental psychology.
The candidate must have evidence of outstanding undergraduate academic performance in cognitive science, psychology, computer science, artificial intelligence, or 3D modelling/games design, and should ideally have (or be predicted to obtain) a strong Master’s degree in Cognitive/Developmental Neuroscience, Artificial Intelligence in Education, computational data science, or a cognate field (candidates will be asked to demonstrate how their background provides solid foundations that allow them to focus on the core aspects of this studentship). The candidate must also demonstrate solid foundations in academic writing and presenting, experience of independently organising aspects of their research (e.g. through a previous dissertation, if not publications), and experience of working with young children.
Subject areas/keywords
Artificial Intelligence in Education, Developmental Science, Cognitive Psychology, Executive function, Neuroscience
Key References
Jones, A., Bull, S. & Castellano, G. “I Know That Now, I’m Going to Learn This Next” Promoting Self-regulated Learning with a Robotic Tutor. Int J of Soc Robotics 10, 439–454 (2018).