Open-source method for spatial localization of probe placements in developmental and clinical populations
Sagi Jaffe-Dax and Lauren Emberson, Princeton University
Duration: 90 min
Synopsis: Measuring the exact placement of probes (e.g., electrodes, optodes) on a participant’s head is a notoriously difficult step in acquiring fNIRS data, and it is particularly difficult for clinical and developmental populations. Existing methods require the participant to remain still for a lengthy period, are laborious, and demand extensive training. We will teach you an innovative video-based method for estimating the probes’ positions relative to the participant’s head that is fast, motion-resilient, automatic, and freely available. This method substantially facilitates the use of spatial co-registration in developmental and clinical populations, where lengthy, motion-sensitive measurement methods routinely fail. In this course, we will demonstrate the video-based method’s reliability and validity relative to existing state-of-the-art methods. We will also demonstrate our automatic method estimating the positions of probes on an infant head without lengthy offline procedures, a task previously considered unachievable. Participants will have an opportunity to install and use the new method on their own computers and will receive a detailed explanation of how to use it back in their labs.
Rationale: Knowing where the fNIRS probes were located with respect to the underlying cortical regions is a prerequisite for drawing spatial conclusions and for exploiting the full benefit of fNIRS. However, existing co-registration methods are not suitable for many developmental and clinical populations and are difficult to implement in the field. They either require long acquisition times, are disrupted by participant movement, and are highly sensitive to the recording environment (3D digitizers), or require lengthy manual annotation and are prone to experimenter bias (photogrammetry methods). To address these problems, we recently developed an automatic video-based co-registration method that is ideal for developmental, clinical, and field studies.
Course structure: In this course, we teach participants how to use our novel video-based method for co-registration. This method, created for early developmental populations, is easy for novice experimenters to implement and is robust to participants’ head movements (include citation of our paper). Our method requires only ~20 seconds of video, captured with widely available photographic equipment while the probes are mounted on the scalp. During acquisition, participants can move their heads freely without jeopardizing the accuracy of the measurement. We will present both the validity and the reliability of our video-based method relative to the traditional 3D digitizer in a group of adult participants. Importantly, we will also demonstrate the feasibility of this approach with early developmental populations. The goal of this mini-course is to present our new automatic video-based co-registration method to fNIRS researchers and to give participants experience using it. After reviewing current co-registration methods and their limitations, we will show how our new method overcomes them, including reviewing our tests of the method’s reliability across a number of conditions. We will then demonstrate the method and give participants hands-on experience on their own computers.
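To give a flavor of the geometry behind any such video-based approach, the sketch below shows how a probe observed in two video frames with known camera poses can be located in 3D by linear triangulation. This is an illustrative toy example only, not the course software: the function name, camera matrices, and point coordinates are all assumptions made for the demonstration.

```python
# Illustrative sketch (NOT the authors' implementation): standard linear
# (DLT) triangulation of one 3D point from two views, the basic geometric
# operation underlying video-based 3D position estimation.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover one 3D point from two pixel observations.

    P1, P2 : 3x4 camera projection matrices (assumed known, e.g. from a
             structure-from-motion pass over the head video)
    x1, x2 : (u, v) pixel coordinates of the same probe in each view
    """
    # Build the homogeneous linear system A @ X = 0 from both views.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The null vector of A (smallest singular vector) is the point in
    # homogeneous coordinates.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

# Toy setup: two cameras, the second shifted 0.5 units along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.1, 0.2, 1.0])

# Project the true point into each camera to get pixel observations.
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))  # recovers approximately (0.1, 0.2, 1.0)
```

In practice a video supplies many views rather than two, and the redundancy across frames is what makes this family of methods resilient to head motion: any subset of frames in which the probe is visible constrains its position.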
Learning objectives: Participants will learn how to use our novel video-based method for co-registration, including how to install and run it on their own computers and how to apply it back in their labs. They will also see a live demonstration of the method estimating probe positions on an infant head without lengthy offline procedures.