Current Search: Barenholtz, Elan
- Title
- Convexities move because they contain matter.
- Creator
- Barenholtz, Elan
- Date Issued
- 2010-09-22
- PURL
- http://purl.flvc.org/fau/fd/FAUIR000120
- Format
- Citation
- Title
- Increased experience with an unfamiliar language decreases fixations to the mouth during encoding.
- Creator
- Mavica, Lauren Wood, Barenholtz, Elan, Graduate College
- Abstract/Description
- Previous research has shown that infants viewing speaking faces shift their visual fixation from the speaker’s eyes to the speaker’s mouth between 4 and 8 months of age (Lewkowicz & Tift, 2011). It is theorized that this shift occurs to facilitate language learning, based on audiovisual redundancy in speech. We previously found that adults gazed significantly longer at a speaker’s mouth while seeing and hearing a non-native language compared with their native language. This suggested there may be mechanisms by which gaze fixations to speaking mouths are increased in response to uncertainty in speech. If so, increasing familiarity with the speech signal may reduce this tendency to fixate the mouth. To test this, the current study investigated the effect of familiarization with a non-native language on the gaze patterns of adults. We presented English speakers with videos of sentences spoken in Icelandic. To ensure encoding of the speech, participants performed a task in which they were presented with videos of two different sentences, followed by an audio-only recording of one of the sentences, and had to identify whether the first or second video corresponded to the presented audio. In order to familiarize participants with the utterances, the same set of sentences was repeated. These ‘repetition’ blocks were followed by additional ‘novel’ blocks, using sentences not previously presented. We found that the proportion of fixations directed at the mouth decreased across the repetition blocks but was restored to its initial rate in the novel blocks. These results suggest that familiarity with utterances, even in a non-native language, serves to reduce auditory uncertainty, leading to reduced mouth fixations.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00005838
- Format
- Document (PDF)
- Title
- Figure-ground assignment to a translating contour: A preference for advancing vs. receding motion.
- Creator
- Barenholtz, Elan, Tarr, M. J.
- Date Issued
- 2009-05-01
- PURL
- http://purl.flvc.org/fau/fd/FAUIR000119
- Format
- Citation
- Title
- Intrinsic and contextual features in object recognition.
- Creator
- Schlangen, Derrick, Barenholtz, Elan
- Date Issued
- 2015-01-28
- PURL
- http://purl.flvc.org/fau/fd/FAUIR000188
- Format
- Citation
- Title
- Task Decoding using Recurrent Quantification Analysis of Eye Movements.
- Creator
- LaCombe, Daniel C. Jr., Barenholtz, Elan, Graduate College
- Abstract/Description
- In recent years, there has been a surge of interest in the possibility of using machine-learning techniques to decode generating properties of eye-movement data. Here we explore a relatively new approach to eye-movement quantification, Recurrence Quantification Analysis (RQA), which allows analysis of spatio-temporal fixation patterns, and assess its diagnostic power with respect to task decoding. Fifty participants completed both aesthetic-judgment and visual-search tasks over natural images of indoor scenes. Six different sets of features were extracted from the eye-movement data, including aggregate, fixation-map, and RQA measures. These feature vectors were then used to train six separate support vector machines using an n-fold cross-validation procedure in order to classify a scanpath as being generated under either an aesthetic-judgment or visual-search task. Analyses indicated that all classifiers decoded task significantly better than chance. Pairwise comparisons revealed that all RQA feature sets afforded significantly greater decoding accuracy than the aggregate features. The superior performance of the RQA features may reflect their relative invariance to changes in observer or stimulus: although RQA features significantly decoded observer and stimulus identity, analyses indicated that the spatial distribution of fixations was most informative about stimulus identity, whereas aggregate measures were most informative about observer identity. Therefore, changes in RQA values could be more confidently attributed to changes in task, rather than observer or stimulus, relative to the other feature sets. The findings of this research have significant implications for the application of RQA to studying eye-movement dynamics in top-down attention. (An illustrative sketch of this feature-based classification pipeline follows this record.)
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00005892
- Format
- Document (PDF)
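The abstract above describes extracting per-scanpath feature vectors (aggregate, fixation-map, and RQA measures) and training support vector machines with n-fold cross-validation to decode task. The sketch below is a minimal, hypothetical illustration of that kind of pipeline using scikit-learn; the recurrence-rate computation, feature choices, data shapes, and function names are assumptions, not the authors' code or feature set.

```python
# Hypothetical sketch: decode task (aesthetic judgment vs. visual search) from
# per-scanpath eye-movement features with an SVM and n-fold cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def recurrence_rate(fixations: np.ndarray, radius: float = 64.0) -> float:
    """Fraction of fixation pairs lying within `radius` pixels of each other
    (a basic RQA-style measure over the spatial fixation sequence)."""
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    n = len(fixations)
    mask = ~np.eye(n, dtype=bool)            # exclude self-recurrences
    return float((d[mask] <= radius).mean())

def scanpath_features(fixations: np.ndarray, durations: np.ndarray) -> np.ndarray:
    """One feature vector per scanpath: simple aggregate measures plus recurrence rate."""
    return np.array([
        len(fixations),                                              # number of fixations
        durations.mean(),                                            # mean fixation duration
        np.linalg.norm(np.diff(fixations, axis=0), axis=1).mean(),   # mean saccade amplitude
        recurrence_rate(fixations),                                  # RQA: recurrence rate
    ])

def decode_task(scanpaths, labels, n_folds: int = 10) -> float:
    """scanpaths: list of (fixations [n_fix, 2], durations [n_fix]); labels: 0/1 task codes."""
    X = np.stack([scanpath_features(f, d) for f, d in scanpaths])
    y = np.asarray(labels)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, y, cv=n_folds).mean()   # mean decoding accuracy
```

Chance performance in this two-task setup is 50%, so a mean cross-validated accuracy reliably above that is the kind of "better than chance" decoding the abstract reports.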
- Title
- Categorical congruence facilitates multisensory associative learning.
- Creator
- Barenholtz, Elan, Lewkowicz, David J., Davidson, Meredith, Mavica, Lauren
- Date Issued
- 2014-10-27
- PURL
- http://purl.flvc.org/fau/flvc_fau_islandoraimporter_10.3758_s13423-014-0612-7_1631806039
- Format
- Citation
- Title
- CONNECTING THE NOSE AND THE BRAIN: DEEP LEARNING FOR CHEMICAL GAS SENSING.
- Creator
- Stark, Emily Nicole, Barenholtz, Elan, Florida Atlantic University, Department of Psychology, Charles E. Schmidt College of Science
- Abstract/Description
- The success of deep learning in applications including computer vision, natural language processing, and even the game of Go can only be afforded by powerful computational resources and vast data sets. Data sets coming from medical applications are often much smaller and harder to acquire. Here a novel data approach is explained and used to demonstrate how to use deep learning as a step in data discovery, classification, and, ultimately, support for further investigation. The data sets used to illustrate these successes come from common ion-separation techniques that allow gas samples to be quantitatively analyzed. The success of this data approach allows for the deployment of deep learning on smaller data sets.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013416
- Subject Headings
- Deep Learning, Data sets, Gases--Analysis
- Format
- Document (PDF)
- Title
- Contextual Modulation of Competitive Object Candidates in Early Object Recognition.
- Creator
- Islam, Mohammed F., Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- Object recognition is imperfect; incomplete processing or deprived information often yields misperceptions (i.e., misidentifications) of objects. While quickly rectified and typically benign, such errors can produce dangerous consequences (e.g., police shootings). Through a series of experiments, this study examined the competitive process among multiple object interpretations (candidates) during the earlier stages of the object recognition process using a lexical decision task paradigm. Participants encountered low-pass filtered objects that were previously demonstrated to evoke multiple responses: a highly frequent interpretation (“primary candidates”) and a less frequent interpretation (“secondary candidates”). When objects were presented without context, no facilitative effects were observed for primary candidates. However, secondary candidates showed evidence of being actively suppressed.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004836
- Subject Headings
- Pattern recognition systems., Information visualization., Artificial intelligence., Spatial analysis (Statistics), Latent structure analysis.
- Format
- Document (PDF)
- Title
- Eye Fixations of the Face Are Modulated by Perception of a Bidirectional Social Interaction.
- Creator
- Kleiman, Michael J., Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- Eye fixations of the face are normally directed towards either the eyes or the mouth; however, the proportion of gaze to each of these regions depends on context. Previous studies of gaze behavior demonstrate a tendency to stare into a target’s eyes; however, no studies have investigated the differences between when participants believe they are engaging in a live interaction and when they knowingly watch a pre-recorded video, a distinction that may contribute to studies of memory encoding. This study examined differences in fixation behavior between when participants falsely believed they were engaging in a real-time interaction over the internet (“Real-time stimulus”) and when they knew they were watching a pre-recorded video (“Pre-recorded stimulus”). Results indicated that participants fixated significantly longer on the eyes for the pre-recorded stimulus than for the real-time stimulus, suggesting that previous studies that utilize pre-recorded videos may lack ecological validity.
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004701
- Subject Headings
- Eye -- Movements, Eye tracking, Gaze -- Psychological aspects, Nonverbal communication, Optical pattern recognition, Perceptual motor processes, Visual perception
- Format
- Document (PDF)
- Title
- Comprehension of an audio versus an audiovisual lecture at 50% time-compression.
- Creator
- Perez, Nicole, Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- Since students can adjust the speed of online videos through time-compression, which is available in common software (Pastore & Ritzhaupt, 2015), it is important to learn at which point compression impacts comprehension. The focus of this study is whether the speaker’s face benefits comprehension during a 50% time-compressed lecture. Participants listened to either a normal lecture or a 50% compressed lecture. Each participant saw an audio and an audiovisual lecture and was eye tracked during the audiovisual lecture. A comprehension test revealed that participants in the compressed-lecture group performed better with the face present. Eye fixations revealed that participants in the compressed-lecture group looked less at the eyes and more at the nose compared with those who viewed the normal lecture. This study demonstrates that 50% compression affects eye fixations and that the face benefits the listener, but this much compression still lessens comprehension.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004847
- Subject Headings
- Learning--Case studies., Perceptual-motor learning., Nonverbal communication., Internet videos--Education.
- Format
- Document (PDF)
- Title
- DEEP LEARNING OF POSTURAL AND OCULAR DYNAMICS TO PREDICT ENGAGEMENT AND LEARNING OF AUDIOVISUAL MATERIALS.
- Creator
- Perez, Nicole, Barenholtz, Elan, Florida Atlantic University, Department of Psychology, Charles E. Schmidt College of Science
- Abstract/Description
- Engagement with educational instruction and related materials is an important part of learning and contributes to test performance. There are various measures of engagement, including self-reports, observations, pupil diameter, and posture. Given the challenges associated with obtaining accurate engagement levels, such as difficulties in measuring variations in engagement, the present study used a novel approach to predict engagement from posture by using deep learning. Deep learning was used to analyze a labeled outline of the participants and extract key points that are expected to predict engagement. In the first experiment, two short lectures were presented and participants were tested on a lecture to motivate engagement. The next experiment used videos that varied in interest to understand whether a more interesting presentation engages participants more, thereby helping them achieve higher comprehension scores. In a third experiment, one video was presented in an attempt to use posture to predict comprehension rather than engagement. The fourth experiment used videos that varied in level of difficulty to determine whether a challenging topic, versus an easier topic, affects engagement. T-tests revealed that the more interesting TED Talk was rated as more engaging and, in the fourth study, that the more difficult video was rated as more engaging. Comparing average pupil sizes did not reveal significant differences that would relate to differences in the engagement scores, and average pupil dilation did not correlate with engagement. Analyzing posture through deep learning resulted in three accurate predictive models and a way to predict comprehension. Since engagement relates to learning, researchers and educators can benefit from accurate engagement measures.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013558
- Subject Headings
- Instruction, Effective teaching, Pupil (Eye), Posture, Deep learning, Engagement
- Format
- Document (PDF)
- Title
- How the Spatial Organization of Objects Affects Perceptual Processing of a Scene.
- Creator
- Rashford, Stacey, Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- How does the spatial organization of objects affect the perceptual processing of a scene? Surprisingly little research has explored this topic. A few studies have reported that, when simple, homogeneous stimuli (e.g., dots) are presented in a regular formation, they are judged to be more numerous than when presented in a random configuration (Ginsburg, 1976; 1978). However, these results may not apply to real-world objects. In the current study, fewer objects were believed to be on organized desks than on their disorganized equivalents. Objects that are organized may be more likely to become integrated, due to classic Gestalt principles. Consequently, visual search may be more difficult. Such object integration may diminish saliency, making objects less apparent and more difficult to find. This could explain why, in the present study, objects on disorganized desks were found faster.
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00004537
- Subject Headings
- Image analysis, Optical pattern recognition, Pattern recognition systems, Phenomenological psychology, Visual perception
- Format
- Document (PDF)
- Title
- Peripheral Object Recognition in Naturalistic Scenes.
- Creator
- Schlangen, Derrick, Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- Most of the human visual field falls in the periphery, and peripheral processing is important for normal visual functioning. Yet little is known about peripheral object recognition in naturalistic scenes and the factors that modulate this ability. We propose that a critical function of scene and object memory is to facilitate visual object recognition in the periphery. In the first experiment, participants identified objects in scenes across different levels of familiarity and contextual information within the scene. We found that familiarity with a scene resulted in a significant increase in the distance at which objects were recognized. Furthermore, we found that a semantically consistent scene increased the distance at which object recognition was possible, supporting the notion that contextual facilitation is possible in the periphery. In the second experiment, the preview duration of a scene was varied in order to examine how a scene representation is built and how memory of that scene and the objects within it contributes to object recognition in the periphery. We found that the closer participants fixated to the target object in the preview, the farther on average they recognized that object in the periphery. However, only a preview duration of 5000 ms produced significantly farther peripheral object recognition compared to not previewing the scene. Overall, these experiments introduce a novel research paradigm for object recognition in naturalistic scenes and demonstrate multiple factors that have systematic effects on peripheral object recognition.
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004669
- Subject Headings
- Context effects (Psychology), Human information processing, Optical pattern recognition, Pattern recognition systems, Recognition (Psychology), Visual perception
- Format
- Document (PDF)
- Title
- The Effect of Stereoscopic Cues on Multiple Object Tracking in a 3D Virtual Environment.
- Creator
- Oliveira, Steven Milanez, Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- Research on Multiple Object Tracking (MOT) has typically involved 2D displays in which stimuli move in a single depth plane. However, under natural conditions objects move in 3D, which adds complexity to tracking. According to the spatial interference model, tracked objects have an inhibitory surround that, when crossed, causes tracking errors. How do these inhibitory fields translate to 3D space? Does multiple object tracking operate on a 2D planar projection, or is it in fact 3D? To investigate this, we used a fully immersive virtual-reality environment in which participants were required to track 1 to 4 moving objects. We compared performance to a condition in which participants viewed the same stimuli on a computer screen with monocular depth cues. Results suggest that participants were more accurate in the VR condition than in the computer-screen condition. This demonstrates that interference is negligible when the objects are spatially distant yet proximate within the 2D projection.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004943
- Subject Headings
- Pattern perception., Virtual reality., Interactive multimedia., Computer simulation., Computer vision--Mathematical models., Automatic tracking--Mathematical models.
- Format
- Document (PDF)
- Title
- Informational Aspects of Audiovisual Identity Matching.
- Creator
- Mavica, Lauren Wood, Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- In this study, we investigated which informational aspects of faces could account for the ability to match an individual’s face to their voice using only static images. In each of the first six experiments, we simultaneously presented one voice recording along with two manipulated images of faces (e.g., the top half of the face, the bottom half of the face, etc.): a target face and a distractor face. The participant’s task was to choose which of the images they thought belonged to the same individual as the voice recording. The voices remained unmanipulated. In Experiment 7 we used eye tracking in order to determine which informational aspects of the models’ faces people fixate while performing the matching task, compared with where they fixate when there are no immediate task demands. We presented a voice recording followed by two static images, a target and a distractor face. The participant’s task was to choose which of the images they thought belonged to the same individual as the voice recording, while we tracked their total fixation duration. In the no-task, passive-viewing condition, we presented a male’s voice recording followed sequentially by two static images of female models, or vice versa, counterbalanced across participants. Results revealed significantly better-than-chance performance in the matching task when the images presented were the bottom half of the face, the top half of the face, the images inverted upside down, a low-pass filtered image of the face, and an image with the inner face completely blurred out. In Experiment 7 we found that, when completing the matching task, the time spent looking at the outer area of the face increased compared with when the images and voice recordings were passively viewed. When the images were passively viewed, the time spent looking at the inner area of the face increased. We concluded that the inner facial features (i.e., eyes, nose, and mouth) are not necessary informational aspects of the face for the matching ability, which likely relies on global features such as face shape and size.
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004688
- Subject Headings
- Biometric identification, Eye -- Movements, Nonverbal communication, Optical pattern recognition, Sociolinguistics, Visual perception
- Format
- Document (PDF)
- Title
- STREAMLINING CLINICAL DETECTION OF ALZHEIMER’S DISEASE USING ELECTRONIC HEALTH RECORDS AND MACHINE LEARNING TECHNIQUES.
- Creator
- Kleiman, Michael J., Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- Alzheimer’s disease is typically detected using a combination of cognitive-behavioral assessment exams and interviews of both the patient and a family member or caregiver, both administered and interpreted by a trained physician. This procedure, while standard in medical practice, can be time consuming and expensive for both the patient and the diagnostician, especially because proper training is required to interpret the collected information and determine an appropriate diagnosis. The use of machine learning techniques to augment diagnostic procedures has previously been examined in a limited capacity, but to date no research has examined real-world medical applications of predictive analytics for health records and cognitive exam scores. This dissertation examines the efficacy of detecting cognitive impairment due to Alzheimer’s disease using machine learning, including multi-modal neural network architectures, with a real-world clinical dataset used to determine the accuracy and applicability of the generated models. An in-depth analysis of each type of data (e.g., cognitive exams, questionnaires, demographics) as well as the cognitive domains examined (e.g., memory, attention, language) is performed to identify the most useful targets; cognitive exams and questionnaires were found to be the most useful features, and short-term memory, attention, and language were found to be the most important cognitive domains. In an effort to reduce medical costs and streamline procedures, optimally predictive and efficient groups of features were identified and selected, with the best performing and most economical group containing only three questions and one cognitive exam component, producing an accuracy of 85%. The most effective diagnostic scoring procedure was examined, with simple threshold counting based on medical documentation identified as the most useful. Overall, predictive analysis found that Alzheimer’s disease can be detected most accurately using a bimodal multi-input neural network model with separated cognitive domains and questionnaires, with a detection accuracy of 88% on the real-world testing set, and that the technique of analyzing domains separately significantly improves model efficacy compared with models that combine them. (An illustrative sketch of such a multi-input model follows this record.)
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013326
- Subject Headings
- Alzheimer's disease, Electronic Health Records, Machine learning
- Format
- Document (PDF)
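The abstract above describes a bimodal, multi-input neural network that combines cognitive-exam domain scores with questionnaire responses. The sketch below is a hypothetical Keras illustration of that general architecture, not the dissertation's actual model; the layer sizes, feature counts, variable names, and training settings are all assumptions.

```python
# Hypothetical sketch of a bimodal, multi-input network: one branch for
# cognitive-exam domain scores and one for questionnaire responses, merged
# into a single binary impaired/unimpaired prediction.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_bimodal_model(n_domain_features: int = 6, n_questionnaire_items: int = 3) -> Model:
    # Branch 1: cognitive-exam scores grouped by domain (e.g., memory, attention, language)
    domains_in = layers.Input(shape=(n_domain_features,), name="cognitive_domains")
    d = layers.Dense(16, activation="relu")(domains_in)

    # Branch 2: patient/caregiver questionnaire responses
    quest_in = layers.Input(shape=(n_questionnaire_items,), name="questionnaires")
    q = layers.Dense(8, activation="relu")(quest_in)

    # Merge the two modalities and predict probability of cognitive impairment
    merged = layers.concatenate([d, q])
    h = layers.Dense(16, activation="relu")(merged)
    out = layers.Dense(1, activation="sigmoid", name="impairment_probability")(h)

    model = Model(inputs=[domains_in, quest_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage (with hypothetical arrays X_domains, X_quest, y of matching length):
# model = build_bimodal_model()
# model.fit([X_domains, X_quest], y, validation_split=0.2, epochs=50, batch_size=32)
```

Keeping the domain scores and questionnaire items in separate input branches mirrors the abstract's point that analyzing domains separately, rather than pooling everything into one feature vector, improved model efficacy.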
- Title
- THE INFLUENCE OF CONTEXT AND PERCEPTUAL LOAD ON OBJECT RECOGNITION.
- Creator
- Islam, Mohammed, Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
- Forster and Lavie (2008) and Lavie, Lin, Zokaei, and Thoma (2009) demonstrated that meaningful stimuli, such as objects, are ignored under conditions of high perceptual load but not low load. However, objects are seldom presented without context in the real world. Given that context can reduce the threshold for object recognition (Barenholtz, 2013), is it possible for context to reduce the processing load of objects such that they can be processed under high load? In the first experiment, I attempted to obtain similar findings to the aforementioned studies by replicating their paradigm with photographs of real-world objects. The findings suggested that objects can cause distractor interference under high-load conditions, but not low-load conditions. These findings are the opposite of what the perceptual load literature suggests (e.g., Lavie, 1995). However, they are aligned with a two-stage dilution model of attention in which information is first processed in parallel and then selectively (Wilson, Muroi, & MacLeod, 2011). Experiment 2 assessed whether this effect was specific to semantic objects by introducing meaningless, abstract objects. The results suggest that the dilution effect was not due to the semantic features of objects. The third experiment assessed the influence of context on objects under load. The results showed an elimination of all interference effects in both the high- and low-load conditions. Comparisons of scene-object congruency revealed no influence of semantic information from scenes. It appears that the presentation of a visual stimulus prior to the flanker task diluted attention such that the distractor effects previously observed in the high-load condition were minimized. Thus, it does not appear that context reduced the threshold for object recognition under load. All three experiments demonstrated strong evidence for the dilution approach to attention over perceptual load models.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013319
- Subject Headings
- Perception--Research, Selective attention, Form perception, Context effects (Psychology)
- Format
- Document (PDF)
- Title
- COMPARISON OF CLASSIFYING HUMAN ACTIONS FROM BIOLOGICAL MOTION WITH ARTIFICIAL NEURAL NETWORKS.
- Creator
- Wong, Rachel, Barenholtz, Elan, Florida Atlantic University, Department of Psychology, Charles E. Schmidt College of Science
- Abstract/Description
- The ability to recognize human actions is essential for individuals navigating their daily lives. Biological motion is the primary mechanism people use to recognize actions quickly and efficiently, but its precision can vary. The development of Artificial Neural Networks (ANNs) has the potential to enhance the efficiency and effectiveness of common human tasks, including action recognition. However, the performance of ANNs in action recognition depends on the type of model used. This study aimed to improve the accuracy of ANNs in action classification by incorporating biological motion information into the input conditions. The study used the UCF Crime dataset, a dataset containing surveillance videos of normal and criminal activity, and extracted biological motion information with OpenPose, a pose-estimation ANN. The OpenPose output was used to create four condition types (image only, image with biological motion, biological motion only, and coordinates only), and either a 3-Dimensional Convolutional Neural Network (3D CNN) or a Gated Recurrent Unit (GRU) network was used to classify the actions. Overall, the study found that including biological motion information in the input conditions led to higher accuracy regardless of the number of action categories in the dataset. Moreover, the GRU model using the ‘coordinates only’ condition had the best accuracy of all the action classification models. These findings suggest that incorporating biological motion into input conditions and using numerical input data can benefit the development of accurate action classification models using ANNs. (An illustrative sketch of a coordinates-only GRU classifier follows this record.)
- Date Issued
- 2023
- PURL
- http://purl.flvc.org/fau/fd/FA00014164
- Subject Headings
- Neural networks (Computer science), Human activity recognition, Artificial intelligence
- Format
- Document (PDF)
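The abstract above reports that a GRU trained on the 'coordinates only' condition, i.e., raw pose keypoints rather than images, performed best. The sketch below is a hypothetical Keras version of such a classifier; the sequence length, keypoint count, class count, and layer sizes are assumptions and not taken from the thesis.

```python
# Hypothetical sketch of the 'coordinates only' condition: classify an action from a
# sequence of 2D body keypoints (such as OpenPose output) with a GRU network.
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FRAMES = 64        # frames per clip (assumed)
N_KEYPOINTS = 25     # OpenPose BODY_25-style keypoint count (assumed)
N_CLASSES = 14       # number of action categories (assumed)

def build_gru_classifier() -> Model:
    # Input: one (x, y) coordinate pair per keypoint per frame, flattened per frame
    seq_in = layers.Input(shape=(N_FRAMES, N_KEYPOINTS * 2), name="keypoint_sequence")
    x = layers.Masking(mask_value=0.0)(seq_in)       # ignore zero-padded / missing frames
    x = layers.GRU(128, return_sequences=True)(x)    # temporal dynamics of the pose
    x = layers.GRU(64)(x)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(N_CLASSES, activation="softmax", name="action")(x)

    model = Model(seq_in, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (with a hypothetical array X of shape [n_clips, N_FRAMES, N_KEYPOINTS * 2]
# and integer labels y): build_gru_classifier().fit(X, y, epochs=30, batch_size=16)
```

Feeding coordinates rather than pixels keeps the input purely numerical and low-dimensional, which is consistent with the abstract's conclusion that numerical-format input data can benefit action classification.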
- Title
- In Pursuit of Perceptions: Priming Intervention during a Goal-Directed Behavioral Task.
- Creator
- Osei, Peter Claudius, Barenholtz, Elan, Florida Atlantic University, Department of Psychology, Charles E. Schmidt College of Science
- Abstract/Description
- Learning to effectively execute goal-directed tasks generally requires guidance from knowledgeable experts who can direct the performer’s attention toward important environmental features. However, specifying the optimal attentional strategies is difficult due to the subjective nature of perceptions and the complexity of the underlying neural processes. The current skill-acquisition literature emphasizes action-based contingencies through Predictive and Ecological models when examining attentional processes, while Perceptual Control Theory advocates for perceptual-based mechanisms. To evaluate the efficacy of these models, this study implicitly primed one hundred fifteen participants to focus on action-based or perceptual-based aspects during an interceptive task. It was predicted that the perceptual-based priming condition would result in faster learning and greater resilience to environmental disturbances. However, the highly variable results did not show significant differences in learning rate or resilience between the action-based and perceptual-based conditions. Ultimately, the variability in the findings suggests that superior performance depends on numerous factors unique to each performer. Consequently, instructional methods cannot rely on a single optimal attentional strategy for gathering environmental information. Instead, the dynamic interplay between the individual and the environment must be considered to foster the skill development of novice performers.
- Date Issued
- 2023
- PURL
- http://purl.flvc.org/fau/fd/FA00014290
- Subject Headings
- Perception, Attention, Cognitive psychology--Research
- Format
- Document (PDF)
- Title
- DIMINISHING RETURNS IN COLOR PERCEPTION.
- Creator
- Teti, Emily S., Barenholtz, Elan, Florida Atlantic University, Department of Psychology, Charles E. Schmidt College of Science
- Abstract/Description
- It is accepted that a perceptually uniform color space cannot be modeled with Euclidean geometry. The next most complex geometry is Riemannian, a geometry with inherent curvature. Riemann, Schrödinger, and Helmholtz introduced and strengthened the theory that a Riemannian geometry can be used to model an ideal color space, to borrow language from Judd. While the addition of curvature in color space increases its ability to capture human color perception, such a geometry is insufficient if small distances along a shortest path do not add up to the length of the entire path. This phenomenon is referred to as diminishing returns and would necessitate a more complicated, non-Riemannian geometry to accurately quantify human color perception. This work includes (1) the invention and validation of new analysis techniques to investigate the existence of diminishing returns, (2) empirical evidence for diminishing returns in color space that varies throughout the current standard space (CIELAB), and (3) evidence suggesting that paths through perceptual color space may still coincide with paths through the induced Riemannian metric. The new analysis methods are shown to be robust to increased difficulty of a two-alternative forced-choice (2AFC) task and to a limited understanding of how to quantify stimuli. Using a 2AFC task and the new methods, strong evidence for diminishing returns in the grayscale is demonstrated. These data were collected using a crowd-sourced platform with very little experimental control over how the stimuli are presented, yet the results were validated using a highly controlled in-person study. A follow-up study also suggests that diminishing returns exist throughout color space and to varying degrees. Lastly, shortest paths in perceived color space were investigated to determine whether diminishing returns, and hence a non-Riemannian perceptual color space, affect only the perceived size of differences or also the shortest paths themselves in color space. This study found that, although there was weak evidence that the paths do not coincide, the effect was smaller than a response bias. Therefore, we did not find evidence that shortest paths in color space were impacted by the non-Riemannianness of human color perception. (The diminishing-returns condition is stated formally after this record.)
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00013887
- Subject Headings
- Color Perception, Color vision--Research, Diminishing returns
- Format
- Document (PDF)
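To make the diminishing-returns property described in the preceding abstract concrete, the statement below is a sketch using assumed notation (the symbol Δ for perceived color difference is not taken from the source): a Riemannian, length-metric model forces perceived differences to add up exactly along a shortest path, whereas diminishing returns is strict subadditivity along that same path, which is why it forces a non-Riemannian geometry.

```latex
% Sketch of the diminishing-returns condition; the notation \Delta(x,y) for the
% perceived difference between colors x and y is an assumption, not the author's.
% Let B lie on the perceptual shortest path between colors A and C.
\[
  \text{Riemannian (additive) model: } \Delta(A,C) = \Delta(A,B) + \Delta(B,C),
  \qquad
  \text{diminishing returns: } \Delta(A,C) < \Delta(A,B) + \Delta(B,C).
\]
```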