Current Search: Barenholtz, Elan
- Title
- TOWARDS SELF-ORGANIZED BRAIN: TOPOLOGICAL REINFORCEMENT LEARNING WITH GRAPH CELLULAR AUTOMATA.
- Creator
- Ray, Subhosit, Barenholtz, Elan, Florida Atlantic University, Department of Psychology, Charles E. Schmidt College of Science
- Abstract/Description
-
Automating the design of optimal neural architectures is an under-explored domain; most deep learning work bases its architectures on multiplexing different well-known architectures together, guided by past studies. Even after extensive research, the deployed algorithms may work only for specific domains, provide a minor boost, or even underperform compared with previous state-of-the-art implementations. One approach, neural architecture search, requires generating a pool of network topologies based on well-known kernel and activation functions. However, iteratively training the generated topologies and creating newer topologies from the best-performing ones is computationally expensive and out of scope for most academic labs. In addition, the search space is constrained to a predetermined dictionary of kernel functions used to generate the topologies. This thesis treats a neural network as a weighted directed graph, incorporating the message-passing ideas of graph neural networks to propagate information from the input to the output nodes. We show that such a method relieves the dependency on a search space constrained to well-known kernel functions and extends to arbitrary graph structures. We test our algorithms in reinforcement learning (RL) environments and explore several optimization strategies, such as graph attention and PPO, to solve the problem. We improve upon the slow convergence of PPO by using a Neural Cellular Automata (CA) approach as a self-organizing overhead for generating the adjacency matrices of network topologies. This exploration of indirect encoding (an abstraction of DNA in neuro-developmental biology) yielded an algorithm with much faster convergence. In addition, we introduce 1D-involution as a way to implement message passing across the nodes of a graph, which further reduces the parameter space to a significant degree without hindering performance.
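The abstract's central move, treating a network topology as a weighted directed graph whose adjacency matrix drives message passing from input to output nodes, can be sketched in a few lines. Everything below (the function name, the tanh nonlinearity, the choice of input and output nodes) is an illustrative assumption, not the thesis's implementation:

```python
import numpy as np

def message_pass(adjacency: np.ndarray, h: np.ndarray, steps: int = 3) -> np.ndarray:
    """Propagate node states over a weighted directed graph: at each step,
    a node's new state is a nonlinearity applied to the weighted sum of
    its in-neighbors' states (adjacency[i, j] is the weight of edge i -> j)."""
    for _ in range(steps):
        h = np.tanh(adjacency.T @ h)
    return h

# Toy usage: 5 nodes, inject input at node 0, read output at node 4.
rng = np.random.default_rng(0)
A = rng.normal(scale=0.5, size=(5, 5)) * (rng.random((5, 5)) < 0.4)
h = np.zeros(5)
h[0] = 1.0
print(message_pass(A, h)[4])
```

Under this reading, a generator (the Neural CA in the thesis) only has to emit an adjacency matrix; no dictionary of kernel functions constrains the topology.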
- Date Issued
- 2024
- PURL
- http://purl.flvc.org/fau/fd/FA00014436
- Subject Headings
- Neural networks (Computer science), Reinforcement learning, Cellular automata
- Format
- Document (PDF)
- Title
- A theory for the visual perception of object motion.
- Creator
- Norman, Joseph W., Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Center for Complex Systems and Brain Sciences
- Abstract/Description
-
The perception of visual motion is an integral aspect of many organisms' engagement with the world. In this dissertation, a theory for the perception of visual object-motion is developed. Object-motion perception is distinguished from objectless-motion perception both experimentally and theoretically. A continuous-time dynamical neural model is developed in order to generalize the findings and provide a theoretical framework for continued refinement of a theory for object-motion perception. Theoretical implications as well as testable predictions of the model are discussed.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004221
- Subject Headings
- Human information processing, Neurophysiology, Perceptual motor processes, Visual perception
- Format
- Document (PDF)
- Title
- Potential stimulus contributions to counterchange determined motion perception.
- Creator
- Park, Cynthia Louise Smith, Hock, Howard S., Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
-
Prior research has explored the counterchange model of motion detection in terms of counterchanging information that originates in the stimulus foreground (i.e., the objects). These experiments explore counterchange apparent motion with a new apparent-motion stimulus in which the counterchanging information required for apparent motion is provided by altering the luminance of the background. It was found that apparent motion produced by background-counterchange requires longer frame durations and lower levels of average stimulus contrast compared with foreground-counterchange. Furthermore, inter-object distance does not influence apparent motion produced by background-counterchange to the degree that it influences apparent motion produced by foreground-counterchange.
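For readers unfamiliar with the counterchange model, its detection principle (motion is signaled by a decrease of activation at one location paired with an increase at another, whether the change originates in the objects or, as here, the background) can be sketched minimally. The threshold and two-frame representation below are illustrative assumptions, not the stimuli used in these experiments:

```python
import numpy as np

def counterchange_motion(frame1: np.ndarray, frame2: np.ndarray, thresh: float = 0.1):
    """Signal motion from location i to location j when activation
    decreases at i and increases at j between two frames."""
    delta = frame2 - frame1
    decreases = np.flatnonzero(delta < -thresh)  # candidate motion sources
    increases = np.flatnonzero(delta > thresh)   # candidate motion targets
    return [(int(i), int(j)) for i in decreases for j in increases if i != j]

# Activation shifts from location 0 to location 2 across frames:
print(counterchange_motion(np.array([1.0, 0.0, 0.0]),
                           np.array([0.2, 0.0, 0.9])))  # [(0, 2)]
```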
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004313
- Subject Headings
- Motion perception (Vision), Perceptual motor processes, Visual analysis, Visual discrimination, Visual pathways, Visual perception
- Format
- Document (PDF)
- Title
- Self-Organization of Object-Level Visual Representations via Enforcement of Structured Sparsity in Deep Neural Networks.
- Creator
- LaCombe, Daniel C. Jr., Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Psychology
- Abstract/Description
-
A hypothesis for the self-organization of receptive fields throughout the hierarchy of biological vision is empirically tested using simulations of deep artificial neural networks. Findings from many fields on the topographic organization of receptive fields throughout the visual hierarchy remain disconnected. Although extensive simulation research has modeled topographic organization in early visual areas, little to no research has investigated such organization in higher visual areas. We propose that parsimonious structured-sparsity principles, which permit the learning of topographic receptive fields in simulated visual areas, are sufficient for the emergence of a semantic topology in the object-level representations of a deep neural network. These findings suggest wide-reaching implications for the functional organization of the biological visual system, and we conjecture that such results, observed in nature, could serve as the foundation for unsupervised learning of taxonomic and semantic relations between entities in the world.
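As one concrete (and purely illustrative) instance of a structured-sparsity penalty in this spirit, a group lasso over spatial neighborhoods of units on a 2D grid applies an L2 norm within each neighborhood and an L1 norm across neighborhoods, encouraging co-active units to cluster topographically. The grouping, weighting, and function name below are assumptions, not the dissertation's actual objective:

```python
import torch
import torch.nn.functional as F

def topographic_sparsity(acts: torch.Tensor, grid: int, k: int = 3) -> torch.Tensor:
    """Overlapping group lasso on a grid x grid sheet of units:
    L2 within each k x k neighborhood, L1 across neighborhoods.
    acts: (batch, grid * grid) layer activations."""
    a2 = acts.view(-1, 1, grid, grid).pow(2)
    # Summed squared activity of every overlapping k x k neighborhood:
    group_energy = F.avg_pool2d(a2, kernel_size=k, stride=1, padding=k // 2) * (k * k)
    return group_energy.clamp_min(1e-12).sqrt().sum(dim=(1, 2, 3)).mean()

# Usage sketch: loss = task_loss + 1e-3 * topographic_sparsity(h, grid=16)
```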
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004965, http://purl.flvc.org/fau/fd/FA00004955
- Subject Headings
- Dissertations, Academic -- Florida Atlantic University
- Format
- Document (PDF)
- Title
- Sparse Coding and Compressed Sensing: Locally Competitive Algorithms and Random Projections.
- Creator
- Hahn, William E., Barenholtz, Elan, Florida Atlantic University, Charles E. Schmidt College of Science, Center for Complex Systems and Brain Sciences
- Abstract/Description
-
For an 8-bit grayscale image patch of size n x n, the number of distinguishable signals is 256^(n^2). Natural images (e.g., photographs of a natural scene) comprise a very small subset of these possible signals. Traditional image and video processing relies on band-limited or low-pass signal models. In contrast, we explore the observation that most signals of interest are sparse, i.e., in a particular basis most of the expansion coefficients will be zero. Recent developments in sparse modeling and L1 optimization have enabled extraordinary applications such as the single-pixel camera, as well as computer vision systems that can exceed human performance. Here we present a novel neural network architecture combining a sparse filter model and locally competitive algorithms (LCAs), and demonstrate the network's ability to classify human actions from video. Sparse filtering is an unsupervised feature-learning algorithm designed to optimize the sparsity of the feature distribution directly, without needing to model the data distribution. LCAs are defined by a system of differential equations in which the initial conditions define an optimization problem and the dynamics converge to a sparse decomposition of the input vector. We applied this architecture to train a classifier on categories of motion in human-action videos. Inputs to the network were small 3D patches taken from frame differences in the videos. Dictionaries were derived for each action class, and activation levels for each dictionary were then assessed during reconstruction of a novel test patch. We discuss how this sparse-modeling approach provides a natural framework for multi-sensory and multimodal data processing, including RGB video, RGBD video, hyper-spectral video, and stereo audio/video streams.
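Two points in this abstract are easy to make concrete. The signal count: for n = 8, 256^(8^2) = 2^512, roughly 1.3 x 10^154 distinguishable patches. And the LCA dynamics: the sketch below is the standard Rozell-style formulation (feed-forward drive, a leak term, and lateral inhibition through the dictionary's Gram matrix, with a soft threshold); the step size and parameter values are illustrative, not taken from the dissertation:

```python
import numpy as np

def lca(x: np.ndarray, Phi: np.ndarray, lam=0.1, tau=10.0, steps=200) -> np.ndarray:
    """Locally Competitive Algorithm: membrane potentials u follow
    du/dt = (1/tau) * (Phi^T x - u - (Phi^T Phi - I) a), with a = T_lam(u),
    and the thresholded coefficients a converge to a sparse code of x."""
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    b = Phi.T @ x                            # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        u += (b - u - G @ soft(u)) / tau     # forward Euler step, dt = 1
    return soft(u)
```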
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004713
- Subject Headings
- Artificial intelligence, Expert systems (Computer science), Image processing -- Digital techniques -- Mathematics, Sparse matrices
- Format
- Document (PDF)
- Title
- COMPUTATION IN SELF-ATTENTION NETWORKS.
- Creator
- Morris, Paul, Barenholtz, Elan, Florida Atlantic University, Center for Complex Systems and Brain Sciences, Charles E. Schmidt College of Science
- Abstract/Description
-
Neural network models with many tunable parameters can be trained to approximate functions that transform a source distribution, or dataset, into a target distribution of interest. In contrast to low-parameter models with simple governing equations, the dynamics of the transformations learned in deep neural network models are abstract, and the correspondence of dynamical structure to predictive function is opaque. Despite their “black box” nature, neural networks converge to functions that implement complex tasks in computer vision, Natural Language Processing (NLP), and the sciences when trained on large quantities of data. Where traditional machine learning approaches rely on clean datasets with appropriate features, sample densities, and label distributions to mitigate unwanted bias, modern Transformer neural networks with self-attention mechanisms use Self-Supervised Learning (SSL) to pretrain on large, unlabeled datasets scraped from the internet without concern for data quality. SSL tasks have been shown to learn functions that match or outperform their supervised-learning counterparts in many fields, even without task-specific fine-tuning. The recent paradigm shift to pretraining large models with massive amounts of unlabeled data has given credibility to the hypothesis that SSL pretraining can produce functions that implement generally intelligent computations.
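Since the dissertation's subject is the computation that self-attention performs, it helps to state that computation explicitly. This is the standard scaled dot-product attention from the Transformer literature, not code drawn from the dissertation:

```python
import numpy as np

def self_attention(X: np.ndarray, Wq: np.ndarray, Wk: np.ndarray, Wv: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product attention: each token's output is a
    mixture of all tokens' values, weighted by query-key similarity."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (tokens, tokens)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # row-wise softmax
    return w @ V
```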
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00014061
- Subject Headings
- Neural networks (Computer science), Machine learning, Self-supervised learning
- Format
- Document (PDF)
- Title
- CRACKING THE SPARSE CODE: LATERAL COMPETITION FORMS ROBUST V1-LIKE REPRESENTATIONS IN CONVOLUTIONAL NEURAL NETWORKS.
- Creator
- Teti, Michael, Barenholtz, Elan, Hahn, William, Florida Atlantic University, Center for Complex Systems and Brain Sciences, Charles E. Schmidt College of Science
- Abstract/Description
-
Although state-of-the-art Convolutional Neural Networks (CNNs) are often viewed as a model of biological object recognition, they lack many computational and architectural motifs that are postulated to contribute to robust perception in biological neural systems. For example, modern CNNs lack lateral connections, which greatly outnumber feed-forward excitatory connections in primary sensory cortical areas and mediate feature-specific competition between neighboring neurons to form robust, sparse representations of sensory stimuli for downstream tasks. In this thesis, I hypothesize that CNN layers equipped with lateral competition better approximate the response characteristics and dynamics of neurons in the mammalian primary visual cortex, leading to increased robustness under noise and/or adversarial attacks relative to current robust CNN layers. To test this hypothesis, I develop a new class of CNNs called LCANets, which simulate recurrent, feature-specific lateral competition between neighboring neurons via a sparse coding model termed the Locally Competitive Algorithm (LCA). I first perform an analysis of the response properties of LCA and show that sparse representations formed by lateral competition more accurately mirror response characteristics of primary visual cortical populations and are more useful for downstream tasks like object recognition than previous sparse CNNs, which approximate competition with winner-take-all mechanisms implemented via thresholding.
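The thresholding baseline contrasted with here is essentially a single feed-forward soft threshold; the short sketch below (illustrative, not the thesis's code) makes the difference from the LCA sketch given earlier visible. LCA adds the recurrent term -(Phi^T Phi - I) a, so strongly correlated features inhibit one another instead of all activating at once:

```python
import numpy as np

def threshold_code(x: np.ndarray, Phi: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Competition approximated by thresholding alone: project the input
    onto the dictionary once and soft-threshold, with no lateral dynamics."""
    u = Phi.T @ x
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```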
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00014050
- Subject Headings
- Neural networks (Computer science), Machine learning, Computer vision
- Format
- Document (PDF)
- Title
- BEHAVIORAL ANALYSIS OF DEEP CONVOLUTIONAL NEURAL NETWORKS FOR IMAGE CLASSIFICATION.
- Creator
- Clark, James Alex, Barenholtz, Elan, Florida Atlantic University, Center for Complex Systems and Brain Sciences, Charles E. Schmidt College of Science
- Abstract/Description
-
Within deep CNNs there is great excitement over breakthroughs in network performance on benchmark datasets such as ImageNet. Around the world, competitive teams work on new ways to innovate and modify existing networks, or to create new ones that can reach ever-higher accuracy levels. We believe that this important research must be supplemented with research into the computational dynamics of the networks themselves. We present research into network behavior as it is affected by: variations in the number of filters per layer; pruning of filters during and after training; collapsing the weight space of the trained network using a basic quantization; and the effect of image size and input-layer stride on training time and test accuracy. We provide insights into how the total number of updatable parameters can affect training time and accuracy, and how “time per epoch” and “number of epochs” affect network training time. We conclude with statistically significant models that allow us to predict training time as a function of the total number of updatable parameters in the network.
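The “total number of updatable parameters” that those predictive models take as input is straightforward to count. A minimal sketch follows; the layer shapes are made up for illustration:

```python
def conv_params(in_ch: int, out_ch: int, k: int, bias: bool = True) -> int:
    """Updatable parameters in one 2D convolutional layer: each of the
    out_ch filters has in_ch * k * k weights plus an optional bias."""
    return out_ch * (in_ch * k * k + (1 if bias else 0))

# Toy three-layer stack showing how filter counts drive the total:
layers = [(3, 32, 3), (32, 64, 3), (64, 128, 3)]  # (in_ch, out_ch, kernel)
print(sum(conv_params(i, o, k) for i, o, k in layers))  # 93248
```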
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00013940
- Subject Headings
- Neural networks (Computer science), Image processing
- Format
- Document (PDF)
- Title
- PRESERVING KNOWLEDGE IN SIMULATED BEHAVIORAL ACTION LOOPS.
- Creator
- St.Clair, Rachel, Barenholtz, Elan, Hahn, William, Florida Atlantic University, Center for Complex Systems and Brain Sciences, Charles E. Schmidt College of Science
- Abstract/Description
-
One basic goal of artificial learning systems is the ability to learn continually throughout the system's lifetime. Transitioning between tasks and re-deploying prior knowledge is thus a desired feature of artificial learning. However, in deep-learning approaches the problem of catastrophic forgetting of prior knowledge persists. As a field, we want to solve the catastrophic forgetting problem without requiring exponential computation or time, while demonstrating real-world relevance. This work proposes a novel model that uses an evolutionary algorithm, similar to a meta-learning objective, fitted with a resource-constraint metric. Four reinforcement learning environments are considered; they share the concept of depth, although the collection of environments is multimodal. The system shows preservation of some knowledge in sequential task learning and protection against catastrophic forgetting in deep neural networks.
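One plausible reading of “an evolutionary algorithm fitted with a resource-constraint metric” is a selection criterion that trades task reward against resource cost. The loop below is a generic sketch under that assumption; the fitness form, mutation operator, and all parameters are illustrative, not the model proposed here:

```python
import random

def mutate(genome: list, sigma: float = 0.1) -> list:
    """Gaussian perturbation of a real-valued genome."""
    return [g + random.gauss(0.0, sigma) for g in genome]

def evolve(population: list, reward_fn, cost_fn, penalty=0.1, generations=50):
    """Keep the half of the population with the best penalized fitness
    (task reward minus a resource penalty) and refill it with mutants."""
    for _ in range(generations):
        population.sort(key=lambda g: reward_fn(g) - penalty * cost_fn(g),
                        reverse=True)
        parents = population[: len(population) // 2]
        population = parents + [mutate(p) for p in parents]
    return population[0]
```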
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00013896
- Subject Headings
- Artificial intelligence, Deep learning (Machine learning), Reinforcement learning, Neural networks (Computer science)
- Format
- Document (PDF)