Current Search: Reinforcement learning
- Title
- KINOVA ROBOTIC ARM MANIPULATION WITH PYTHON PROGRAMMING.
- Creator
- Veit, Cameron, Zhong, Xiangnan, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
As artificial intelligence (AI), such as reinforcement learning (RL), has continued to grow, using AI to have robotic arms autonomously complete tasks has become an increasingly popular topic. Robotic arms have recently seen a drastic spike in innovation, with new arms being developed for a variety of tasks, both menial and complicated. One robotic arm recently developed for everyday use in close proximity to the user is the Kinova Gen 3 Lite, but limited formal research has been conducted on controlling this arm, either with an AI or in general. Therefore, this thesis covers the implementation of Python programs for controlling the physical robotic arm, as well as the use of a simulation to train an RL-based AI compatible with the Kinova Gen 3 Lite. Additionally, the purpose of this research is to identify and solve the difficulties arising in both the physical setup and the simulation, and to examine the impact of the learning parameters on the robotic arm AI. The issues in connecting two Kinova Gen 3 Lites to one computer at once are also examined. The thesis details the goals of the Python programs created to move the physical robotic arm, as well as the overall setup and goal of the robotic arm simulation for the RL method. In particular, the Python programs for the physical robotic arm pick up an object and place it at a different location, identifying in the process a method to prevent the gripper from crushing an object without a tactile sensor. The thesis also covers the effect of various learning parameters on the accuracy and steps-to-goal curves of an RL method designed to make a Kinova Gen 3 Lite grab an object in simulation. In particular, a neural network implementation of the RL method is trained with one learning parameter changed at a time relative to the optimal set. The neural network is trained using Python (Anaconda distribution) to control a Kinova Gen 3 Lite robotic arm model in a simulation built with the Unity engine.
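The crush-avoidance idea mentioned in the abstract (stopping the gripper without a tactile sensor) can be sketched as a stopping rule on motor current. The `read_motor_current` and `step_close` callables below are hypothetical stand-ins for the real arm's API, and the threshold rule is one plausible realization, not necessarily the thesis's method; a mock replaces the hardware:

```python
def close_gripper(read_motor_current, step_close, current_limit=0.8, max_steps=100):
    """Close a gripper in small increments, stopping when the motor current
    rises above a threshold -- a proxy for contact force when no tactile
    sensor is available. Both callables are hypothetical stand-ins for a
    real arm's control API."""
    for step in range(max_steps):
        if read_motor_current() > current_limit:
            return step          # contact detected: stop before crushing
        step_close()             # close by one small increment
    return max_steps             # fully closed without hitting the limit

# Mock hardware: current stays low while the fingers move freely, then jumps
# once they contact the object at closing increment 45.
state = {"pos": 0}
def mock_current():
    return 0.1 if state["pos"] < 45 else 1.0
def mock_step():
    state["pos"] += 1

steps_taken = close_gripper(mock_current, mock_step)
```

With the mock above, the loop stops at the increment where the simulated current jumps, leaving the object gripped but not crushed.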
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00014022
- Subject Headings
- Robotics, Artificial intelligence, Reinforcement learning
- Format
- Document (PDF)
- Title
- FLOW-MEDIATED NAVIGATION AND COORDINATION OF ARTIFICIAL SWIMMERS USING DEEP REINFORCEMENT LEARNING.
- Creator
- Nair, Aishwarya, Verma, Siddhartha, Florida Atlantic University, Department of Ocean and Mechanical Engineering, College of Engineering and Computer Science
- Abstract/Description
-
Aquatic organisms achieve swimming efficiencies much higher than any underwater vehicle designed by humans. This is mainly due to the adaptive swimming patterns they display in response to changes in their environment and their behaviors, i.e., hunting, fleeing, or foraging. In this work, we explore these adaptations from a hydrodynamics standpoint, using numerical simulations to emulate self-propelled artificial swimmers in various flow fields. Apart from still or uniform flow, the flow fields most likely to be encountered by swimmers are those formed by the wakes of solid objects, such as the roots of aquatic vegetation or underwater structures. Therefore, a simplified bio-inspired design of porous structures consisting of nine cylinders was considered to identify arrangements that could produce wakes of varying velocities and enstrophy, which in turn might provide beneficial environments for underwater swimmers. These structures were analyzed using a combination of numerical simulations and experiments, and the underlying flow physics was examined using a variety of data-analysis techniques. Subsequently, in order to recreate the adaptations of natural swimmers in different flow regimes, artificial swimmers were positioned in each of these types of flow fields and trained, using deep reinforcement learning, to optimize their movements and maximize swimming efficiency. These artificial swimmers use a sensory input system that allows them to detect the velocity field and the pressure on the surface of their body, similar to the lateral-line sensing system of biological fish. The results demonstrate that the information gleaned from the simplified lateral-line system was sufficient for the swimmer to replicate naturally occurring behaviors such as Kármán gaiting.
The phenomenon of schooling in underwater organisms is similarly thought to provide opportunities for swimmers to increase their energy efficiency, along with other associated benefits. Thus, multiple swimmers were trained using multi-agent reinforcement learning to discover optimal swimming patterns at the group level as well as the individual level.
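The lateral-line-style sensory input described in this abstract can be illustrated with a small sketch: flow velocity is sampled at sensor points along the body to form the observation vector fed to the learning agent. The analytic wake field below is an assumption standing in for the CFD solution a study like this would actually use:

```python
import math

def wake_velocity(x, y, U=1.0, A=0.3, wavelength=2.0):
    """Analytic stand-in for a vortex-wake velocity field: a uniform stream
    plus a transverse sinusoidal component that decays away from the wake
    centerline (y = 0). A real study would sample a CFD solution instead."""
    v = A * math.sin(2 * math.pi * x / wavelength) * math.exp(-y * y)
    return U, v

def lateral_line_observation(body_points):
    """Build the swimmer's sensory input: flow velocity sampled at a set of
    points along its body, mimicking a fish's lateral-line system."""
    obs = []
    for (x, y) in body_points:
        u, v = wake_velocity(x, y)
        obs.extend([u, v])
    return obs

# Five sensor locations along a swimmer sitting just off the wake centerline.
sensors = [(0.25 * i, 0.1) for i in range(5)]
observation = lateral_line_observation(sensors)
```

The resulting flat vector (two velocity components per sensor) is the kind of low-dimensional state a deep RL policy can consume directly.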
- Date Issued
- 2024
- PURL
- http://purl.flvc.org/fau/fd/FA00014413
- Subject Headings
- Reinforcement learning, Hydrodynamics, Computational fluid dynamics
- Format
- Document (PDF)
- Title
- PRESERVING KNOWLEDGE IN SIMULATED BEHAVIORAL ACTION LOOPS.
- Creator
- St.Clair, Rachel, Barenholtz, Elan, Hahn, William, Florida Atlantic University, Center for Complex Systems and Brain Sciences, Charles E. Schmidt College of Science
- Abstract/Description
-
One basic goal of artificial learning systems is the ability to learn continually throughout the system's lifetime. Transitioning between tasks and re-deploying prior knowledge is thus a desired feature of artificial learning. However, in deep-learning approaches, the problem of catastrophic forgetting of prior knowledge persists. As a field, we want to solve the catastrophic forgetting problem without requiring exponential computation or time, while demonstrating real-world relevance. This work proposes a novel model that uses an evolutionary algorithm, similar to a meta-learning objective, fitted with resource-constraint metrics. Four reinforcement learning environments are considered, sharing the concept of depth although the collection of environments is multi-modal. The system shows preservation of some knowledge in sequential task learning and protection against catastrophic forgetting in deep neural networks.
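Catastrophic forgetting in sequential task learning is commonly quantified from a task-accuracy matrix. The metric sketched below (the average drop from each task's best earlier accuracy to its final accuracy) is one standard definition, not necessarily the one used in this thesis:

```python
def average_forgetting(acc):
    """acc[i][j] = accuracy on task j measured after training on task i,
    with tasks trained in order 0..T-1. Forgetting of task j is the best
    accuracy it reached before the final stage minus its final accuracy;
    the score averages this over all tasks except the last one."""
    T = len(acc)
    drops = []
    for j in range(T - 1):                       # last task cannot be forgotten yet
        best_earlier = max(acc[i][j] for i in range(j, T - 1))
        drops.append(best_earlier - acc[T - 1][j])
    return sum(drops) / len(drops)

# Toy 3-task run: task 0 degrades from 0.90 to 0.60, task 1 from 0.85 to 0.70.
acc_matrix = [
    [0.90, 0.00, 0.00],
    [0.75, 0.85, 0.00],
    [0.60, 0.70, 0.88],
]
score = average_forgetting(acc_matrix)
```

A score near zero means earlier tasks kept their performance through later training; larger values mean more forgetting.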
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00013896
- Subject Headings
- Artificial intelligence, Deep learning (Machine learning), Reinforcement learning, Neural networks (Computer science)
- Format
- Document (PDF)
- Title
- EXPLORING UNDULATORY SWIMMING BEHAVIORS WITH DEEP REINFORCEMENT LEARNING.
- Creator
- Alvaro, Alejandro, Verma, Siddhartha, Florida Atlantic University, Department of Ocean and Mechanical Engineering, College of Engineering and Computer Science
- Abstract/Description
-
The capability to navigate in the proximity of solid surfaces while avoiding collision and maintaining high efficiency is essential for the effective design and operation of underwater vehicles. This capability involves a variety of challenges, and a potential approach to overcoming them is to rely on biomimetic or bio-inspired design. Through evolution, organisms have developed methods of locomotion optimized for their specific environments. One of the common forms of locomotion found in underwater organisms is undulatory swimming. Undulatory swimmers display different swimming behaviors based on the flow conditions in their environment. These behaviors take advantage of changes in the flow field caused by the presence of obstructions and obstacles upstream of or adjacent to the swimmer. For example, a free swimmer in near-proximity to a flat plane can experience changes in lift and drag during locomotion. The reduced drag can benefit the swimmer; however, changes in lift may lead to a collision with obstacles. Despite the abundance of qualitative data from observations of these undulatory swimmers, there is a lack of quantitative data, creating a disconnect in understanding how these organisms have evolved to exploit the presence of walls and obstacles. By employing a combination of traditional computational fluid dynamics and novel neural-network-based techniques, it is possible to emulate the evolution of learned behavior in biological organisms. The current work uses deep reinforcement learning coupled with two-dimensional numerical simulations of self-propelled swimmers to better understand behavior observed in nature.
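Undulatory locomotion of the kind discussed in this abstract is often parameterized as a backward-travelling wave along the body midline. The sketch below uses a generic carangiform-style amplitude envelope as an illustrative assumption, not this thesis's exact kinematics:

```python
import math

def midline(s_points, t, A_tail=0.1, wavelength=1.0, period=1.0):
    """Lateral displacement of an undulatory swimmer's midline: a backward-
    travelling wave y(s, t) = A(s) * sin(2*pi*(s/lambda - t/T)), with an
    amplitude envelope A(s) that grows quadratically from head (s = 0) to
    tail (s = 1). This is a common generic parameterization, used here as
    an illustrative stand-in for a simulated swimmer's kinematics."""
    out = []
    for s in s_points:
        A = A_tail * s * s                       # small at head, largest at tail
        y = A * math.sin(2 * math.pi * (s / wavelength - t / period))
        out.append(y)
    return out

s_vals = [i / 10 for i in range(11)]             # 11 points from head to tail
y_now = midline(s_vals, t=0.0)
```

In a simulation coupled to a flow solver, an RL policy would typically modulate parameters like the amplitude or wave speed in response to the sensed flow.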
- Date Issued
- 2024
- PURL
- http://purl.flvc.org/fau/fd/FA00014402
- Subject Headings
- Reinforcement learning, Computational fluid dynamics, Autonomous underwater vehicles
- Format
- Document (PDF)
- Title
- TOWARDS SELF-ORGANIZED BRAIN: TOPOLOGICAL REINFORCEMENT LEARNING WITH GRAPH CELLULAR AUTOMATA.
- Creator
- Ray, Subhosit, Barenholtz, Elan, Florida Atlantic University, Department of Psychology, Charles E. Schmidt College of Science
- Abstract/Description
-
Automating the search for optimal neural architectures is an under-explored domain; the majority of deep learning work bases its architecture on multiplexing different well-known architectures together, guided by past studies. Even after extensive research, the deployed algorithms may work only for specific domains, provide a minor boost, or even underperform compared to previous state-of-the-art implementations. One approach, neural architecture search, requires generating a pool of network topologies based on well-known kernel and activation functions. However, iteratively training the generated topologies and creating newer topologies from the best-performing ones is computationally expensive and out of scope for most academic labs. In addition, the search space is constrained to a predetermined dictionary of kernel functions used to generate the topologies. This thesis treats a neural network as a weighted directed graph, incorporating the idea of message passing from graph neural networks to propagate information from the input to the output nodes. We show that such a method removes the dependency on a search space constrained to well-known kernel functions, allowing arbitrary graph structures. We test our algorithm in reinforcement learning environments and explore several optimization strategies, such as graph attention and proximal policy optimization (PPO). We improve upon the slow convergence of PPO by using a neural cellular automata (CA) approach as a self-organizing overhead for generating the adjacency matrices of network topologies. This exploration of indirect encoding (an abstraction of DNA in neuro-developmental biology) yielded much faster convergence. In addition, we introduce 1D involution as a way to implement message passing across the nodes of a graph, which further reduces the parameter space to a significant degree without hindering performance.
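The core idea of treating a network as a weighted directed graph and propagating information by message passing can be sketched minimally. The tanh aggregation below is an illustrative choice, not the attention or involution operator developed in the thesis:

```python
import math

def propagate(adj, h, steps=1):
    """One or more rounds of message passing over an arbitrary weighted
    directed graph: each node's new value is tanh of the weighted sum of
    its in-neighbors' values. `adj` maps node -> list of (source, weight)
    in-edges; nodes with no in-edges act as inputs and keep their value.
    A bare-bones sketch of input-to-output propagation over an adjacency
    structure, with no fixed layer ordering required."""
    for _ in range(steps):
        new_h = {}
        for node, in_edges in adj.items():
            if not in_edges:                     # input nodes keep their value
                new_h[node] = h[node]
            else:
                total = sum(w * h[src] for src, w in in_edges)
                new_h[node] = math.tanh(total)
        h = new_h
    return h

# Tiny graph: two inputs feed a hidden node, which feeds the output.
graph = {
    "in1": [], "in2": [],
    "hid": [("in1", 0.5), ("in2", -0.5)],
    "out": [("hid", 2.0)],
}
values = propagate(graph, {"in1": 1.0, "in2": 0.0, "hid": 0.0, "out": 0.0}, steps=2)
```

Because nothing here depends on a layered structure, the same propagation rule works on any adjacency matrix a generator (such as a neural CA) might emit.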
- Date Issued
- 2024
- PURL
- http://purl.flvc.org/fau/fd/FA00014436
- Subject Headings
- Neural networks (Computer science), Reinforcement learning, Cellular automata
- Format
- Document (PDF)
- Title
- SUSTAINING CHAOS USING DEEP REINFORCEMENT LEARNING.
- Creator
- Vashishtha, Sumit, Verma, Siddhartha, Florida Atlantic University, Department of Ocean and Mechanical Engineering, College of Engineering and Computer Science
- Abstract/Description
-
Numerous examples arise in fields ranging from mechanics to biology where the disappearance of chaos can be detrimental. Preventing this transient nature of chaos has proven quite challenging. The utility of reinforcement learning (RL), a specific class of machine-learning techniques, in discovering effective control mechanisms in this regard is shown. The autonomous control algorithm is able to prevent the disappearance of chaos in the Lorenz system exhibiting meta-stable chaos, without requiring any a priori knowledge of the underlying dynamics. The autonomous decisions taken by the RL algorithm are analyzed to understand how the system's dynamics are impacted. Learning from this analysis, a simple control law capable of restoring chaotic behavior is formulated. The reverse-engineering approach adopted in this work underlines the immense potential of the techniques used here to discover effective control strategies in complex dynamical systems. The autonomous nature of the learning algorithm makes it applicable to a diverse variety of nonlinear systems, and highlights the potential of RL-enabled control for regulating other transient-chaos-like catastrophic events.
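The setting in this abstract can be illustrated with a minimal sketch: the Lorenz system at a parameter value in the transient-chaos regime, plus a hand-written rule that kicks the trajectory away from the stable fixed points. The specific control law and parameter values here are assumptions for illustration, not the RL-discovered law from the thesis:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=20.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system. At rho = 20 (with the
    classic sigma and beta) chaos is transient: uncontrolled trajectories
    eventually spiral into one of the non-trivial fixed points."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def distance_to_fixed_points(state, rho=20.0, beta=8.0 / 3.0):
    """Distance to the nearer of the two non-trivial fixed points
    C+/- = (+/-r, +/-r, rho - 1), where r = sqrt(beta * (rho - 1))."""
    r = (beta * (rho - 1)) ** 0.5
    x, y, z = state
    return min(((x - s * r) ** 2 + (y - s * r) ** 2 + (z - (rho - 1)) ** 2) ** 0.5
               for s in (1.0, -1.0))

# Hand-written control (NOT the learned law from the thesis): whenever the
# trajectory drifts too close to a fixed point, nudge x to push it back
# toward the chaotic set and keep the chaos alive.
state = (1.0, 1.0, 1.0)
kicks = 0
for _ in range(20000):
    state = lorenz_step(state)
    if distance_to_fixed_points(state) < 2.0:
        state = (state[0] + 1.0, state[1], state[2])
        kicks += 1
```

An RL agent in this setting would learn when and how strongly to perturb, rather than using a fixed distance threshold and kick size.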
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013498
- Subject Headings
- Machine learning--Techniques, Reinforcement learning, Algorithms, Chaotic behavior in systems, Nonlinear systems
- Format
- Document (PDF)