Current Search: Neural networks (Computer science)--Design
- Title
- NEURALSYNTH - A NEURAL NETWORK TO FPGA COMPILATION FRAMEWORK FOR RUNTIME EVALUATION.
- Creator
- Lanham, Grant Jr, Hallstrom, Jason O., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Artificial neural networks are increasing in power, with attendant increases in demand for efficient processing. Performance is limited by clock speed and the degree of parallelization available through multi-core processors and GPUs. With a design tailored to a specific network, a field-programmable gate array (FPGA) can be used to minimize latency without the need for geographically distributed computing. However, the task of programming an FPGA is outside the realm of most data scientists. There are tools to program FPGAs from a high-level description of a network, but there is no unified interface for programmers across these tools. In this thesis, I present the design and implementation of NeuralSynth, a prototype Python framework that aims to bridge the gap between data scientists and FPGA programming for neural networks. My method relies on creating an extensible Python framework that is used to automate programming and interaction with an FPGA. The implementation includes a digital design for the FPGA that is completed by the Python framework. Programming and interacting with the FPGA does not require leaving the Python environment. The extensible approach allows multiple implementations, resulting in a similar workflow for each implementation. For evaluation, I compare the results of my implementation with a known neural network framework. (A schematic Python sketch of this kind of unified backend workflow appears after this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013533
- Subject Headings
- Artificial neural networks, Neural networks (Computer science)--Design, Field programmable gate arrays, Python (Computer program language)
- Format
- Document (PDF)
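The abstract above describes an extensible Python framework that hides FPGA toolchains behind a single workflow. The sketch below only illustrates that idea under assumed names; FPGABackend, SimulatedBackend, compile, and infer are hypothetical and are not NeuralSynth's actual API. A hardware backend would generate and synthesize HDL where the simulated stand-in merely stores placeholder weights.

```python
# Illustrative sketch only: a unified front end over interchangeable FPGA
# backends. All names here are hypothetical, not NeuralSynth's actual API.
from abc import ABC, abstractmethod
from typing import List, Sequence


class FPGABackend(ABC):
    """One subclass per toolchain; every backend exposes the same workflow."""

    @abstractmethod
    def compile(self, layer_sizes: Sequence[int]) -> None:
        """Generate and synthesize a design tailored to the given network."""

    @abstractmethod
    def infer(self, inputs: Sequence[float]) -> List[float]:
        """Run a forward pass on the programmed device."""


class SimulatedBackend(FPGABackend):
    """Software stand-in so the workflow can be exercised without hardware."""

    def compile(self, layer_sizes: Sequence[int]) -> None:
        # A hardware backend would emit HDL and drive vendor tools here;
        # the stand-in just materializes placeholder weights.
        self.weights = [
            [[0.1] * n_in for _ in range(n_out)]
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
        ]

    def infer(self, inputs: Sequence[float]) -> List[float]:
        x = list(inputs)
        for layer in self.weights:  # dense layers with ReLU activation
            x = [max(0.0, sum(w * v for w, v in zip(row, x))) for row in layer]
        return x


if __name__ == "__main__":
    backend = SimulatedBackend()   # a hardware backend would be swapped in here
    backend.compile([4, 8, 2])     # same calls regardless of backend
    print(backend.infer([1.0, 0.5, -0.3, 2.0]))
```

The point of the sketch is the shared interface: because every backend exposes the same compile/infer calls, the caller's workflow stays the same whether the network runs in software or on a device.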
- Title
- A VLSI implementable learning algorithm.
- Creator
- Ruiz, Laura V., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A top-down design methodology using hardware description languages (HDLs) and powerful design, analysis, synthesis, and layout software tools for electronic circuit design is described and applied to the design of a single-layer artificial neural network that incorporates on-chip learning. Using the perceptron learning algorithm, these simple neurons learn a classification problem in 10.55 microseconds in one application. The objective is to describe a methodology by following the design of a simple network. This methodology is later applied to the design of a novel architecture, a stochastic neural network. All issues related to algorithmic design for VLSI implementability are discussed, and results of layout and timing analysis are given along with software simulations. A top-down design methodology is presented, including a brief introduction to HDLs and an overview of the software tools used throughout the design process. These tools now make it possible for a designer to complete a design in a relatively short period of time. In-depth knowledge of computer architecture, VLSI fabrication, electronic circuits, and integrated circuit design is not essential to accomplish a task that a few years ago would have required a large team of specialized experts in many fields. This may appeal to researchers from a wide range of backgrounds, including computer scientists, mathematicians, and psychologists experimenting with learning algorithms. It is only in a hardware implementation of artificial neural network learning algorithms that the true parallel nature of these architectures can be fully tested. Most applications of neural networks are software simulations of the algorithms, run on a single CPU that sequentially simulates a parallel, richly interconnected architecture. This dissertation describes a methodology whereby a researcher experimenting with a known or new learning algorithm can test it as it was intended to be run: on a parallel hardware architecture. (A schematic Python sketch of the perceptron update rule appears after this record.)
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12453
- Subject Headings
- Integrated circuits--Very large scale integration--Design and construction, Neural networks (Computer science)--Design and construction, Computer algorithms, Machine learning
- Format
- Document (PDF)
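The abstract above builds its single-layer network around the perceptron learning algorithm, realized in hardware with on-chip learning. For reference, here is a minimal software sketch of that update rule; the function and variable names are illustrative, and the dissertation's implementation is an HDL design, not Python.

```python
# Minimal software sketch of the perceptron learning rule; the dissertation
# realizes this update in hardware with on-chip learning, not in Python.
from typing import List, Sequence, Tuple


def train_perceptron(samples: Sequence[Tuple[Sequence[float], int]],
                     n_inputs: int,
                     lr: float = 1.0,
                     epochs: int = 25) -> Tuple[List[float], float]:
    """Train a single threshold neuron; returns (weights, bias)."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:                 # target is 0 or 1
            output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0.0 else 0
            error = target - output               # -1, 0, or +1
            # Perceptron rule: adjust the decision boundary only on mistakes.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b


if __name__ == "__main__":
    # Linearly separable AND problem: the rule converges within a few epochs.
    data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 0), ((1.0, 1.0), 1)]
    print(train_perceptron(data, n_inputs=2))
```

Because each weight update depends only on the local input, error, and learning rate, every neuron can apply the rule in parallel, which is what makes the algorithm attractive for the on-chip, parallel hardware setting the abstract describes.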