Current Search: Department of Computer and Electrical Engineering and Computer Science
- Title
- Estimation of information-theoretics-based delay-bounds in ATM networks.
- Creator
- Wei, Liqun., Florida Atlantic University, Hsu, Sam, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis addresses a method of deducing the statistical upper and lower bounds associated with the cell-transfer delay variations (CDVs) encountered by cells transmitted in asynchronous transfer mode (ATM) networks due to cell losses. This study focuses on: (1) estimating the CDV arising from multiplexing/switching for both constant bit rate (CBR) and variable bit rate (VBR) services via simulations; (2) deducing a new information-theoretics-based technique to gain insight into the combined BER-induced and multiplexing/switching-induced CDVs in ATM networks. Algorithms for the CDV statistics are derived, and the lower and upper bounds of the statistics are obtained via simulations for CBR and VBR traffic. These bounds are useful in the cell-admission control (CAC) strategies adopted in ATM transmissions. Inferential remarks indicating the effects of traffic parameters (such as bandwidth, burstiness, etc.) on the values of the statistical bounds are presented, and the scope for further work is indicated.
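The bound-estimation idea above can be illustrated with a toy simulation (a sketch only; the thesis's information-theoretic technique and traffic models are not reproduced here): simulate per-cell delays through a single FIFO multiplexer and take empirical quantiles of the delay distribution as statistical lower and upper bounds. The traffic model and parameter values below are illustrative assumptions.

```python
import random

def simulate_cell_delays(n_cells, load, seed=1):
    """Simulate per-cell delays in a single FIFO multiplexer.

    Cells arrive with exponential interarrival times and each needs
    one time unit of service (fixed-size ATM cells); the waiting time
    is tracked with Lindley's recursion.
    """
    rng = random.Random(seed)
    wait, delays = 0.0, []
    for _ in range(n_cells):
        interarrival = rng.expovariate(load)          # mean 1/load
        wait = max(0.0, wait + 1.0 - interarrival)    # Lindley recursion
        delays.append(wait + 1.0)                     # waiting + service
    return delays

def cdv_bounds(delays, p=0.01):
    """Empirical lower/upper bounds: the p and (1 - p) quantiles
    of the simulated delay distribution."""
    ordered = sorted(delays)
    lo = ordered[int(p * len(ordered))]
    hi = ordered[int((1.0 - p) * len(ordered)) - 1]
    return lo, hi

lo, hi = cdv_bounds(simulate_cell_delays(20000, load=0.8))
```

Tighter offered load widens the gap between the two bounds, mirroring the traffic-parameter sensitivity noted in the abstract.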
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15444
- Subject Headings
- Asynchronous transfer mode, Telecommunication, Computer networks, Broadband communication systems
- Format
- Document (PDF)
- Title
- Course scheduling support system.
- Creator
- Khan, Jawad Ahmed., Florida Atlantic University, Levow, Roy B., Hsu, Sam, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Course Scheduling Support System is designed to facilitate the manual generation of faculty course schedules. It aids in assigning faculty to courses and assigning each course section to its time block. It captures historic and current scheduling information in an organized manner, making the information needed to create new schedules more readily and quickly available. The interaction between user and database is made as friendly as possible so that managing, manipulating, populating, and retrieving scheduling data is simple and efficient. We have implemented an open-source, web-based prototype of the proposed system using PHP, MySQL, and the Apache Web Server. It can be invoked with a standard Web browser and has an intuitive user interface. It provides tools for customizing web forms that can be easily used by non-technical users. Our department plans to deploy this system by Fall 2006.
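As a rough Python sketch of the assignment task such a system supports (the actual prototype is a PHP/MySQL web application; the course names, preference table, and greedy strategy below are hypothetical):

```python
def assign_sections(sections, faculty_prefs, time_blocks):
    """Greedy assignment: give each section the first preferred
    faculty member and time block that do not conflict with
    assignments already made."""
    busy = set()          # (faculty, block) pairs already taken
    schedule = {}
    for section in sections:
        for person in faculty_prefs[section]:
            slot = next(((person, b) for b in time_blocks
                         if (person, b) not in busy), None)
            if slot:
                busy.add(slot)
                schedule[section] = slot
                break
    return schedule

prefs = {"COP3530-001": ["Hsu", "Levow"], "COP3530-002": ["Hsu"]}
plan = assign_sections(["COP3530-001", "COP3530-002"], prefs, ["MWF9", "TR11"])
```

A real tool would of course persist these tables in the database and expose them through web forms, as the abstract describes.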
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fcla/dt/13343
- Subject Headings
- Scheduling--Data processing, Constraints (Artificial intelligence), Electronic data processing--Distributed processing
- Format
- Document (PDF)
- Title
- DCVS logic synthesis.
- Creator
- Xiao, Kang., Florida Atlantic University, Barrett, Raymond L. Jr., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Implementation of CMOS combinational logic with Differential Cascode Voltage Switch (DCVS) logic may have many advantages over traditional CMOS logic approaches with respect to device count, layout density, and timing. DCVS is an ideal target technology for a logic synthesis system in that it provides a complete function cover by producing the function and its complement simultaneously, which also makes it more testable. We have developed, for IBM's DCVS technology, a synthesis algorithm and a new test generation approach that are based on topologies rather than individual logic functions. We have found that 19 and 363 DCVS topologies can represent 256 and 65,536 functions, respectively, for the 3- and 4-variable cases. Physical defect analysis was conducted with the aid of a building-block approach to analyze the n-type logic tree, and provides a basis for evolving hierarchical test pattern generation for the topologies.
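Two facts in this abstract are easy to illustrate in a few lines: a DCVS gate yields a function and its complement simultaneously (dual-rail outputs), and there are 2^(2^n) distinct n-variable boolean functions, giving the 256 and 65,536 figures quoted. A minimal sketch (the topology enumeration itself is not reproduced):

```python
from itertools import product

def truth_table(f, n):
    """Enumerate a boolean function over all n-variable inputs."""
    return tuple(f(*bits) for bits in product((0, 1), repeat=n))

def dcvs_outputs(f, n):
    """A DCVS gate evaluates f and its complement simultaneously:
    one NMOS tree steers current to either the Q or Q-bar rail."""
    q = truth_table(f, n)
    q_bar = tuple(1 - v for v in q)
    return q, q_bar

# Sanity check on the function counts quoted above:
# there are 2^(2^n) distinct n-variable boolean functions.
n_funcs_3 = 2 ** (2 ** 3)   # 256
n_funcs_4 = 2 ** (2 ** 4)   # 65536

q, q_bar = dcvs_outputs(lambda a, b, c: a & b | c, 3)
```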
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14850
- Subject Headings
- Integrated circuits--Very large scale integration--Data processing, Metal oxide semiconductors, Complementary, Computer-aided design, Electronic systems, Logic design--Data processing
- Format
- Document (PDF)
- Title
- Correcting noisy data and expert analysis of the correction process.
- Creator
- Seiffert, Christopher N., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis expands upon an existing noise-cleansing technique, polishing, enabling it to be used in the software quality prediction domain, as well as any other domain where the data contains continuous values, as opposed to the categorical data for which the technique was originally designed. The procedure is applied to a real-world dataset with real (as opposed to injected) noise, as determined by an expert in the domain. This, in combination with expert assessment of the changes made to the data, provides not only a more realistic dataset than one in which the noise (or even the entire dataset) is artificial, but also a better understanding of whether the procedure is successful in cleansing the data. Lastly, this thesis provides a more in-depth view of the process than previously available, in that it gives results for different parameters and classifier-building techniques. This allows the reader to gain a better understanding of the significance of both model generation and parameter selection.
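A minimal sketch of the polishing idea for continuous data (not the thesis's implementation; the k-NN predictor, the robust median vote, and the tolerance are illustrative assumptions): re-predict each instance's value from the remaining instances and replace values that disagree by more than a tolerance.

```python
from statistics import median

def knn_predict(features, target, x, k=3):
    """Predict a continuous value for x as the median target of its
    k nearest training instances (median for robustness to noise)."""
    order = sorted(range(len(features)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(features[i], x)))
    return median(target[i] for i in order[:k])

def polish(features, target, threshold, k=3):
    """Polishing for continuous data: re-predict each instance's
    target from the *other* instances; values disagreeing with the
    prediction by more than `threshold` are flagged as noisy and
    replaced by the prediction."""
    cleaned = list(target)
    for i, x in enumerate(features):
        rest_f = features[:i] + features[i + 1:]
        rest_t = target[:i] + target[i + 1:]
        pred = knn_predict(rest_f, rest_t, x, k)
        if abs(pred - target[i]) > threshold:
            cleaned[i] = pred
    return cleaned

# One clearly corrupted value among instances lying on y = 2x.
xs = [(0.0,), (1.0,), (2.0,), (3.0,), (4.0,)]
ys = [0.0, 2.0, 40.0, 6.0, 8.0]          # ys[2] is noisy
cleaned = polish(xs, ys, threshold=10.0)
```

Clean values survive while the outlier is pulled back toward its neighborhood, which is the behavior the expert assessment in the thesis is meant to verify on real data.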
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13223
- Subject Headings
- Computer interfaces--Software--Quality control, Acoustical engineering, Noise control--Computer programs, Expert systems (Computer science), Software documentation
- Format
- Document (PDF)
- Title
- THE DESIGN OF HIGH FREQUENCY OSCILLATORS: NOISE CHARACTERIZATION, DESIGN THEORY, AND MEASUREMENTS.
- Creator
- VICTOR, ALAN MICHAEL., Florida Atlantic University, Gazourian, Martin G., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
A design theory for high frequency oscillators is presented. Emphasis is placed on oscillator design techniques applicable to the electrical tuning of LC and transmission line resonators. Attention is paid to design approaches that yield an oscillator with high spectral purity and a large signal-to-noise ratio. Theory and measurements demonstrate, for the oscillator configurations investigated, that a small L/C ratio is desirable for improved oscillator signal-to-noise ratio. Equations are developed which define the noise figure of the oscillator due to the additive noise of the active device. This analysis demonstrates the need for a high device starting transconductance, which should be subsequently reduced during oscillation to minimize the device noise contribution. A relationship is developed between the receiver dynamic range and the oscillator signal-to-noise ratio. Oscillator designs in the region 20 MHz - 200 MHz verify the analysis. A unified approach to large-signal oscillator design is investigated, and relationships to oscillator signal-to-noise ratio using the previously developed theory are noted.
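The role of the L/C ratio can be connected to standard resonator relations (textbook formulas, not the thesis's derivation). For a parallel RLC tank loaded by a resistance R:

```latex
\omega_0 = \frac{1}{\sqrt{LC}}, \qquad
Z_0 = \sqrt{\frac{L}{C}}, \qquad
Q = \frac{R}{Z_0} = R\sqrt{\frac{C}{L}} .
```

For a fixed resonant frequency \(\omega_0\) and load \(R\), reducing the L/C ratio lowers the tank's characteristic impedance \(Z_0\) and raises the loaded \(Q\), which is consistent with the improved oscillator signal-to-noise ratio reported above.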
- Date Issued
- 1980
- PURL
- http://purl.flvc.org/fcla/dt/14043
- Subject Headings
- Oscillators, Audio-frequency
- Format
- Document (PDF)
- Title
- THE DESIGN OF SWITCHED-CAPACITOR HIGHPASS FILTERS.
- Creator
- LEE, KING FU., Florida Atlantic University, Gazourian, Martin G., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The design of high-order switched-capacitor highpass filters is presented. Emphasis is placed on the design procedures for cascaded biquadratic sections and ladder network realizations of switched-capacitor highpass filters. The stability problem of the doubly terminated switched-capacitor ladder highpass filter is discussed. Design examples are presented to illustrate the design procedures. The sensitivities of the realization methods are discussed. An analytical equation for the gain deviation of the cascaded biquadratic sections realization is derived. Monte Carlo analysis is performed for the design examples. The results of the analyses are compared to reveal the differences in sensitivities in terms of the order of the filters and the type of realization.
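The switched-capacitor principle underlying such filters can be stated in two lines (standard textbook relations, not the thesis's design procedure): a capacitor C switched at clock rate f_clk emulates a resistor R = 1/(f_clk·C), so RC corner frequencies are set by capacitor ratios and the clock. The component values below are illustrative.

```python
import math

def sc_equivalent_resistance(c_switched, f_clock):
    """A capacitor C toggled between two nodes at clock rate f
    transfers charge C*V each cycle, emulating R = 1/(f*C)."""
    return 1.0 / (f_clock * c_switched)

def highpass_cutoff(r_eq, c_filter):
    """First-order RC highpass corner frequency."""
    return 1.0 / (2.0 * math.pi * r_eq * c_filter)

# A 1 pF capacitor switched at 100 kHz emulates a 10 Mohm resistor.
r_eq = sc_equivalent_resistance(1e-12, 100e3)
fc = highpass_cutoff(r_eq, 10e-12)      # corner of a 10 pF highpass
```

Because both R and the corner depend only on a capacitor ratio and the clock, the filter is insensitive to absolute capacitor tolerances, which is what makes the Monte Carlo sensitivity comparisons in the thesis meaningful.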
- Date Issued
- 1983
- PURL
- http://purl.flvc.org/fcla/dt/14169
- Subject Headings
- Switched capacitor circuits, Digital filters (Mathematics)
- Format
- Document (PDF)
- Title
- A very high-performance neural network system architecture using grouped weight quantization.
- Creator
- Karaali, Orhan., Florida Atlantic University, Shankar, Ravi, Gluch, David P., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Recently, Artificial Neural Network (ANN) computing systems have become one of the most active and challenging areas of information processing. The successes of experimental neural computing systems in the fields of pattern recognition, process control, robotics, signal processing, expert systems, and functional analysis are most promising. However, due to a number of serious problems, only small fully connected neural networks have been implemented to run in real time. The primary problem is that the execution time of neural networks increases exponentially as the network's size increases, because of the exponential increase in the number of multiplications and interconnections, which makes it extremely difficult to implement medium- or large-scale ANNs in hardware. The Modular Grouped Weight Quantization (MGWQ) presented in this dissertation is an ANN design which assures that the number of multiplications and interconnections increases linearly as the neural network's size increases. The secondary problems are related to scale-up capability, modularity, memory requirements, flexibility, performance, fault tolerance, technological feasibility, and cost; the MGWQ architecture also resolves these problems. In this dissertation, neural network characteristics and existing implementations using different technologies are described. Their shortcomings and problems are addressed, and solutions to these problems using the MGWQ approach are illustrated. The theoretical and experimental justifications for MGWQ are presented. Performance calculations for the MGWQ architecture are given. The mappings of the most popular neural network models to the proposed architecture are demonstrated. System-level architecture considerations are discussed. The proposed ANN computing system is a flexible and realistic way to implement large fully connected networks; it offers very high performance using currently available technology. The performance of ANNs is measured in terms of interconnections per second (IC/S); the performance of the proposed system ranges between 10^11 and 10^14 IC/S. In comparison, SAIC's DELTA II ANN system achieves 10^7 IC/S, and a Cray X-MP achieves 5x10^7 IC/S.
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/12245
- Subject Headings
- Neural circuitry, Neural computers, Computer architecture
- Format
- Document (PDF)
- Title
- A visual perception threshold matching algorithm for real-time video compression.
- Creator
- Noll, John M., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
A barrier to the use of digital imaging is the vast storage requirement involved. One solution is compression. Since imagery is ultimately subject to human visual perception, it is worthwhile to design and implement an algorithm which performs compression as a function of perception. The underlying premise of the thesis is that if the algorithm closely matches visual perception thresholds, then its coded images contain only the components necessary to recreate the perception of the visual stimulus. Psychophysical test results are used to map the thresholds of visual perception and develop an algorithm that codes only the image content exceeding those thresholds. The image coding algorithm is simulated in software to demonstrate compression of a single-frame image, and the simulation results are provided. The algorithm is also adapted to real-time video compression for implementation in hardware.
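A minimal sketch of threshold-matched coding (illustrative only; the thesis's psychophysically measured threshold map and coder are not reproduced): keep only components whose magnitude exceeds the perception threshold assigned to them, and zero-fill the rest on decode.

```python
def threshold_code(coefficients, thresholds):
    """Keep only components whose magnitude exceeds the visual
    perception threshold for that component; sub-threshold terms
    are assumed invisible and are dropped."""
    return {i: c for i, (c, t) in enumerate(zip(coefficients, thresholds))
            if abs(c) > t}

def threshold_decode(kept, n):
    """Rebuild the full coefficient block, zero-filling dropped terms."""
    return [kept.get(i, 0.0) for i in range(n)]

coeffs = [120.0, 3.0, -45.0, 0.5, 9.0]     # hypothetical transform block
vis_thresh = [4.0, 4.0, 8.0, 8.0, 12.0]    # hypothetical threshold map
code = threshold_code(coeffs, vis_thresh)
decoded = threshold_decode(code, len(coeffs))
```

Compression comes from transmitting only the surviving (index, value) pairs; fidelity rests on the premise that the dropped terms fall below perceptual thresholds.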
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14857
- Subject Headings
- Image processing--Digital techniques, Computer algorithms, Visual perception, Data compression (Computer science)
- Format
- Document (PDF)
- Title
- A study on the electromagnetic performance of body-worn radio units in the presence of scatterers in the proximity.
- Creator
- Peterson, Vance Howard, Florida Atlantic University, Ungvichian, Vichate, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The research addressed refers to a study on the electromagnetic performance aspects of body-worn radio units operating in the presence of scatterers in close proximity, using analytical, numerical, and experimental methods. The application potentials of such methods include evaluating the integrity of radio units such as cell phones. Consistent with the scope of the study above, this research considers specific details of the analytical and numerical modeling of the effects of a nearby conducting cylindrical object on the electromagnetic field near a human-model phantom. Calculations are performed using the Finite Difference Time Domain (FDTD) method. Considered are various separations between the body wearing the test radio unit and the proximal object, as well as the polarization of the incident wave. An anechoic chamber and the test setup used for the measurement of EM field amplitudes near a saline-water phantom are described. Within the anechoic chamber, a small shielded loop is used as a field measurement probe and is positioned near the test phantom. The field probe is oriented in the vertical plane for characterizing the prevailing electromagnetic field intensity. This study indicates that variations in the field amplitude near the phantom occur, which are responsive to phantom rotation and measurement distance from the phantom. The electromagnetic field amplitude decreases rapidly with increasing distance between the probe and the surface of the phantom. The analysis is also extended to examine the electromagnetic field distribution in the gap between a human body phantom model and a nearby conducting cylinder. An appropriate three-dimensional FDTD method is presented and applied to a near-field problem of analyzing the influence of proximal conductive objects on fields near a phantom wearing an RF unit.
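The FDTD method used above can be illustrated with a textbook one-dimensional leapfrog update (a free-space sketch with a hard source; the thesis's three-dimensional formulation, phantom materials, and boundary treatment are not reproduced):

```python
import math

def fdtd_1d(steps, size, source_pos):
    """Minimal 1-D FDTD: leapfrog updates of E and H on a staggered
    grid (free space, normalized units, Courant number 1/2), driven
    by a hard sinusoidal source."""
    ez = [0.0] * size
    hy = [0.0] * size
    for t in range(steps):
        for k in range(size - 1):                 # H update (half step)
            hy[k] += 0.5 * (ez[k + 1] - ez[k])
        for k in range(1, size):                  # E update (half step)
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        ez[source_pos] = math.sin(0.2 * t)        # hard source
    return ez

field = fdtd_1d(steps=100, size=200, source_pos=100)
```

The full 3-D method interleaves all six field components on a Yee grid in the same leapfrog fashion, with material parameters assigned per cell (e.g., the saline phantom and the conducting cylinder).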
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fau/fd/FADT12085
- Subject Headings
- Scattering (Mathematics), Sound-waves (Scattering), Electromagnetic waves--Scattering, Electromagnetism--Computer simulation, Finite differences, Time-domain analysis
- Format
- Document (PDF)
- Title
- A study on glucose metabolism: Computer simulation and modeling.
- Creator
- Leesirikul, Meta., Florida Atlantic University, Neelakanta, Perambur S., Roth, Zvi S., Morgera, Salvatore D., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Sorensen's model of glucose metabolism and regulation is reconstructed using Simulink. Most of the existing glucose metabolism models consist of several mass-balance equations that interact with each other. The graphical format used by Simulink provides a visual perspective of these relations, making it easier to modify the model on an ad hoc basis. Type-I and Type-II diabetes with relevant clinical details are simulated. Further, a control strategy is introduced in order to simulate the control of an exogenous insulin pump. Simulated results are consistent with available clinical data. Living systems in general exhibit both stochastic and deterministic characteristics. Activities such as glucose metabolism, as traditionally modeled, include neither stochastic properties nor a view within the larger framework of a complex system with explicit interaction details. Accordingly, a complex-system model is developed to describe glucose metabolism-related activities. The simulation results obtained thereof illustrate the bounding domain of variations in some clinically observed details.
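A toy two-state mass-balance simulation conveys the modeling style described above (this is not Sorensen's multi-compartment model; the equations, rate constants, and meal input below are invented for illustration):

```python
def simulate_glucose(hours, dt=0.01, meal=2.0):
    """Toy two-state mass balance integrated with forward Euler:
      dG/dt = intake - k1*G - k2*G*I            (glucose, arbitrary units)
      dI/dt = k3*max(G - G_target, 0) - k4*I    (insulin response)
    """
    k1, k2, k3, k4, g_target = 0.05, 0.1, 0.2, 0.5, 1.0
    g, i = 1.0, 0.0
    trace = []
    for n in range(int(hours / dt)):
        t = n * dt
        intake = meal if 1.0 <= t < 1.5 else 0.0   # a meal at t = 1 h
        dg = intake - k1 * g - k2 * g * i
        di = k3 * max(g - g_target, 0.0) - k4 * i
        g += dt * dg
        i += dt * di
        trace.append((t, g, i))
    return trace

trace = simulate_glucose(8.0)
peak_g = max(g for _, g, _ in trace)
final_g = trace[-1][1]
```

Each state variable is one mass-balance equation; graphical tools such as Simulink express exactly these couplings as wired blocks, which is why they are convenient for ad hoc model modification.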
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13254
- Subject Headings
- Glucose--Metabolism, Computer simulation, Diabetes--Metabolism, Computer modeling
- Format
- Document (PDF)
- Title
- A selectively redundant file system.
- Creator
- Veradt, Joy L., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Disk arrays have been proposed as a means of achieving high performance, reliability, and availability in computer systems. This study looks at the RAID (Redundant Array of Inexpensive Disks) disk array architecture and its advantages and disadvantages for use in personal computer environments, specifically in terms of how data is protected (redundant information) and the tradeoff required to achieve that protection (sacrifice of disk capacity). It then proposes an alternative for achieving real-time protection of a user's data, which involves modifying an operating system's file system to implement selective redundancy at the file level. This approach, based on modified RAIDs, is shown to be considerably more efficient in using the capacity of the available disks. It also provides flexibility in allowing users to trade off space for reliability.
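The file-level selective redundancy idea can be sketched in a few lines (illustrative only; the actual work modifies an operating system's file system): mirror only files flagged as critical, and compare capacity use against full mirroring.

```python
def place_files(files, disks=("disk0", "disk1")):
    """Selective redundancy: every file lives on disk0; only files
    flagged as critical also get a mirror copy on disk1."""
    layout = {d: [] for d in disks}
    for name, size, critical in files:
        layout["disk0"].append((name, size))
        if critical:
            layout["disk1"].append((name, size))
    return layout

def capacity_used(layout):
    return sum(size for entries in layout.values() for _, size in entries)

# (name, size, critical?) -- hypothetical files
files = [("thesis.tex", 2, True), ("cache.tmp", 10, False), ("notes.txt", 1, True)]
layout = place_files(files)
selective = capacity_used(layout)               # 13 + 3 mirrored = 16
full_mirror = 2 * sum(s for _, s, _ in files)   # 26
```

The capacity saved over full mirroring is exactly the size of the unprotected files, which is the space-versus-reliability tradeoff the abstract leaves to the user.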
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14844
- Subject Headings
- Computer files--Reliability, Systems software--Reliability, Databases--Reliability
- Format
- Document (PDF)
- Title
- A system for assisting in the determination of geometric similarity between machined cylindrical parts.
- Creator
- Lockard, Alan A. L., Florida Atlantic University, Hoffman, Frederick, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The costs associated with the design and manufacture of machined components can be significantly reduced by the ability to identify and group similar parts. This activity is generally accomplished by assigning each part a Group Technology code number based on its most significant characteristics. Attempts to accomplish this are hindered by: the relatively small amount of information that can be encoded in a code of manageable length; inconsistencies in human interpretation of design and manufacturing data; the commitment of resources required to review and encode all candidate components at a facility; and the heuristic nature of determining what constitutes significant similarity for any particular application. These problems are addressed by the development of a system that assists in the determination of similarity by comparing CAD (Computer-Aided Design) files, rather than Group Technology codes, in a manufacturing-oriented, frame-based system.
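A minimal sketch of comparing parts by shape-derived feature vectors rather than GT codes (the signature and similarity measure below are invented for illustration; the thesis compares CAD files within a frame-based system):

```python
def part_signature(segments):
    """Describe a machined cylindrical part as a sequence of
    (diameter, length) steps read along its axis."""
    return [(float(d), float(l)) for d, l in segments]

def similarity(a, b):
    """Crude similarity in [0, 1]: average segment-by-segment
    agreement of relative diameter and length; unmatched trailing
    segments count as completely different."""
    n = max(len(a), len(b))
    total = 0.0
    for (d1, l1), (d2, l2) in zip(a, b):
        dd = abs(d1 - d2) / max(d1, d2)
        dl = abs(l1 - l2) / max(l1, l2)
        total += 1.0 - (dd + dl) / 2.0
    return total / n

shaft_a = part_signature([(20, 50), (30, 80), (20, 50)])
shaft_b = part_signature([(20, 50), (32, 80), (20, 50)])
score = similarity(shaft_a, shaft_b)
```

Because the comparison works on geometry extracted from design data, it avoids the information loss of a fixed-length code, which is the motivation stated above.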
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/14497
- Subject Headings
- Computer-aided design, Machine parts, Group technology, Manufacturing processes--Data processing
- Format
- Document (PDF)
- Title
- A new GMDH type algorithm for the development of neural networks for pattern recognition.
- Creator
- Gilbar, Thomas C., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Researchers from a wide range of fields have discovered the benefits of applying neural networks to pattern recognition problems. Although applications for neural networks have increased, the development of tools to design these networks has been slower. There are few comprehensive network development methods; those that do exist are slow, inefficient, and application-specific, require predetermination of the final network structure, and/or result in large, complicated networks. Finding optimal neural networks that balance low network complexity with accuracy is a complicated process that traditional network development procedures are incapable of achieving. Although not originally designed for neural networks, the Group Method of Data Handling (GMDH) has characteristics that are ideal for neural network design. GMDH minimizes the number of required neurons by choosing and keeping only the best neurons and filtering out unneeded inputs. In addition, GMDH develops the neurons and organizes the network simultaneously, saving time and processing power. However, some of the qualities of the network must still be predetermined. This dissertation introduces a new algorithm that applies some of the best characteristics of GMDH to neural network design. The new algorithm is faster, more flexible, and more accurate than traditional network development methods. It is also more dynamic than current GMDH-based methods, capable of creating a network that is optimal for an application and its training data. Additionally, the new algorithm virtually guarantees that the number of neurons progressively decreases in each succeeding layer. To show its flexibility, speed, and ability to design optimal networks, the algorithm was used to successfully design networks for a wide variety of real applications. The networks developed using the new algorithm were compared to other development methods and network architectures. The new algorithm's networks were more accurate and yet less complicated than the other networks. Additionally, the algorithm designs neurons that are flexible enough to meet the needs of specific applications, yet similar enough to be implemented using a standardized hardware cell. When combined with the simplified network layout that naturally occurs with the algorithm, this results in networks that can be implemented using Field Programmable Gate Array (FPGA) type devices.
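The GMDH mechanic described above, fitting a candidate neuron for every pair of inputs and keeping only the best, can be sketched compactly (a simplified linear partial description is used here instead of the classic quadratic polynomial, and the selection criterion is plain training error):

```python
from itertools import combinations

def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    a = [row[:] + [rhs] for row, rhs in zip(m, v)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][3] / a[i][i] for i in range(3)]

def fit_neuron(xi, xj, y):
    """Least-squares fit of y ~ a + b*xi + c*xj via the normal
    equations (a simplified GMDH partial description)."""
    cols = [[1.0] * len(y), xi, xj]
    m = [[sum(p * q for p, q in zip(c1, c2)) for c2 in cols] for c1 in cols]
    v = [sum(c * t for c, t in zip(col, y)) for col in cols]
    return solve3(m, v)

def gmdh_layer(inputs, y, keep):
    """One GMDH layer: fit a candidate neuron for every input pair,
    score each by training error, keep only the `keep` best outputs."""
    candidates = []
    for i, j in combinations(range(len(inputs)), 2):
        a, b, c = fit_neuron(inputs[i], inputs[j], y)
        out = [a + b * p + c * q for p, q in zip(inputs[i], inputs[j])]
        err = sum((o - t) ** 2 for o, t in zip(out, y))
        candidates.append((err, out))
    candidates.sort(key=lambda e: e[0])
    return [out for _, out in candidates[:keep]]

x1 = [0.0, 1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 0.0, 1.0, 0.0, 1.0]
x3 = [1.0, 2.0, 0.0, 2.0, 1.0]                 # irrelevant input
target = [2 * a + b for a, b in zip(x1, x2)]   # depends on x1, x2 only
layer = gmdh_layer([x1, x2, x3], target, keep=1)
```

Because only the best candidates survive each layer, irrelevant inputs (x3 here) are filtered out automatically, which is the input-pruning property the abstract credits to GMDH.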
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/11994
- Subject Headings
- GMDH algorithms, Neural networks (Computer science), Pattern recognition systems
- Format
- Document (PDF)
- Title
- A neural network-based receiver for interference cancellation in multi-user environment for DS/CDMA systems.
- Creator
- Shukla, Kunal Hemang., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The objective of this work is to apply and investigate the performance of a neural network-based receiver for interference cancellation in multiuser direct-sequence code division multiple access (DS/CDMA) wireless networks. This research investigates a receiver model which uses a neural network receiver in combination with a conventional receiver system to provide an efficient mechanism for interference suppression in DS/CDMA systems. The conventional receiver is used during the time the neural network receiver is being trained; once the NN receiver is trained, the conventional receiver system is deactivated. It is demonstrated that this receiver, when used along with an efficient neural network model, can outperform the MMSE receiver or DFFLE receiver, with significant advantages such as improved bit-error-rate (BER) performance, adaptive operation, single-user detection in a DS/CDMA environment, and near-far resistance.
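The train-then-switch arrangement can be sketched with a single-neuron detector trained by the LMS rule, where the training labels come from a conventional correlator (all signal parameters below are invented; this is not the thesis's network or channel model):

```python
import random

def train_detector(spread_code, n_bits, lr=0.05, seed=7):
    """Train a single-neuron detector (LMS rule) to recover a user's
    bits from received chip vectors. During training, a conventional
    correlator (matched filter on the spreading code) makes the
    decisions and supplies the labels; afterwards the trained weights
    replace it."""
    rng = random.Random(seed)
    n = len(spread_code)
    w = [0.0] * n
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        received = [bit * c + rng.gauss(0.0, 0.3) for c in spread_code]
        # conventional correlator decision, used as the training label
        label = 1.0 if sum(r * c for r, c in zip(received, spread_code)) > 0 else -1.0
        out = sum(wi * r for wi, r in zip(w, received))
        for i in range(n):                      # LMS weight update
            w[i] += lr * (label - out) * received[i]
    return w

code = [1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0]
w = train_detector(code, 400)
```

After training, the weight vector correlates positively with the spreading code, so the neuron alone can make the sign decisions and the conventional stage can be deactivated, as described above.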
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/12975
- Subject Headings
- Neural networks (Computer science), Wireless communication systems, Code division multiple access
- Format
- Document (PDF)
- Title
- A metrics-based software quality modeling tool.
- Creator
- Rajeevalochanam, Jayanth Munikote., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In today's world, high reliability has become an essential component of almost every software system. However, since reliability-enhancement activities entail enormous costs, software quality models, based on metrics collected early in the development life cycle, serve as handy tools for cost-effectively guiding such activities toward the software modules that are likely to be faulty. Case-Based Reasoning (CBR) is an attractive technique for software quality modeling. The Software Measurement Analysis and Reliability Toolkit (SMART) is a CBR tool customized for metrics-based software quality modeling. Developed for the NASA IV&V Facility, SMART supports three types of software quality models: quantitative quality prediction, classification, and module-order models. It also supports a goal-oriented selection of classification models. An empirical case study of a military command, control, and communication system demonstrates the accuracy and usefulness of SMART, and also serves as a user guide for the tool.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12967
- Subject Headings
- Software measurement, Computer software--Quality control, Case-based reasoning
- Format
- Document (PDF)
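The case-based reasoning idea behind a metrics-based quality model can be sketched in a few lines. This is an illustrative nearest-neighbor prediction, not SMART's actual algorithm, and the metric names and case values are hypothetical:

```python
import math

# Case library: (lines_of_code, cyclomatic_complexity, fan_out) -> faults
# found in previously measured modules (all values invented for illustration).
case_library = [
    ((120, 4, 2), 0),
    ((450, 18, 9), 7),
    ((300, 11, 5), 3),
    ((80, 2, 1), 0),
    ((510, 22, 11), 9),
    ((260, 9, 4), 2),
]

def predict_faults(metrics, k=2):
    """CBR-style quantitative prediction: mean fault count of the k most
    similar past cases, with similarity as Euclidean distance in metric space."""
    ranked = sorted((math.dist(metrics, m), faults) for m, faults in case_library)
    return sum(faults for _, faults in ranked[:k]) / k

# A new module resembling the two faultiest cases gets a high estimate,
# flagging it for reliability-enhancement effort.
estimate = predict_faults((480, 20, 10))
```

Ranking all modules by such estimates is essentially what a module-order model does: effort is directed to the top of the list rather than spread uniformly.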
- Title
- A noninvasive technique for early detection of atherosclerosis using the impedance plethysmograph: Longitudinal study on cynomolgus monkeys.
- Creator
- Kolluri, Sai M. S., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This study evaluates the use of an electrical impedance plethysmograph as a noninvasive technique for early detection of atherosclerosis. The instrument is inexpensive, easily portable, and poses no health risks; thus, the system is ideally suited for mass screening and epidemiological studies, if proven to be effective. We conducted experiments using a three-channel impedance plethysmograph once every 8-10 weeks on a colony of 20 male cynomolgus monkeys (Macaca fascicularis). Five monkeys were on a control diet (monkey chow) and fifteen on a high-cholesterol diet (1 mg cholesterol/kcal, with 40% of the calories derived from fat). The diet period for the monkeys ranged from 16-28 months (25 months typically). We wrapped a pressure cuff with one pair of electrodes around the upper left leg of the monkey. Two other sets of electrodes were wrapped, one distal to the pressure cuff on the lower left leg and the other as a reference on the upper arm. We measured impedance pulses at these three sites simultaneously using the three-channel impedance plethysmograph. The signals were recorded as the pressure in the cuff was changed from 200 mm Hg to 20 mm Hg in steps of 10 mm Hg, and arterial volume change was evaluated from these recordings. Experiments were repeated with the cuffed segment on the right leg, and then on the left arm. The arterial volume change vs. cuff pressure (V-Pc) characteristics were used to follow the progression of the disease. The V-Pc characteristic, initially with a well-defined peak, changed to a flatter characteristic with increased time on the cholesterol diet; monkeys on the control diet showed no flattening of the curve with time. To understand theoretically the effect of disease on the compliance vs. transmural pressure (C-Pt) characteristic (and hence the V-Pc characteristic), we developed an arterial model to study the pressure-radius relationship of an artery under different disease states. We have also developed an expression for the equivalent incremental modulus of elasticity based on the incremental moduli of elasticity of the individual arterial wall layers. The resulting expressions were used to study the effect of increasing stenosis and calcification on the V-Pc and C-Pt characteristics. The simulation results obtained using the arterial model match our experimentally observed decrease in peak compliance with disease: the peak compliance decreased in amplitude and shifted left (toward decreasing transmural pressure) as the artery thickened with atherosclerotic disease, and the V-Pc characteristic, initially with a well-defined peak, became flatter with disease. Our simulation results lead us to believe that the noninvasive technique is sensitive enough to follow the progression of atherosclerotic disease. Morphometric and histochemical data were collected subsequent to the sacrifice of the monkeys; evaluation of these data and correlation with our compliance data will lead to a more definitive statement on the method's sensitivity. This, however, is beyond the scope of this dissertation.
- Date Issued
- 1991
- PURL
- http://purl.flvc.org/fcla/dt/12281
- Subject Headings
- Impedance plethysmography, Atherosclerosis--Diagnosis, Atherosclerosis--Animal models, Impedance, Bioelectric, Diagnosis, Noninvasive
- Format
- Document (PDF)
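The flattening of a peaked characteristic that this study tracks can be quantified very simply. The following is a toy numerical illustration of that idea under assumed bell-shaped curves; the thesis's V-Pc and C-Pt characteristics were measured and modeled differently:

```python
import math

def compliance(pt, peak, width):
    """Toy bell-shaped compliance vs. transmural pressure Pt (mm Hg):
    a healthy artery is modeled as tall and narrow, a diseased one as
    low and flat. Parameter values are illustrative assumptions."""
    return peak * math.exp(-(pt / width) ** 2)

# Sample Pt in 10 mm Hg steps, mirroring the cuff-pressure sweep.
pts = range(-80, 90, 10)
healthy = [compliance(p, peak=1.0, width=25) for p in pts]
diseased = [compliance(p, peak=0.4, width=60) for p in pts]

def peak_to_mean(curve):
    """Flatness index: a well-defined peak gives a high ratio,
    a flattened curve a ratio near 1."""
    return max(curve) / (sum(curve) / len(curve))

healthy_ratio = peak_to_mean(healthy)
diseased_ratio = peak_to_mean(diseased)
```

A longitudinal series of such indices, one per measurement session, would fall over time for the cholesterol-fed group and stay level for the controls, which is the qualitative pattern the abstract reports.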
- Title
- A new methodology to predict certain characteristics of stock market using time-series phenomena.
- Creator
- Shah, Trupti U., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The goal of time-series forecasting is to identify underlying patterns and use them to predict the future path of the series. Capturing the future path of a dynamic stock market variable is one of the toughest challenges. This thesis concerns the development of a new methodology in financial forecasting: an effort is made to develop a neural network forecaster using time-series phenomena. The main outcome of this new approach to financial forecasting is a systematic way of constructing a neural network forecaster for nonlinear and non-stationary time-series data that leads to very good out-of-sample prediction. The tool used for the validation of this research is "Brainmaker". This thesis also contains a small survey of available tools used for financial forecasting.
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15677
- Subject Headings
- Time-series analysis, Neural networks (Computer science), Stock price forecasting
- Format
- Document (PDF)
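Any neural network forecaster of this kind needs its series turned into supervised training pairs, with the most recent data held out to measure out-of-sample prediction. A minimal sketch of that preparation step (our construction, not the thesis's methodology; the prices are invented):

```python
# Sliding-window dataset construction for a time-series forecaster:
# each example maps `lookback` consecutive values to the next value.
def make_windows(series, lookback=3):
    return [
        (series[i:i + lookback], series[i + lookback])
        for i in range(len(series) - lookback)
    ]

prices = [10.0, 10.5, 10.2, 10.8, 11.1, 10.9, 11.4, 11.7]
examples = make_windows(prices, lookback=3)

# Out-of-sample split: train on early windows, test only on the latest ones,
# so the forecaster is judged on data it never saw during training.
train, test = examples[:-2], examples[-2:]
```

For a non-stationary series, the same windows are typically built from returns or differences rather than raw prices, so that the distribution the network trains on stays roughly stable over time.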
- Title
- The notion of aggregation.
- Creator
- Saksena, Monika., Florida Atlantic University, France, Robert B., Larrondo-Petrie, Maria M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Most popular object-oriented modeling techniques (OOMTs) provide good support for the creation of conceptual models of system behavior and structure. A serious drawback of these techniques is that the concepts and notations used are not rigorously defined, which can lead to the creation of ambiguous models and to disagreements over the proper use and interpretation of modeling constructs. An important modeling construct that is often loosely defined is aggregation. This thesis presents a precise characterization of aggregation that can help developers identify appropriate applications of the concept. Our characterization is the result of a careful analysis of the literature on conceptual modeling, knowledge representation, and object-oriented (OO) modeling. We discuss primary and secondary properties of aggregation and propose annotations for UML (Unified Modeling Language). An extensive discussion of the more useful patterns of aggregation helps developers choose a suitable form of aggregation.
- Date Issued
- 1998
- PURL
- http://purl.flvc.org/fcla/dt/15537
- Subject Headings
- Object-oriented methods (Computer science), UML (Computer science)
- Format
- Document (PDF)
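One distinction such a characterization must pin down is that between weak aggregation (the part's lifetime is independent of the whole) and the stronger composition (the whole exclusively creates and owns its parts). A hypothetical code sketch of that contrast, not drawn from the thesis:

```python
# Aggregation: the part exists independently and is merely referenced.
class Engine:
    def __init__(self, serial):
        self.serial = serial

class Car:
    def __init__(self, engine):
        self.engine = engine      # passed in; the Engine may outlive this Car

# Composition: the whole constructs its parts, which exist only within it.
class Book:
    def __init__(self, chapter_titles):
        self.chapters = [Chapter(t, self) for t in chapter_titles]

class Chapter:
    def __init__(self, title, book):
        self.title = title
        self.book = book          # part holds a back-reference to its whole

spare = Engine("E-42")
car = Car(spare)                  # same Engine could later go into another Car
book = Book(["Intro", "Survey"])  # chapters cannot be built without a Book
```

In UML terms these correspond to the hollow and filled diamond, and the ambiguity the abstract describes arises precisely when a model does not say which of these lifetime semantics is intended.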
- Title
- A study of Internet-based control of processes.
- Creator
- Popescu, Cristian., Florida Atlantic University, Zhuang, Hanqi, Wang, Yuan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In certain applications, one needs to control physical plants that operate in hazardous conditions. In such situations, it is necessary to access the controller from a different (remote) location through data communication networks that interconnect the remote location and the controller. The use of such a network link between the plant and the controller may introduce network delays, which adversely affect the performance of the process control. The main theoretical contribution of this thesis is to answer the following question: how large a network delay can be tolerated such that the delayed closed-loop system remains locally asymptotically stable? An explicit time-independent bound for the delay is derived. In addition, various practical realizations of remote control tasks are presented, utilizing a set of predefined classes for serial communication, data-acquisition modules, and stream-based sockets. Due to the presence of a network, implementing an efficient control scheme is a nontrivial problem; hence, two practical frameworks for Internet-based control are illustrated in this thesis. Related implementation issues are addressed in detail, and examples and case studies are provided to demonstrate the effectiveness of the proposed approach.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13073
- Subject Headings
- Time delay systems, Process control, Computer networks--Remote access, World Wide Web
- Format
- Document (PDF)
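The phenomenon behind the stability question above can be seen in a toy simulation. This sketch is our own illustration with assumed plant and gain values, not the thesis's derivation: an unstable scalar plant x' = a·x + u, stabilized by feedback u(t) = -k·x(t - tau) that arrives over the network tau seconds late.

```python
# Euler simulation of a delayed closed-loop scalar system:
# x' = a*x(t) - k*x(t - tau).  For a=1, k=2 the loop is stable only for
# delays below a critical value; past it, the feedback arrives so stale
# that it pumps energy into the oscillation instead of damping it.
def simulate(tau, a=1.0, k=2.0, h=0.01, t_end=20.0, x0=1.0):
    d = int(round(tau / h))              # delay expressed in Euler steps
    xs = [x0]
    for n in range(int(t_end / h)):
        x_delayed = xs[max(n - d, 0)]    # before t = tau, the initial value
        xs.append(xs[-1] + h * (a * xs[-1] - k * x_delayed))
        if abs(xs[-1]) > 1e6:            # stop once clearly diverging
            break
    return max(abs(x) for x in xs[-200:])   # peak |x| over the final 2 s

small_delay = simulate(tau=0.1)   # well below the limit: the state decays
large_delay = simulate(tau=1.0)   # past the limit: the loop goes unstable
```

A time-independent delay bound of the kind the thesis derives is, in effect, a guarantee that the network keeps tau on the stable side of this divide for all time, not just on average.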
- Title
- A simplistic approach to reactive multi-robot navigation in unknown environments.
- Creator
- MacKunis, William Thomas., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Multi-agent control is a very promising area of robotics. In applications in which it is difficult or impossible for humans to intervene, the use of multi-agent, autonomous robot groups is indispensable. This thesis presents a novel approach to reactive multi-agent control that is practical and elegant in its simplicity. The basic idea upon which this approach is based is that a group of robots can cooperate to determine the shortest path through a previously unmapped environment by redundantly sharing simple data among the agents. The idea was implemented with two robots; in simulation, it was tested with over sixty agents. The results clearly show that the shortest path through various environments emerges as a result of redundant sharing of information between agents. In addition, this approach incorporates safeguarding techniques that reduce the risk to robot agents working in unknown and possibly hazardous environments. Further, the simplicity of this approach makes implementation very practical and easily expandable to reliably control a group comprised of many agents.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13013
- Subject Headings
- Robots--Control systems, Intelligent control systems, Genetic algorithms, Parallel processing (Electronic computers)
- Format
- Document (PDF)
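The core idea, a shortest path emerging from pooled partial observations, can be sketched on a toy grid. This is our simplified construction, not the thesis's algorithm: each agent reports only the cells it has seen, and the path exists only in the merged map.

```python
from collections import deque

# '#' = wall, '.' = free; no single agent ever sees the whole maze.
maze = [
    "S..#.",
    "##.#.",
    "...#.",
    ".###.",
    "....G",
]

def observed_cells(rows):
    """Cells a (hypothetical) agent has reported as traversable."""
    return {(r, c) for r in rows for c in range(5) if maze[r][c] != "#"}

# Agent A explored the top rows, agent B the bottom rows; their reports
# overlap redundantly on row 2 and are merged into one shared map.
shared_map = observed_cells([0, 1, 2]) | observed_cells([2, 3, 4])

def bfs(start, goal, free):
    """Shortest path length over the pooled map, or None if disconnected."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) in free and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

steps = bfs((0, 0), (4, 4), shared_map)   # solvable only with both reports
```

Neither agent's partial map connects S to G on its own; the redundant overlap is what makes the merged map consistent and lets the shortest route emerge, mirroring the abstract's claim.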