Current Search: High performance computing
- Title
- An integrated component selection framework for system level design.
- Creator
- Calvert, Chad., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Increasing system design complexity is negatively impacting overall system design productivity by increasing the cost and time of product development. One key to overcoming these challenges is exploiting Component-Based Engineering practices. However, it is a challenge to select an optimum component from a component library that will satisfy all system functional and non-functional requirements, due to varying performance parameters and quality-of-service requirements. In this thesis we propose an integrated framework for component selection. The framework is a two-phase approach comprising a system modeling and analysis phase and a component selection phase. Three component selection algorithms have been implemented for selecting components for a Network-on-Chip architecture. Two algorithms are based on a standard greedy method, with one enhanced to produce more intelligent behavior. The third algorithm is based on simulated annealing. Further, a prototype was developed to evaluate the proposed framework and compare the performance of all the algorithms.
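The simulated-annealing selection mentioned in the abstract above can be sketched as follows. This is a hypothetical illustration, not the thesis code: the component library, cost/performance figures, and constraint-penalty scheme are all invented for the example.

```python
import math
import random

# Toy component library: for each slot, candidate components with a cost and
# a performance figure. Values are invented for illustration.
LIBRARY = {
    "router":  [{"cost": 5, "perf": 3}, {"cost": 9, "perf": 8}, {"cost": 7, "perf": 6}],
    "link":    [{"cost": 2, "perf": 2}, {"cost": 4, "perf": 5}],
    "adapter": [{"cost": 3, "perf": 4}, {"cost": 6, "perf": 7}],
}
REQUIRED_PERF = {"router": 5, "link": 3, "adapter": 4}

def energy(selection):
    """Total cost plus a heavy penalty for each unmet performance requirement."""
    cost = sum(LIBRARY[slot][i]["cost"] for slot, i in selection.items())
    penalty = sum(
        100 for slot, i in selection.items()
        if LIBRARY[slot][i]["perf"] < REQUIRED_PERF[slot]
    )
    return cost + penalty

def anneal(steps=5000, t0=10.0, cooling=0.999, seed=1):
    rng = random.Random(seed)
    current = {slot: rng.randrange(len(opts)) for slot, opts in LIBRARY.items()}
    best, t = dict(current), t0
    for _ in range(steps):
        # Neighbor move: re-pick the component for one randomly chosen slot.
        slot = rng.choice(list(LIBRARY))
        candidate = dict(current)
        candidate[slot] = rng.randrange(len(LIBRARY[slot]))
        delta = energy(candidate) - energy(current)
        # Always accept improvements; accept worse moves with prob e^(-delta/t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
        if energy(current) < energy(best):
            best = dict(current)
        t *= cooling
    return best

best = anneal()
```

The penalty term turns the hard performance constraints into a soft objective, which is a common way to let annealing explore infeasible selections early while converging to a feasible, low-cost one.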
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/368608
- Subject Headings
- High performance computing, Computer architecture, Engineering design, Data processing, Computer-aided design
- Format
- Document (PDF)
- Title
- Technoeconomic aspects of next-generation telecommunications including the Internet service.
- Creator
- Tourinho Sardenberg, Renata Cristina., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research is concerned with the technoeconomic aspects of modern and next-generation telecommunications, including the Internet service. The study addresses the following: (i) reviewing the technoeconomic considerations prevailing in telecommunication (telco) systems and their implications for the future; (ii) studying relevant considerations by depicting modern/next-generation telecommunications as a digital ecosystem viewed in terms of underlying complex system evolution (akin to biological systems); (iii) pursuant to the digital ecosystem concept, co-evolution modeling of competitive business structures in the technoeconomics of telco services using dichotomous (flip-flop) states as seen in prey-predator evolution; (iv) specific to Internet pricing economics, deducing the profile of consumer surplus versus pricing model under the DiffServ QoS architecture pertinent to dynamic, smart, and static markets; (v) developing and exemplifying decision-making pursuits in telco business under non-competitive and competitive markets (via a game-theoretic approach); and (vi) modeling forecasting issues in telco services in terms of a simplified ARIMA-based time-series approach (which includes seasonal and non-seasonal data plus goodness-of-fit estimations in the time and frequency domains). Commensurate with the topics indicated above, the necessary analytical derivations/models are proposed and computational exercises are performed (with MATLAB™ R2006b and other software as needed). Extensive data gathered from the open literature are used, and ad hoc model verifications are performed. Lastly, results are discussed, inferences are made, and open questions for further research are identified.
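The "simplified ARIMA-based time-series approach" in item (vi) above can be illustrated with its simplest member, an AR(1) model fitted by least squares. This is an assumed minimal sketch, not the dissertation's MATLAB model; the demand series below is synthetic.

```python
# AR(1): x_t = c + phi * x_{t-1} + e_t, fitted by ordinary least squares.

def fit_ar1(series):
    """Return (c, phi) minimizing squared one-step-ahead error."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    phi = sxy / sxx
    c = my - phi * mx
    return c, phi

def forecast(series, horizon, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(horizon):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic demand series for a telco service (toy numbers, steadily growing).
demand = [100, 104, 109, 112, 116, 119, 123, 126]
c, phi = fit_ar1(demand)
future = forecast(demand, 3, c, phi)
```

A full ARIMA treatment would add differencing for non-stationarity and moving-average terms, plus the seasonal components the abstract mentions; the least-squares AR fit is only the starting point.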
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1930492
- Subject Headings
- Computer networks, Management, Telecommunication, Traffic, Management, Intranets (Computer networks), Evaluation, Network performance (Telecommunication), High performance computing, Engineering economy
- Format
- Document (PDF)
- Title
- Fuzzycuda: interactive matte extraction on a GPU.
- Creator
- Gibson, Joel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Natural matte extraction is a difficult and generally unsolved problem. Generating a matte from a nonuniform background traditionally requires a tediously hand-drawn matte. This thesis studies recent methods requiring the user to place only modest scribbles identifying the foreground and the background. This research demonstrates a new GPU-based implementation of the recently introduced FuzzyMatte algorithm. Interactive matte extraction was achieved on a CUDA-enabled G80 graphics processor. Experimental results demonstrate improved performance over the previous CPU-based version. An in-depth analysis of experimental data from the GPU and CPU implementations is provided. The design challenges of porting a variant of Dijkstra's shortest-distance algorithm to a parallel processor are considered.
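The shortest-distance computation underlying the approach in the abstract above can be sketched on the CPU side. This is a hypothetical illustration, not the thesis GPU code: a pixel's "fuzzy distance" to the nearest user scribble is modeled as a shortest path over the image grid, with edge costs taken from intensity differences (the actual FuzzyMatte cost function is more involved).

```python
import heapq

def fuzzy_distances(image, seeds):
    """Dijkstra from a set of seed (scribbled) pixels over a 2-D intensity grid."""
    h, w = len(image), len(image[0])
    dist = {(r, c): float("inf") for r in range(h) for c in range(w)}
    heap = []
    for s in seeds:                       # scribbled pixels start at distance 0
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[(r, c)]:
            continue                      # stale priority-queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = abs(image[nr][nc] - image[r][c])  # cost = intensity change
                if d + step < dist[(nr, nc)]:
                    dist[(nr, nc)] = d + step
                    heapq.heappush(heap, (d + step, (nr, nc)))
    return dist

# Tiny intensity grid: a dark region (0) and a bright region (9).
img = [[0, 0, 9],
       [0, 0, 9],
       [9, 9, 9]]
d = fuzzy_distances(img, seeds=[(0, 0)])
```

The sequential priority queue is exactly what makes a GPU port challenging: a parallel variant must relax many pixels per pass instead of popping one minimum at a time.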
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/186288
- Subject Headings
- Computer graphics, Scientific applications, Information visualization, High performance computing, Real-time data processing
- Format
- Document (PDF)
- Title
- Content identification using video tomography.
- Creator
- Leon, Gustavo A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Video identification, or copy detection, is a challenging problem and is becoming increasingly important with the popularity of online video services. The problem addressed in this thesis is the identification of a given video clip in a given set of videos. For a given query video, the system returns all instances of the video in the data set. This identification system uses video signatures based on video tomography. A robust, low-complexity video signature is designed and implemented. The nature of the signature makes it independent of the most common video transformations. The signatures are generated for video shots rather than individual frames, resulting in a compact signature of 64 bytes per video shot. The signatures are matched using a simple Euclidean distance metric. The results show that videos can be identified with 100% recall and over 93% precision. The experiments included several transformations on the videos.
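The matching step described in the abstract above reduces to a nearest-signature search under Euclidean distance. The sketch below is an assumed illustration, not the thesis implementation: each 64-byte per-shot signature is modeled as a list of 64 byte-valued features, and a query matches a database shot when the distance falls below a threshold.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_matches(query, database, threshold):
    """Return ids of all database shots whose signature is close to the query."""
    return [vid for vid, sig in database.items() if euclidean(query, sig) <= threshold]

# Toy database of per-shot signatures (64 features each; values invented).
db = {
    "clip_a_shot1": [10] * 64,
    "clip_b_shot1": [200] * 64,
    "clip_c_shot1": [12] * 64,   # e.g. clip_a re-encoded: slightly perturbed
}
query = [11] * 64
hits = find_matches(query, db, threshold=20.0)
```

At 64 bytes per shot, a brute-force linear scan like this stays cheap even for large collections, which is consistent with the low-complexity goal the abstract states.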
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/2783207
- Subject Headings
- Biometric identification, High performance computing, Image processing, Digital techniques, Multimedia systems, Security measures
- Format
- Document (PDF)
- Title
- Multimedia Big Data Processing Using Hpcc Systems.
- Creator
- Chinta, Vishnu, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- There is now more data being created than ever before, and this data can take any form: textual, multimedia, spatial, etc. To process this data, several big data processing platforms have been developed, including Hadoop, based on the MapReduce model, and LexisNexis’ HPCC Systems. In this thesis we evaluate the HPCC Systems framework with a special interest in multimedia data analysis and propose a framework for multimedia data processing. It is important to note that multimedia data encompasses a wide variety of data, including but not limited to image data, video data, audio data, and even textual data. While developing a unified framework for such a wide variety of data, we have to consider the computational complexity of dealing with the data. Preliminary results show that HPCC can potentially reduce the computational complexity significantly.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004875
- Subject Headings
- Big data, High performance computing, Software engineering, Artificial intelligence--Data processing, Management information systems, Multimedia systems
- Format
- Document (PDF)
- Title
- HPCC based Platform for COPD Readmission Risk Analysis with implementation of Dimensionality reduction and balancing techniques.
- Creator
- Jain, Piyush, Agarwal, Ankur, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Hospital readmission rates are considered an important indicator of quality of care because they may be a consequence of actions of commission or omission made during the initial hospitalization of the patient, or a consequence of a poorly managed transition of the patient back into the community. The negative impact on patient quality of life and the huge burden on the healthcare system have made reducing hospital readmissions a central goal of healthcare delivery and payment reform efforts. In this study, we propose a framework for how readmission analysis and other healthcare models could be deployed in the real world, along with a machine learning based solution that uses patient discharge summaries as the dataset to train and test the model. Current systems do not take into consideration a very important aspect of solving the readmission problem: Big data. This study therefore addresses the Big data aspects of solutions that can be deployed in the field for real-world use. We have used the HPCC compute platform, which provides a distributed parallel programming platform to create, run, and manage applications involving large amounts of data. We have also proposed feature engineering and data balancing techniques which have been shown to greatly enhance machine learning model performance. This was achieved by reducing the dimensionality of the data and fixing the imbalance in the dataset. The system presented in this study provides real-world machine learning based predictive modeling for reducing readmissions, which could be templatized for other diseases.
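Two of the preprocessing steps named in the abstract above can be illustrated in miniature: dropping near-constant features (a simple form of dimensionality reduction) and undersampling the majority class so readmitted and not-readmitted examples are balanced. This is an assumed sketch, not the study's HPCC/ECL code, and the tiny feature matrix is invented.

```python
import random

def variance(col):
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def drop_low_variance(rows, threshold=0.01):
    """Keep only feature columns whose variance exceeds the threshold."""
    cols = list(zip(*rows))
    keep = [i for i, col in enumerate(cols) if variance(col) > threshold]
    return [[row[i] for i in keep] for row in rows], keep

def undersample(rows, labels, seed=0):
    """Randomly drop majority-class rows until both classes are equal in size."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    kept = sorted(minor + rng.sample(major, len(minor)))
    return [rows[i] for i in kept], [labels[i] for i in kept]

# Invented feature rows (column 0 is constant) with readmission labels.
X = [[1.0, 5.0, 0.2], [1.0, 7.0, 0.9], [1.0, 6.0, 0.4], [1.0, 9.0, 0.8]]
y = [0, 0, 0, 1]
X_red, kept_cols = drop_low_variance(X)   # constant column 0 is dropped
X_bal, y_bal = undersample(X_red, y)      # one negative kept per positive
```

In practice these steps would run distributed over the HPCC cluster, and more sophisticated techniques (e.g. oversampling the minority class, or model-based feature selection) fill the same roles.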
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013560
- Subject Headings
- Machine learning, Big data, Patient Readmission, Hospitals--Admission and discharge--Data processing, High performance computing
- Format
- Document (PDF)
- Title
- System Level Modeling and Simulation with MLDesigner.
- Creator
- Kovalski, Fabiano, Aalo, Valentine A., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- System modeling has the potential to enhance system design productivity by providing a platform for system performance evaluations. The model must be designed at an abstract level, hiding system details; however, it must be able to represent any subsystem or its components at any level of specification detail. In order to model such a system, we need to combine various models of computation (MOC). MOC provide a framework to model various algorithms and activities while accounting for, and exploiting, concurrency and synchronization aspects. Along with supporting various MOC, a modeling environment should also provide a well-developed library. In this thesis, we explore various modeling environments. MLDesigner (MLD) is one such modeling environment that supports a well-developed library and integrates various MOC. We present an overview and discuss the process of system modeling with MLD. We further present an abstract model of a Network-on-Chip in MLD and show latency results for various customizable parameters of this model.
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fau/fd/FA00012531
- Subject Headings
- High performance computing, Electronic data processing--Distributed processing, Information technology--Management, Spatial analysis (Statistics), System design
- Format
- Document (PDF)
- Title
- Energy Efficient Cluster-Based Target Tracking Strategy.
- Creator
- AL-Ghanem, Waleed Khalid, Mahgoub, Imad, Florida Atlantic University
- Abstract/Description
- This research proposes a cluster-based target tracking strategy for one moving object using wireless sensor networks. The sensor field is organized into three hierarchical levels. A 1-bit message is sent when a node detects the target; otherwise the node stays silent. Since wireless sensor network nodes have limited computational resources, limited storage, and limited battery power, the code for predicting the target position should be simple and fast to execute. The algorithm proposed in this research is simple and fast, and utilizes all available detection data for estimating the location of the target while conserving energy. This has the potential of increasing the network lifetime. A simulation program was developed to study the impact of field size and density on the overall performance of the strategy. Simulation results show that the strategy saves energy while estimating the location of the target with an acceptable error margin.
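One simple way to fuse the 1-bit detection messages described in the abstract above is a centroid estimate: each detecting node reports only its identity, and the cluster head averages the known coordinates of the reporting nodes. This is a hypothetical sketch, not the thesis algorithm; the node layout is invented.

```python
def estimate_position(node_positions, detections):
    """Centroid of the nodes whose 1-bit detection flag is set."""
    hits = [node_positions[i] for i, bit in enumerate(detections) if bit]
    if not hits:
        return None                       # target not detected this round
    n = len(hits)
    return (sum(x for x, _ in hits) / n, sum(y for _, y in hits) / n)

# Four nodes at the corners of a unit square; the three nearest the target fire.
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
est = estimate_position(nodes, [1, 1, 1, 0])
```

The per-node work is a single comparison and a 1-bit transmission, and the cluster head's estimate costs a handful of additions per round, which matches the abstract's emphasis on simple, fast, energy-conserving computation.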
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012501
- Subject Headings
- Wireless communication systems--Technological innovations, Sensor networks--Security measures, High performance computing, Adaptive signal processing, Target acquisition, Expert systems (Computer science)
- Format
- Document (PDF)