Current Search: Information theory
- Title
- RATIONAL EXPECTATIONS AND INFORMATION THEORY.
- Creator
- Atallo, Robert Alan, Florida Atlantic University, Hanage, Neela, College of Business, Department of Economics
- Abstract/Description
- This thesis examines the relationship between the rational expectations flexible-price macroeconomic model and the pure theory of information originally developed within the electronic communications disciplines. The thesis first develops the rational expectations macromodel, including discussions of the model's assumptions, robustness, econometric issues, and important empirical results. The pure theory of information is then developed, and the "polar cases" of full information and complete information deprivation are examined. Finally, a generalized model of information acquisition and "transmission noise" is developed within the rational expectations framework.
- Date Issued
- 1987
- PURL
- http://purl.flvc.org/fcla/dt/14405
- Subject Headings
- Rational expectations (Economic theory), Information theory
- Format
- Document (PDF)
- Title
- Quantitative methodology and applications in measuring supply chain complexity.
- Creator
- Morgan, Courtney Luke, Sr., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- This paper presents an entropy-based methodology for measuring and analyzing material and information complexity across organizational interfaces within a logistical framework; material and information flows within the supply chain are often complex. The complexity analysis described in this paper can be used to highlight several issues that are critical to effective supply chain management and internal control. My methodology is based on the understanding that the level of control and predictability, as well as the dynamic and static characteristics of the flows, contribute to the overall level of complexity within the supply chain. This study demonstrates that when supply chain complexity is realistically understood and carefully and accurately assessed, it can be used to monitor and effectively control the performance of the entire supply chain and delivery system. At the supply chain interface, the methodology can provide quantitative insights into the transfer of complexity along the supply chain.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12938
- Subject Headings
- Business logistics, Entropy (Information theory)
- Format
- Document (PDF)
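The entropy measure described in the preceding abstract reduces to Shannon's formula applied to the observed states of a flow. A minimal Python sketch, assuming hypothetical weekly delivery states rather than the thesis's data:

import math
from collections import Counter

def flow_entropy(observations):
    """Shannon entropy (bits) of one material or information flow's
    observed states; higher entropy means a less predictable flow."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical weekly delivery outcomes at two supplier interfaces.
steady_flow = ["on-time"] * 48 + ["late"] * 2
erratic_flow = ["on-time"] * 20 + ["late"] * 15 + ["partial"] * 15

print(flow_entropy(steady_flow))   # ~0.24 bits: well controlled
print(flow_entropy(erratic_flow))  # ~1.57 bits: complex, harder to manage

Summing such per-flow entropies across the interfaces of a supply chain gives one defensible aggregate complexity figure, which is the spirit of the methodology the abstract describes.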
- Title
- Studies on nonlinear activity and cross-entropy considerations in neural networks.
- Creator
- Abusalah, Salahalddin Tawfiq., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The objectives of this research as deliberated in this dissertation are twofold: (i) to study the nonlinear activity in the neural complex (real and artificial) and (ii) to analyze the learning process(es) pertinent to an artificial neural network in the information-theoretic plane using cross-entropy error-metrics. The research efforts envisaged encompass the following specific tasks: (i) obtaining a general solution for the Bernoulli-Riccati equation to represent a single-parameter family of S-shaped (sigmoidal) curves depicting the nonlinear activity in the neural network; (ii) analysis of the logistic growth of output versus input values in the neural complex (real and artificial) under the consideration that the boundaries of the sets constituting the input and output entities are crisp and/or fuzzy; (iii) construction of a set of cross-entropy error-metrics (known as Csiszar's measures) deduced in terms of the parameters pertinent to a perceptron topology and elucidation of their relative effectiveness in training the network optimally towards convergence; (iv) presenting the methods of symmetrizing and balancing the aforesaid error-entropy measures (in the information-theoretic plane) so as to make them usable as error-metrics in the test domain; (v) description and analysis of the dynamics of the neural learning process in the information-theoretic plane for both crisp and fuzzy attributes of input values. Relevant to these topics portraying the studies on nonlinear activity and cross-entropy considerations vis-a-vis neural networks, newer and/or exploratory inferences are made, logical conclusions are enumerated, and relative discussions are presented along with the scope for future research to be pursued.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12447
- Subject Headings
- Neural networks (Computer science), Entropy (Information theory), Nonlinear control theory
- Format
- Document (PDF)
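The cross-entropy error-metrics named in the preceding abstract can be illustrated in miniature. A hedged Python sketch of binary cross-entropy and of the Kullback-Leibler divergence, one member of the Csiszar family; the network outputs are hypothetical:

import numpy as np

def sigmoid(x):
    """S-shaped activation of the kind the sigmoidal-activity study concerns."""
    return 1.0 / (1.0 + np.exp(-x))

def cross_entropy(target, predicted, eps=1e-12):
    """Binary cross-entropy between teacher values and network outputs."""
    p = np.clip(predicted, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence, a representative Csiszar-type measure."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

targets = np.array([1.0, 0.0, 1.0, 1.0])
outputs = sigmoid(np.array([2.0, -1.5, 0.5, 3.0]))  # hypothetical net outputs
print(cross_entropy(targets, outputs))
print(kl_divergence(np.array([0.5, 0.5]), np.array([0.9, 0.1])))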
- Title
- Information-theoretics based analysis of hard handoffs in mobile communications.
- Creator
- Bendett, Raymond Morris., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The research proposed and elaborated in this dissertation is concerned with the development of new decision algorithms for hard handoff strategies in mobile communication systems. Specifically, the research tasks envisaged include the following: (1) use of information-theoretics based statistical distance measures as a metric for hard handoff decisions; (2) a study to evaluate the log-likelihood criterion towards decision considerations to perform the hard handoff; (3) development of a statistical model to evaluate optimum instants of measurements of the metric used for the hard handoff decision. The aforesaid objectives refer to a practical scenario in which a mobile station (MS) traveling away from a serving base station (BS-I) may suffer communications impairment due to interference and shadowing effects, especially in an urban environment. As a result, it will seek to switch over to another base station (BS-II) that facilitates a stronger signal level. This is called the handoff procedure. (Hard handoff refers to the specific case in which only one base station serves the mobile at the instant of handover.) Classically, the handoff decision is made on the basis of the difference between received signal strengths (RSS) from BS-I and BS-II. The algorithms developed here, in contrast, stipulate the decision criterion set by the statistical divergence and/or log-likelihood ratio that exists between the received signals. The purpose of the present study is to evaluate the relative efficacy of the conventional and proposed algorithms in reference to: (i) minimization of unnecessary handoffs ("ping-pongs"); (ii) minimization of delay in handing over; (iii) ease of implementation; and (iv) minimization of possible call dropouts due to ineffective handover. Simulated results with data commensurate with practical considerations are furnished and discussed. Background literature is presented in the introductory chapter and scope for future work is identified via open questions in the concluding chapter.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12639
- Subject Headings
- Mobile communication systems, Information theory, Algorithms
- Format
- Document (PDF)
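The contrast drawn in the preceding abstract, a classical RSS-difference rule versus a divergence-based criterion, can be sketched as follows. This is an illustrative reading, not the dissertation's exact algorithm; the hysteresis margin, window length, threshold, and Gaussian signal model are all assumptions:

import numpy as np

def classic_handoff(rss_serving, rss_candidate, hysteresis_db=4.0):
    """Classical rule: hand off when the candidate's RSS exceeds the
    serving RSS by a fixed hysteresis margin (dB)."""
    return rss_candidate - rss_serving > hysteresis_db

def divergence_handoff(win_serving, win_candidate, threshold=1.0):
    """Divergence-style rule: symmetric Kullback-Leibler (J) divergence
    between Gaussian fits of recent RSS windows from BS-I and BS-II."""
    m1, v1 = np.mean(win_serving), np.var(win_serving) + 1e-9
    m2, v2 = np.mean(win_candidate), np.var(win_candidate) + 1e-9
    j = 0.5 * ((v1 / v2 + v2 / v1 - 2) + (m1 - m2) ** 2 * (1 / v1 + 1 / v2))
    return j > threshold and m2 > m1

rng = np.random.default_rng(0)
serving = rng.normal(-85, 3, 50)    # hypothetical dBm samples from BS-I
candidate = rng.normal(-80, 3, 50)  # stronger signal from BS-II
print(classic_handoff(serving.mean(), candidate.mean()))
print(divergence_handoff(serving, candidate))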
- Title
- Measurement of coupling and cohesion of software.
- Creator
- Chen, Ye., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
- Graphs are often used to depict an abstraction of software. A graph may be an abstraction of a software system, and a subgraph may represent a software module. Coupling and cohesion are attributes that summarize the degree of interdependence or connectivity among subsystems or within subsystems, respectively. When used in conjunction with measures of other attributes, coupling and cohesion can contribute to an assessment or prediction of software quality. Information theory is attractive to us because the design decisions embodied by the graph are information. Using information theory, we propose measures of the cohesion and coupling of a modular system and of the cohesion and coupling of each constituent module. These measures conform to the properties of cohesion and coupling defined by Briand, Morasca, and Basili, applied to undirected graphs, and therefore are in the families of measures called cohesion and coupling.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/15760
- Subject Headings
- Information theory, Computer software--Evaluation, Software measurement
- Format
- Document (PDF)
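One plausible information-theoretic reading of the graph abstraction above, not necessarily the exact measures of the thesis: score a subgraph by the information carried by its nodes' connection patterns, so that highly regular (cohesive) modules carry little information. A Python sketch over a hypothetical adjacency matrix:

import math
from collections import Counter

def graph_information(adjacency_rows):
    """Information content (bits) of a graph abstraction: each node
    contributes -log2 of the relative frequency of its connection
    pattern, so repeated patterns carry little information."""
    patterns = [tuple(row) for row in adjacency_rows]
    freq = Counter(patterns)
    n = len(patterns)
    return sum(-math.log2(freq[p] / n) for p in patterns)

# Hypothetical 4-node module: nodes 0-2 share one pattern, node 3 differs.
adj = [
    (0, 1, 1, 0),
    (0, 1, 1, 0),
    (0, 1, 1, 0),
    (1, 0, 0, 0),
]
print(graph_information(adj))  # ~3.25 bits vs. 8 bits for four distinct rows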
- Title
- Information theory and software measurement.
- Creator
- Allen, Edward B., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Development of reliable, high-quality software requires study and understanding at each step of the development process. A basic assumption in the field of software measurement is that metrics of internal software attributes somehow relate to the intrinsic difficulty in understanding a program. Measuring the information content of a program attempts to indirectly quantify the comprehension task. Information theory based software metrics are attractive because they quantify the amount of information in a well-defined framework. However, most information theory based metrics have been proposed with little reference to measurement theory fundamentals, and empirical validation of predictive quality models has been lacking. This dissertation proves that representative information theory based software metrics can be "meaningful" components of software quality models in the context of measurement theory. To this end, members of a major class of metrics are shown to be regular representations of Minimum Description Length or Variety of software attributes, and are interval scale. An empirical validation case study is presented that predicted faults in modules based on Operator Information. This metric is closely related to Harrison's Average Information Content Classification, which is the entropy of the operators. New general methods for calculating synthetic complexity at the system level and module level are presented, quantifying the joint information of an arbitrary set of primitive software measures. Since all kinds of information are not equally relevant to software quality factors, components of synthetic module complexity are also defined. Empirical case studies illustrate the potential usefulness of the proposed synthetic metrics. A metrics database is often the key to a successful ongoing software metrics program. The contribution of any proposed metric is defined in terms of measured variation using information theory, irrespective of the metric's usefulness in quality models. This is of interest when full validation is not practical. Case studies illustrate the method.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12412
- Subject Headings
- Software engineering, Computer software--Quality control, Information theory
- Format
- Document (PDF)
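Harrison's Average Information Content Classification, cited in the abstract as the entropy of the operators, is direct to compute. A minimal sketch; the operator stream is hypothetical:

import math
from collections import Counter

def operator_entropy(operators):
    """Entropy (bits) of a module's operator distribution, in the spirit
    of Harrison's Average Information Content Classification."""
    counts = Counter(operators)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical operator stream extracted from one module's source code.
ops = ["=", "+", "=", "if", "==", "+", "=", "*", "if", "=="]
print(operator_entropy(ops))  # higher entropy suggests a harder comprehension task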
- Title
- Information-theoretic aspects of local access techniques/ADSL with or without ATM-centric considerations.
- Creator
- Preechayasomboon, Apiruck., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The research proposed and elaborated in this dissertation is concerned with the development of new and smart techniques for subchannel allocation in asymmetric digital subscriber lines (ADSLs). ADSL refers to a class of access technology adopted currently in modern telecommunications to make use of the available channel capacity on the twisted copper-wires that exist in the "last mile" between the central office and subscribers. This available spectrum on the voice-grade copper lines is judiciously used to transport broadband data over the last-mile regime. For this purpose, the channel capacity on the access lines is segmented into subchannels, and the traffic to be transported is placed on the subchannels by matching the bit-rates of the traffic to the subchannel capacity (as dictated by the Hartley-Shannon law). The available capacities for upstream and downstream are of different extents (640 kbps for upstream and 9 Mbps for downstream); hence the transports are qualified as asymmetric. Relevant to the subchannel allocation as above, the specific research carried out can be enumerated as follows: (1) development of a subchannel allocation metric (SAM) on the basis of information-theoretic considerations, duly accounting for noise/interference effects on the access lines and BER-based information-impairments on the trunks (feeding the access lines); (2) use of SAM as an algorithmic support to train an artificial neural network (ANN), which is facilitated at the ADSL modem performing subchannel allocation. A new version of ANN training (and subchannel allocation prediction) strategies is developed by implementing the ANN operation in the entropy-plane. This technique allows a fast convergence of the ANN compatible with telecommunication transports. The incorporation of the ANN in the modem renders the subchannel allocation smart; (3) fuzzy considerations are also included in the ANN indicated above, and the operation of the ADSL modem is then tuned to function as an intelligent neuro-inference engine in its efforts towards subchannel allocation; (4) ATM support on ADSL lines is investigated, and a scheme for allocating the permanent and switched virtual circuits (supporting ATM specified traffic) on the subchannels of access lines is developed. Relevant call-blocking probabilities are assessed; (5) lastly, the EMI/RFI and crosstalks on access lines are studied in the framework of practical considerations, and mitigatory efforts are suggested thereof. Simulated results using data commensurate with practical aspects of ADSL transport are furnished and discussed. Background literature is comprehensively presented chapterwise, and scope for future work is identified via open questions in the concluding chapter.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12652
- Subject Headings
- Data transmission systems, Asynchronous transfer mode, Information theory
- Format
- Document (PDF)
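The Hartley-Shannon law cited in the abstract fixes each subchannel's capacity, and allocation then matches traffic bit-rates to those capacities. A minimal sketch assuming DMT-style 4.3125 kHz tones, hypothetical per-tone SNRs, and a greedy fill rather than the dissertation's SAM metric:

import math

def subchannel_capacity(bandwidth_hz, snr_linear):
    """Hartley-Shannon law: C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

tones_snr_db = [30, 25, 18, 10, 5]   # hypothetical per-tone SNRs
capacities = [subchannel_capacity(4312.5, 10 ** (snr / 10))
              for snr in tones_snr_db]

# Greedy sketch: cover a 64 kbps stream with the fewest high-SNR tones.
demand_bps, total, used = 64_000, 0.0, 0
for c in sorted(capacities, reverse=True):
    total += c
    used += 1
    if total >= demand_bps:
        break
print(round(total), "bps over", used, "tones")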
- Title
- Studies on information-theoretics based data-sequence pattern-discriminant algorithms: Applications in bioinformatic data mining.
- Creator
- Arredondo, Tomas Vidal., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research refers to studies on information-theoretic (IT) aspects of data-sequence patterns and the development thereof of discriminant algorithms that enable distinguishing the features of underlying sequence patterns having characteristic, inherent stochastic attributes. The application potentials of such algorithms include bioinformatic data-mining efforts. Consistent with the scope of the study as above, considered in this research are specific details on information-theoretics and entropy considerations vis-a-vis sequence patterns (having stochastic attributes) such as the DNA sequences of molecular biology. Applying information-theoretic concepts (essentially in Shannon's sense), the following distinct sets of metrics are developed and applied in the algorithms developed for data-sequence pattern-discrimination applications: (i) divergence or cross-entropy algorithms of Kullback-Leibler type and of the general Csiszar class; (ii) statistical distance measures; (iii) ratio-metrics; (iv) a Fisher-type linear-discriminant measure; and (v) a complexity metric based on information redundancy. These measures are judiciously adopted in ascertaining codon-noncodon delineations in DNA sequences that consist of crisp and/or fuzzy nucleotide domains across their chains. The Fisher measure is also used in codon-noncodon delineation and in motif detection. Relevant algorithms are used to test DNA sequences of human and some bacterial organisms. The relative efficacy of the metrics and the algorithms is determined and discussed. The potentials of such algorithms in supplementing the prevailing methods are indicated. Scope for future studies is identified in terms of persisting open questions.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fau/fd/FADT12057
- Subject Headings
- Data mining, Bioinformatics, Discriminant analysis, Information theory in biology
- Format
- Document (PDF)
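A Kullback-Leibler divergence of the kind listed under (i) can be applied to nucleotide windows directly. A hedged sketch; the sequence and the reference coding-region distribution are hypothetical:

import math
from collections import Counter

def window_distribution(seq, eps=1e-9):
    counts = Counter(seq)
    return {b: (counts.get(b, 0) / len(seq)) or eps for b in "ACGT"}

def kl_divergence(p, q):
    """Kullback-Leibler divergence (bits) between nucleotide distributions."""
    return sum(p[b] * math.log2(p[b] / q[b]) for b in "ACGT")

reference = {"A": 0.22, "C": 0.28, "G": 0.30, "T": 0.20}  # assumed codon-region profile
seq = "ATGGCGTACGCTAGCTAGGCTACGATCGATTTTATATATATTTAAAT"
for start in range(0, len(seq) - 12, 12):
    window = seq[start:start + 12]
    print(start, round(kl_divergence(window_distribution(window), reference), 3))

High-divergence windows flag regions whose composition departs from the reference profile, which is the delineation idea in miniature.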
- Title
- THE USE OF CERTAINTY EQUIVALENCE FOR LOCATION AND ECONOMIC GROWTH DECISIONS.
- Creator
- WONSETLER, ELIZABETH ANN., Florida Atlantic University, Scheidell, John M., College of Business, Department of Economics
- Abstract/Description
- This study was prepared to analyze the use of the first-period certainty equivalence procedure in location decisions. Certainty equivalence is a mathematical technique which explicitly incorporates probabilistic uncertainty in the decision-making process. The feasible location of an international jetport which would service the South Florida region is used to illustrate this decision-making technique.
- Date Issued
- 1972
- PURL
- http://purl.flvc.org/fcla/dt/13473
- Subject Headings
- Decision-making--Mathematical models, Uncertainty (Information theory)
- Format
- Document (PDF)
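Certainty equivalence can be stated compactly: the certain payoff whose utility equals the expected utility of the risky prospect, CE = u^{-1}(E[u(x)]). A minimal Python sketch assuming a log (risk-averse) utility and hypothetical jetport-site payoffs:

import math

def certainty_equivalent(outcomes, probs, u=math.log, u_inv=math.exp):
    """CE = u^{-1}(E[u(x)]): the sure amount worth the same as the gamble."""
    expected_utility = sum(p * u(x) for x, p in zip(outcomes, probs))
    return u_inv(expected_utility)

payoffs = [40.0, 100.0, 160.0]  # hypothetical payoffs (millions) by scenario
probs = [0.3, 0.5, 0.2]
print(certainty_equivalent(payoffs, probs))  # ~83.4, below the 94.0 expected value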
- Title
- Implementation and comparison of the Golay and first order Reed-Muller codes.
- Creator
- Shukina, Olga., Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- In this project we perform data transmission across noisy channels and recover the message first by using the Golay code, and then by using the first-order Reed-Muller code. The main objective of this thesis is to determine which code among the above two is more efficient for text message transmission by applying the two codes to exactly the same data with the same channel error bit probabilities. We use the comparison of the error-correcting capability and the practical speed of the Golay code and the first-order Reed-Muller code to meet our goal.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fcla/dt/3362579
- Subject Headings
- Error-correcting codes (Information theory), Coding theory, Computer algorithms, Digital modulation
- Format
- Document (PDF)
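The first-order Reed-Muller code compared in the abstract admits a compact implementation: RM(1,3) maps 4 message bits to 8 code bits and corrects any single channel error. A minimal sketch with brute-force maximum-likelihood decoding, practical only for codes this small:

import itertools
import numpy as np

m = 3  # RM(1,3): length 8, 16 codewords, minimum distance 4
points = list(itertools.product([0, 1], repeat=m))
# Generator rows: the all-ones function plus the m coordinate functions.
G = np.array([[1] * 2 ** m] + [[p[i] for p in points] for i in range(m)])

def encode(bits):
    return np.array(bits) @ G % 2

def decode(received):
    """Correlate the received word against all 16 codewords and keep
    the message whose codeword agrees in the most positions."""
    best, best_corr = None, -1
    for msg in itertools.product([0, 1], repeat=m + 1):
        corr = int(np.sum(1 - 2 * ((encode(msg) + received) % 2)))
        if corr > best_corr:
            best, best_corr = msg, corr
    return best

word = encode([1, 0, 1, 1])
word[5] ^= 1          # one channel bit error
print(decode(word))   # recovers (1, 0, 1, 1)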
- Title
- DSP implementation of turbo decoder using the Modified-Log-MAP algorithm.
- Creator
- Khan, Zeeshan Haneef., Florida Atlantic University, Zhuang, Hanqi, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The design of any communication receiver needs to address the issue of operating under the lowest possible signal-to-noise ratio. Among various algorithms that facilitate this objective are those used for iterative decoding of two-dimensional systematic convolutional codes in applications such as spread-spectrum communications and Code Division Multiple Access (CDMA) detection. A main theme of any decoding scheme is to approach the Shannon limit in signal-to-noise ratio. All these decoding algorithms have various complexity levels and processing delay issues. Hence, the optimality depends on how they are used in the system. The technique used in various decoding algorithms is termed iterative decoding. Iterative decoding was first developed as a practical means for decoding turbo codes. With log-likelihood algebra, it is shown that a decoder can be developed that accepts soft inputs as a priori information and delivers soft outputs consisting of channel information, a posteriori information, and extrinsic information to subsequent stages of iteration. Different algorithms such as the Soft Output Viterbi Algorithm (SOVA), Maximum A Posteriori (MAP), and Log-MAP are compared and their complexities are analyzed in this thesis. A turbo decoder is implemented on the Digital Signal Processing (DSP) chip TMS320C30 by Texas Instruments using a Modified-Log-MAP algorithm. For the Modified-Log-MAP algorithm, the optimal choice of the lookup table (LUT) is analyzed by experimenting with different LUT approximations. A low-complexity decoder is proposed for a (7,5) code and implemented on the DSP chip. Performance of the decoder is verified under an Additive White Gaussian Noise (AWGN) environment. Hardware issues such as memory requirements and processing time are addressed for the chosen decoding scheme. Test results of the bit error rate (BER) performance are presented for a fixed number of frames and iterations.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12948
- Subject Headings
- Error-correcting codes (Information theory), Signal processing--Digital techniques, Coding theory, Digital communications
- Format
- Document (PDF)
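The lookup-table analysis described in the abstract centers on the max* (Jacobian logarithm) operation, ln(e^a + e^b) = max(a, b) + ln(1 + e^{-|a-b|}); Modified-Log-MAP replaces the correction term with a LUT. A minimal sketch in which the 8-entry, 0.5-step table is an assumed quantization, not the thesis's chosen table:

import math

def max_star_exact(a, b):
    """Exact Jacobian logarithm used throughout Log-MAP decoding."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Assumed LUT for the correction term ln(1 + e^-x), indexed by |a - b|
# quantized in steps of 0.5 (the Modified-Log-MAP idea).
LUT = [math.log1p(math.exp(-0.5 * i)) for i in range(8)]

def max_star_lut(a, b):
    idx = min(int(abs(a - b) / 0.5), len(LUT) - 1)
    return max(a, b) + LUT[idx]

print(max_star_exact(1.2, 0.9), max_star_lut(1.2, 0.9))

Dropping the correction entirely gives Max-Log-MAP, the cheapest and least accurate member of this family; the LUT buys back most of the lost accuracy at small cost.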
- Title
- The Perceived Impact of Technology-Based Informal Learning on Membership Organizations.
- Creator
- Miller, Lori, Bryan, Valerie, Florida Atlantic University, College of Education, Department of Educational Leadership and Research Methodology
- Abstract/Description
- Educational leadership goes beyond the boundaries of the classroom; skills needed for talent development professionals in business closely align with those needed in traditional educational leadership positions, as both are responsible for the development and growth of others. Traditionally, the role of professional membership associations or organizations such as the Association for Talent Development (ATD, formerly known as the American Society for Training and Development), the group dedicated to individuals in the field of workplace learning and development, is to provide learning opportunities, set standards, identify best practices in their respective fields, and allow members to network with other professionals who share their interests. However, with the rampant increase in the use of technology and social networking, individuals are now able to access a vast majority of information for free online via tools such as LinkedIn, Facebook, Google, and YouTube. Where has this left organizations that typically charged for access to this type of information in the past? Surveys and interviews were conducted with ATD members in this mixed-methods study to answer the following research questions: 1. What are the perceptions of Association for Talent Development (ATD) members regarding the effect of technology-based informal learning on the role of ATD? 2. How do ATD members utilize technology for informal learning? 3. Are there factors such as gender, age, ethnicity, educational level, or length of time in the field that predict a member's likelihood to utilize technology for informal learning? 4. Are there certain ATD competency areas for which informal learning is preferred over non-formal or formal learning? The significance of the study includes the identification of how the Association for Talent Development can continue to support professionals in our constantly evolving technological society, as well as advancing the field by contributing research connecting informal learning with technology to membership organization roles.
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00004523
- Subject Headings
- Educational leadership--Influence., Virtual reality in management., Knowledge management., Information networks., Organizational learning., Knowledge representation (Information theory)
- Format
- Document (PDF)
- Title
- A min/max algorithm for cubic splines over k-partitions.
- Creator
- Golinko, Eric David, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- The focus of this thesis is to statistically model violent crime rates against population over the years 1960-2009 for the United States. We approach this question as one of interest since the trend of population for individual states follows different patterns. We propose here a method which employs cubic spline regression modeling. First we introduce a minimum/maximum algorithm that will identify potential knots. Then we employ least squares estimation to find potential regression coefficients based upon the cubic spline model and the knots chosen by the minimum/maximum algorithm. We then utilize the best subsets regression method to aid in model selection, in which we find the minimum value of the Bayesian Information Criterion. Finally, we present the adjusted R-squared as a measure of overall goodness of fit of our selected model. We have found that among the fifty states and Washington D.C., 42 out of 51 showed an adjusted R-squared value that was greater than 90%. We also present an overall model of the United States. Also, we show additional applications of our algorithm for data which show a nonlinear association. It is hoped that our method can serve as a unified model for violent crime rates over future years.
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3342107
- Subject Headings
- Spline theory, Data processing, Bayesian statistical decision theory, Data processing, Neural networks (Computer science), Mathematical statistics, Uncertainty (Information theory), Probabilities, Regression analysis
- Format
- Document (PDF)
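The selection step described in the abstract, least-squares cubic splines scored by the Bayesian Information Criterion, can be sketched directly. The truncated-power basis, candidate knot sets, and synthetic data below are hypothetical stand-ins, not the crime-rate series:

import numpy as np

def bic(y, y_hat, num_params):
    """BIC for a least-squares fit: n*ln(RSS/n) + k*ln(n); smaller is better."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + num_params * np.log(n)

def cubic_spline_design(x, knots):
    """Truncated-power cubic basis: 1, x, x^2, x^3, and (x - k)^3_+ per knot."""
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 120)
y = np.where(x < 5, x ** 2, 25 + 8 * (x - 5)) + rng.normal(0, 2, x.size)

for knots in ([], [5.0], [3.0, 5.0, 7.0]):   # candidate knot sets
    X = cubic_spline_design(x, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(knots, round(bic(y, X @ beta, X.shape[1]), 1))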
- Title
- Creating mindfulness with sensual functional handmade ceramics.
- Creator
- Schwartz, Alexandria., Dorothy F. Schmidt College of Arts and Letters, Department of Visual Arts and Art History
- Abstract/Description
- I create opportunities for nourishment that are physical, emotional and spiritual with my functional porcelain vessels. They reference the human body's sensual curves, dimples, and bulges, establishing the experience of eating as a metaphor for the sensual experience of human interaction. The tactility is heightened by the variety of glazes dancing around the vessels, from satiny smooth and skin-like, to wet and dripping. Handmade vessels connect the users not only more deeply to the food that provides them nourishment, but also connect them more deeply to one another, and to the maker of the work. The slow, deliberate work of making one-of-a-kind objects is similar to the act of carefully preparing a homemade meal, and in turn, dedicating time to the ritual of sitting down together to enjoy that meal. Whether I'm working in my studio creating vessels, or in my kitchen creating a meal, I derive the same experience of spiritual wellbeing. In these moments I am completely present and mindful.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fcla/dt/3361056
- Subject Headings
- Conceptual structures (Information theory), Symbolic interactionism, Body, Human, Social aspects, Resilience (Psychology), Quality of life
- Format
- Document (PDF)
- Title
- Information-theoretics based technoeconomic growth models: simulation and computation of forecasting in telecommunication services.
- Creator
- Yassin, Raef Rashad., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research is concerned with the algorithmic representation of technoeconomic growth concerning modern and next-generation telecommunications, including Internet service. The goal of this study is to emphasize efforts to establish the associated forecasting, and the envisioned tasks include: (i) reviewing the technoeconomic considerations prevailing in the telecommunication (telco) service industry and their implicating features; (ii) studying relevant aspects of underlying complex system evolution (akin to biological systems); (iii) pursuant co-evolution modeling of competitive business structures using dichotomous (flip-flop) states as seen in predator evolutions; (iv) conceiving a novel algorithm based on information-theoretic principles toward technoeconomic forecasting on the basis of a modified Fisher-Kaysen model consistent with the proportional fairness concept of consumers' willingness-to-pay; and (v) evaluating forecast needs on inter-office facility based congestion-sensitive traffic encountered. Commensurate with the topics indicated above, necessary algorithms, analytical derivations and compatible models are proposed. Relevant computational exercises are performed with MATLAB™ using data gathered from the open literature on the service profiles of telecommunication companies (telcos), and ad hoc model verifications are performed on the results. Lastly, discussions and inferences are made, with open questions identified for further research.
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3356898
- Subject Headings
- Signal theory (Telecommunication), Data compression (Computer science), Telecommunication systems, Mathematical models, Information science
- Format
- Document (PDF)
- Title
- Signature schemes in single and multi-user settings.
- Creator
- Villanyi, Viktoria., Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- In the first chapters we will give a short introduction to signature schemes in single and multi-user settings. We give the definition of a signature scheme and explain a group of possible attacks on them. In Chapter 6 we give a construction which derives a subliminal-free RSA public key. In the construction we use a computationally binding and unconditionally hiding commitment scheme. To establish a subliminal-free RSA modulus n, we have to construct the secret primes p and q. To prove p and q are primes we use Lehmann's primality test on the commitments. The chapter is based on the paper "RSA signature schemes with subliminal-free public key" (Tatra Mountains Mathematical Publications 41 (2008)). In Chapter 7 a one-time signature scheme using run-length encoding is presented, which in the random oracle model offers security against chosen-message attacks. For parameters of interest, the proposed scheme enables about 33% faster verification than a construction of Merkle and Winternitz, with a comparable signature size. The public key size remains unchanged (1 hash value). The main cost for the faster verification is an increase in the time required for signing messages and for key generation. The chapter is based on the paper "A one-time signature using run-length encoding" (Information Processing Letters Vol. 108, Issue 4, (2008)).
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/215289
- Subject Headings
- Information technology, Security measures, Cryptography, Coding theory, Data encryption (Computer science), Digital watermarking
- Format
- Document (PDF)
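For orientation on the hash-based one-time signatures discussed in the abstract, a minimal Lamport-style scheme is sketched below; it is illustrative only, and the thesis's run-length-encoding construction differs in how it trades signing time for faster verification:

import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen(n=256):
    """Lamport one-time keys: two secret preimages per message-digest bit."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(n)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg, n=256):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(n)]

def sign(sk, msg):
    return [pair[bit] for pair, bit in zip(sk, bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"transfer 100")
print(verify(pk, b"transfer 100", sig))  # True
print(verify(pk, b"transfer 999", sig))  # False; a one-time key must never be reused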
- Title
- Sparse and low rank constraints on optical flow and trajectories.
- Creator
- Gibson, Joel, Marques, Oge, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this dissertation we apply sparse constraints to improve optical flow and trajectories. We apply sparsity in two ways. First, with 2-frame optical flow, we enforce a sparse representation of flow patches using a learned overcomplete dictionary. Second, we apply a low rank constraint to trajectories via robust coupling. We begin with a review of optical flow fundamentals. We discuss the commonly used flow estimation strategies and the advantages and shortcomings of each. We introduce the concepts associated with sparsity including dictionaries and low rank matrices.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004286
- Subject Headings
- Approximation theory -- Mathematical models, Computer vision, Image processing -- Digital techniques, Information visualization
- Format
- Document (PDF)
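The sparse representation of flow patches mentioned in the abstract can be illustrated with orthogonal matching pursuit; here the dictionary is random rather than learned, and the 2-sparse signal is a hypothetical stand-in for a flow patch:

import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms,
    refitting by least squares, to obtain a k-sparse code for y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(2)
D = rng.normal(size=(16, 64))        # overcomplete dictionary (4x redundant)
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
y = 2.0 * D[:, 5] - 1.5 * D[:, 40]   # hypothetical 2-sparse "patch"
print(np.nonzero(omp(D, y, 2))[0])   # typically recovers atoms 5 and 40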
- Title
- Estimation of information: Theoretics-based delay bounds of MPEG traffic over ATM networks.
- Creator
- Jagannathan, Shuba., Florida Atlantic University, Hsu, Sam, Neelakanta, Perambur S.
- Abstract/Description
- This thesis addresses a method to deduce the statistical bounds associated with the cell-transfer delay variations (CDVs) encountered by the cells of MPEG traffic transmitted in Asynchronous Transfer Mode (ATM) networks. This study focuses on: (1) estimating the CDV arising from multiplexing/switching for both constant bit rate (CBR) and variable bit rate (VBR) traffic via priority-allocation based simulations; (2) developing an information-theoretics based technique to gain insight into the combined BER-induced and multiplexing/switching-induced CDVs in ATM networks. Algorithms pertinent to CDV statistics are derived, and the lower and upper bounds of the statistics are obtained via simulations in respect of CBR and VBR traffic. Ascertaining these bounds is useful in the call admission control (CAC) strategies adopted in ATM transmissions. Inferential remarks indicating the effects of traffic parameters (such as bandwidth, burstiness, etc.) on the values of the statistical bounds are presented, and the scope for further work is identified.
- Date Issued
- 1998
- PURL
- http://purl.flvc.org/fcla/dt/15577
- Subject Headings
- Asynchronous transfer mode, Broadband communication systems, Integrated services digital networks, Information theory
- Format
- Document (PDF)
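Reading lower and upper statistical bounds of CDV off simulated delay distributions, as the abstract describes, can be sketched as follows; the gamma-distributed jitter is an assumed stand-in, not the thesis's traffic model:

import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-cell delays (microseconds): a fixed path delay plus
# multiplexing/switching jitter, heavier-tailed for bursty VBR traffic.
cbr_delay = 500 + rng.gamma(shape=2.0, scale=5.0, size=100_000)
vbr_delay = 500 + rng.gamma(shape=1.2, scale=20.0, size=100_000)

for name, d in (("CBR", cbr_delay), ("VBR", vbr_delay)):
    cdv = d - d.min()                        # delay variation over the floor
    lo, hi = np.quantile(cdv, [0.01, 0.99])  # bounds usable by CAC
    print(name, round(lo, 1), round(hi, 1))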
- Title
- Estimation of Internet transit times using a fast-computing artificial neural network (FC-ANN).
- Creator
- Fasulo, Joseph V., Florida Atlantic University, Neelakanta, Perambur S.
- Abstract/Description
- The objective of this research is to determine the macroscopic behavior of packet transit-times across the global Internet cloud using an artificial neural network (ANN). Specifically, the problem addressed here refers to using a "fast-convergent" ANN for the purpose indicated. The underlying principle of fast convergence is that the data presented in the training and prediction modes of the ANN is in the entropy (information-theoretic) domain, and the associated annealing process is "tuned" to adopt only the useful information content and discard the posentropy part of the data presented. To demonstrate the efficacy of the research pursued, a feedforward ANN structure is developed, and the necessary transformations required to convert the input data from the parametric domain to the entropy domain (and a corresponding inverse transformation) are followed so as to retrieve the output in the parametric domain. The fast-convergent or fast-computing ANN (FC-ANN) developed is deployed to predict the packet-transit performance across the Internet. (Abstract shortened by UMI.)
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12835
- Subject Headings
- Neural networks (Computer science), Information theory, Packet switching (Data transmission), Internet
- Format
- Document (PDF)
- Title
- Static error modeling of sensors applicable to ocean systems.
- Creator
- Ah-Chong, Jeremy Fred., Florida Atlantic University, An, Edgar
- Abstract/Description
- This thesis presents a method for modeling navigation sensors used on ocean systems and particularly on Autonomous Underwater Vehicles (AUVs). An extended Kalman filter was previously designed for the implementation of the Inertial Navigation System (INS), making use of an Inertial Measurement Unit (IMU), a magnetic compass, a GPS/DGPS system, and a Doppler Velocity Log (DVL). Emphasis is put on characterizing the static sensor error model. A "best-fit ARMA model" based on the Akaike Information Criterion (AIC), whiteness tests, and graphical analyses was used for model identification. Model orders and parameters were successfully estimated for compass heading, GPS position, and IMU static measurements. Static DVL measurements could not be collected and require another approach. The variability of the models between different measurement data sets suggests online error-model estimation.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/12977
- Subject Headings
- Underwater navigation, Kalman filtering, Error-correcting codes (Information theory), Detectors
- Format
- Document (PDF)
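The "best-fit ARMA model" selection described in the abstract can be sketched as an AIC search over small orders; the AR(1) stand-in for static compass noise and the 3x3 order grid are assumptions:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
# Hypothetical static sensor noise: an AR(1) process standing in for
# stationary compass-heading error samples.
e = rng.normal(0, 0.1, 500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + e[t]

# Fit every candidate ARMA(p, q) order and keep the lowest AIC.
best = min(
    ((p, q, ARIMA(y, order=(p, 0, q)).fit().aic)
     for p in range(3) for q in range(3)),
    key=lambda t: t[2],
)
print("order (p, q) =", best[:2], "AIC =", round(best[2], 1))

A whiteness test on the residuals of the winning order would complete the identification loop the thesis describes.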