Current Search: Coding theory
 Title
 LINEAR CODES AND RECURRENT SEQUENCES.
 Creator
 HAMLIN, ALLEN CHARLES, Florida Atlantic University
 Abstract/Description

In this paper, a brief introduction to coding theory is presented. Linear (block) codes are briefly discussed along with some of their error (multiple error, burst error) correction and detection properties. Recurrent sequences are discussed in the major portion of the paper, and it is shown that the study of general recurrent sequences is equivalent to the study of sequences associated with irreducible polynomials. The paper concludes with a brief mention of autocorrelation functions and a way of finding the minimal recursion of a recurrent sequence given some terms of the sequence. The paper is an exposition of previously known results. Some modification in notation and proofs has been done to present the material in a unified and more readable manner.
 Date Issued
 1973
 PURL
 http://purl.flvc.org/fcla/dt/13552
 Subject Headings
 Coding theory
 Format
 Document (PDF)
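The equivalence the abstract above draws between recurrent sequences and irreducible polynomials can be sketched with a small binary linear recurrence (an LFSR). This is an illustrative sketch, not taken from the thesis: x^4 + x + 1 is a standard primitive polynomial over GF(2), so any nonzero starting state yields the maximal period 2^4 - 1 = 15.

```python
def recurrent_sequence(taps, state, length):
    """Binary linear recurrence: s_n = taps[0]*s_{n-k} + ... + taps[k-1]*s_{n-1} (mod 2)."""
    seq = list(state)
    k = len(taps)
    while len(seq) < length:
        seq.append(sum(t * s for t, s in zip(taps, seq[-k:])) % 2)
    return seq

# Characteristic polynomial x^4 + x + 1 is irreducible (in fact primitive) over GF(2);
# its recursion is s_n = s_{n-4} + s_{n-3}, i.e. taps = [1, 1, 0, 0].
seq = recurrent_sequence([1, 1, 0, 0], [1, 0, 0, 0], 30)
# Minimal period: smallest shift p under which the generated terms repeat.
period = next(p for p in range(1, 16) if seq[: 30 - p] == seq[p:])
```

With a reducible characteristic polynomial the period would divide into shorter cycles, which is the structural point the thesis exploits.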
 Title
 Implementation of low-complexity Viterbi decoder.
 Creator
 Mukhtar, Adeel., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The design of a mobile communication receiver requires addressing the stringent issues of low signal-to-noise ratio (SNR) operation and low battery power consumption. Typically, forward error correction using convolutional coding with Viterbi decoding is employed to improve the error performance. However, even with moderate code lengths, the computation and storage requirements of a conventional Viterbi decoder (VD) are substantial, consuming an appreciable fraction of DSP computations and hence battery power. The new error-selective Viterbi decoding (ESVD) scheme developed recently (1) reduces the computational load substantially by taking advantage of the noise-free intervals to limit the trellis search. This thesis is concerned with the development of an efficient hardware architecture to implement a hard-decision version of the ESVD scheme for the IS-54 coder. The implementations are optimized to reduce the computational complexity. The performance of the implemented ESVD scheme is verified for different channel conditions.
 Date Issued
 1997
 PURL
 http://purl.flvc.org/fcla/dt/15429
 Subject Headings
 Decoders (Electronics), Coding theory
 Format
 Document (PDF)
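A hard-decision Viterbi decoder of the kind discussed above can be sketched for the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal. This is an illustrative toy (full survivor paths instead of a traceback buffer), not the ESVD hardware scheme of the thesis.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators (7, 5) octal, constraint length 3."""
    s1 = s2 = 0  # shift register, s1 = newest stored bit
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]  # g0 = 111, g1 = 101
        s2, s1 = s1, b
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi: keep, per state, the survivor path of least Hamming metric."""
    metrics = {(0, 0): 0}   # state (s1, s2) -> best path metric so far
    paths = {(0, 0): []}    # state -> decoded input bits along the survivor
    for i in range(n_bits):
        r0, r1 = received[2 * i], received[2 * i + 1]
        new_m, new_p = {}, {}
        for (s1, s2), m in metrics.items():
            for b in (0, 1):
                d = m + ((b ^ s1 ^ s2) != r0) + ((b ^ s2) != r1)
                ns = (b, s1)
                if ns not in new_m or d < new_m[ns]:
                    new_m[ns], new_p[ns] = d, paths[(s1, s2)] + [b]
        metrics, paths = new_m, new_p
    return paths[min(metrics, key=metrics.get)]
```

Since this code has free distance 5, a single flipped channel bit is always corrected.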
 Title
 Covert and multilevel visual cryptographic schemes.
 Creator
 Lopez, Jessica Maria, Florida Atlantic University, Mullin, Ronald C.
 Abstract/Description

Visual cryptography concerns the problem of "hiding" a monochrome image among sets of transparencies, known as shares. These are created in such a fashion that certain sets of shares, when superimposed, will reveal the image, while other subsets yield no information. A standard model is the (k, n) scheme, where any k shares will reveal the image, but any k - 1 or fewer shares reveal no information. In this thesis, we explain the basic mechanism for creating shares. We survey the literature and show how to create (k, k) schemes, which exist for all k > 2. Then we introduce perfect hash functions, which can be used to construct (k, n) schemes from (k, k) schemes for all 2 < k < n. We introduce generalizations of (k, n) schemes that we call covert cryptographic schemes, and extend this notion to multilevel visual cryptographic schemes. We give conditions for the existence of such schemes, and we conclude with a survey of generalizations.
 Date Issued
 2005
 PURL
 http://purl.flvc.org/fcla/dt/13206
 Subject Headings
 Coding theory, Cryptography, Data encryption (Computer science)
 Format
 Document (PDF)
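The basic share-creation mechanism mentioned in the abstract above can be sketched with the classic (2, 2) scheme: each pixel expands into a pair of subpixels per share, and superimposing the two transparencies darkens black pixels fully while white pixels stay half-dark. A minimal sketch, not the thesis's construction:

```python
import random

PATTERNS = [(0, 1), (1, 0)]  # subpixel pairs; 1 = opaque, 0 = transparent

def make_shares(image, seed=7):
    """(2, 2) scheme: each pixel (0 = white, 1 = black) becomes two subpixels
    per share. Each share alone is uniformly random, so it reveals nothing."""
    rng = random.Random(seed)
    share1, share2 = [], []
    for pixel in image:
        p = rng.choice(PATTERNS)
        share1.append(p)
        share2.append(p if pixel == 0 else (1 - p[0], 1 - p[1]))
    return share1, share2

def stack(share1, share2):
    """Superimpose transparencies: a subpixel is dark if dark on either share."""
    return [(a0 | b0, a1 | b1) for (a0, a1), (b0, b1) in zip(share1, share2)]
```

After stacking, black pixels show two dark subpixels and white pixels only one, which is the contrast the human eye exploits.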
 Title
 Implementation and comparison of the Golay and first-order Reed-Muller codes.
 Creator
 Shukina, Olga., Charles E. Schmidt College of Science, Department of Mathematical Sciences
 Abstract/Description

In this project we perform data transmission across noisy channels and recover the message first by using the Golay code, and then by using the first-order Reed-Muller code. The main objective of this thesis is to determine which code among the above two is more efficient for text message transmission by applying the two codes to exactly the same data with the same channel error bit probabilities. We use the comparison of the error-correcting capability and the practical speed of the Golay code and the first-order Reed-Muller code to meet our goal.
 Date Issued
 2013
 PURL
 http://purl.flvc.org/fcla/dt/3362579
 Subject Headings
 Error-correcting codes (Information theory), Coding theory, Computer algorithms, Digital modulation
 Format
 Document (PDF)
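The first-order Reed-Muller code compared in the abstract above is easy to sketch: the codeword for a message (a0, a1, ..., am) is the truth table of the affine Boolean function a0 + a1*x1 + ... + am*xm. The toy below uses RM(1, 3) (length 8, minimum distance 4, so one error is correctable) with brute-force minimum-distance decoding; the thesis's implementation details are not reproduced here.

```python
from itertools import product

M = 3  # RM(1, 3): length 8, dimension 4, minimum distance 4 -> corrects 1 error

def rm1_encode(msg):
    """msg = (a0, a1, ..., aM): evaluate a0 + a1*x1 + ... + aM*xM over all x in F_2^M."""
    a0, *rest = msg
    return [(a0 + sum(ai * xi for ai, xi in zip(rest, x))) % 2
            for x in product((0, 1), repeat=M)]

def rm1_decode(word):
    """Minimum-distance decoding by exhaustive comparison against all 16 codewords."""
    return min(product((0, 1), repeat=M + 1),
               key=lambda m: sum(c != w for c, w in zip(rm1_encode(m), word)))
```

Practical decoders replace the exhaustive search with a fast Hadamard transform, which is where the speed comparison against Golay becomes interesting.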
 Title
 DSP implementation of turbo decoder using the Modified-Log-MAP algorithm.
 Creator
 Khan, Zeeshan Haneef., Florida Atlantic University, Zhuang, Hanqi, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The design of any communication receiver needs to address the issues of operating under the lowest possible signal-to-noise ratio. Among various algorithms that facilitate this objective are those used for iterative decoding of two-dimensional systematic convolutional codes in applications such as spread spectrum communications and Code Division Multiple Access (CDMA) detection. A main theme of any decoding scheme is to approach the Shannon limit in signal-to-noise ratio. All these decoding algorithms have various complexity levels and processing delay issues. Hence, the optimality depends on how they are used in the system. The technique used in various decoding algorithms is termed iterative decoding. Iterative decoding was first developed as a practical means for decoding turbo codes. With log-likelihood algebra, it is shown that a decoder can be developed that accepts soft inputs as a priori information and delivers soft outputs consisting of channel information, a posteriori information and extrinsic information to subsequent stages of iteration. Different algorithms such as the Soft Output Viterbi Algorithm (SOVA), Maximum A Posteriori (MAP), and Log-MAP are compared and their complexities are analyzed in this thesis. A turbo decoder is implemented on the Digital Signal Processing (DSP) chip TMS320C30 by Texas Instruments using a Modified-Log-MAP algorithm. For the Modified-Log-MAP algorithm, the optimal choice of the lookup table (LUT) is analyzed by experimenting with different LUT approximations. A low-complexity decoder is proposed for a (7,5) code and implemented on the DSP chip. Performance of the decoder is verified under the Additive White Gaussian Noise (AWGN) environment. Hardware issues such as memory requirements and processing time are addressed for the chosen decoding scheme. Test results of the bit error rate (BER) performance are presented for a fixed number of frames and iterations.
 Date Issued
 2002
 PURL
 http://purl.flvc.org/fcla/dt/12948
 Subject Headings
 Error-correcting codes (Information theory), Signal processing -- Digital techniques, Coding theory, Digital communications
 Format
 Document (PDF)
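The LUT approximation studied above centers on the max* (Jacobian logarithm) operator at the heart of Log-MAP decoding. A sketch with a hypothetical 8-entry table; the thesis's actual table sizes and quantization are not given here:

```python
import math

def max_star_exact(a, b):
    """Jacobian logarithm: log(e^a + e^b) = max(a, b) + log(1 + e^{-|a-b|})."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Hypothetical LUT: the correction term sampled at |a-b| = 0.0, 0.5, ..., 3.5;
# beyond 4.0 the correction is treated as zero (a common approximation).
LUT = [math.log1p(math.exp(-0.5 * i)) for i in range(8)]

def max_star_lut(a, b):
    d = abs(a - b)
    return max(a, b) + (LUT[int(d / 0.5)] if d < 4.0 else 0.0)
```

Coarser tables trade a small BER penalty for less memory and fewer cycles, which is exactly the trade-off a DSP implementation must tune.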
 Title
 Perceptual methods for video coding.
 Creator
 Adzic, Velibor, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The main goal of video coding algorithms is to achieve high compression efficiency while maintaining the quality of the compressed signal at the highest level. The human visual system is the ultimate receiver of the compressed signal and the final judge of its quality. This dissertation presents work towards an optimal video compression algorithm that is based on the characteristics of our visual system. By modeling phenomena such as backward temporal masking and motion masking, we developed algorithms that are implemented in state-of-the-art video encoders. The result of using our algorithms is visually lossless compression with improved efficiency, as verified by standard subjective quality and psychophysical tests. Savings in bitrate compared to the High Efficiency Video Coding / H.265 reference implementation are up to 45%.
 Date Issued
 2014
 PURL
 http://purl.flvc.org/fau/fd/FA00004074
 Subject Headings
 Algorithms, Coding theory, Digital coding -- Data processing, Imaging systems -- Image quality, Perception, Video processing -- Data processing
 Format
 Document (PDF)
 Title
 HEVC optimization in mobile environments.
 Creator
 Garcia, Ray, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

Recently, multimedia applications and their use have grown dramatically in popularity, in large part due to mobile device adoption by the consumer market. Applications such as video conferencing have gained popularity. These applications and others have a strong video component that uses the mobile device's resources. These resources include processing time, network bandwidth, memory use, and battery life. The goal is to reduce the need for these resources by reducing the complexity of the coding process. Mobile devices offer unique characteristics that can be exploited for optimizing video codecs. The combination of small display size, video resolution, and human vision factors, such as acuity, allows encoder optimizations that will not (or will only minimally) impact subjective quality. The focus of this dissertation is optimizing video services in mobile environments. Industry has begun migrating from H.264 video coding to the more resource-intensive but compression-efficient High Efficiency Video Coding (HEVC). However, there has been no proper evaluation and optimization of HEVC for mobile environments. Subjective quality evaluations were performed to assess relative quality between H.264 and HEVC. This allows for better use of device resources and migration to new codecs where it is most useful. The complexity of HEVC is a significant barrier to adoption on mobile devices, and complexity reduction methods are necessary. Optimal use of encoding options is needed to maximize quality and compression while minimizing encoding time. Methods for optimizing coding mode selection for HEVC were developed. The complexity of HEVC encoding can be further reduced by exploiting the mismatch between the resolution of the video, the resolution of the mobile display, and the ability of the human eyes to acquire and process video under these conditions.
The perceptual optimizations developed in this dissertation use the properties of spatial (visual acuity) and temporal information processing (motion perception) to reduce the complexity of HEVC encoding. A unique feature of the proposed methods is that they reduce encoding complexity and encoding time. The proposed HEVC encoder optimization methods reduced encoding time by 21.7% and bitrate by 13.4% with insignificant impact on subjective quality evaluations. These methods can easily be implemented today within HEVC.
 Date Issued
 2014
 PURL
 http://purl.flvc.org/fau/fd/FA00004112
 Subject Headings
 Coding theory, Digital coding -- Data processing, Image processing -- Digital techniques, Multimedia systems, Video compression
 Format
 Document (PDF)
 Title
 Turbo-coded frequency division multiplexing for underwater acoustic communications between 60 kHz and 90 kHz.
 Creator
 Pajovic, Milutin., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
 Abstract/Description

The Intermediate Frequency Acoustic Modem (IFAM), developed by Dr. Beaujean, is designed to transmit command-and-control messages from the topside to the wet-side unit in ports and very shallow waters. This research presents the design of a turbo coding scheme and its implementation in the IFAM modem with the purpose of meeting a strict requirement on the IFAM error rate performance. To simulate the coded IFAM, a channel simulator is developed. It is basically a multi-tap filter whose parameters are set depending on the channel geometry and system specifics. The simulation results show that the turbo code is able to correct 89% of the messages received with errors in hostile channel conditions. The Bose-Chaudhuri-Hocquenghem (BCH) coding scheme corrects less than 15% of these messages. Other simulation results, obtained for the system operation in different shallow water settings, are presented.
 Date Issued
 2009
 PURL
 http://purl.flvc.org/FAU/215291
 Subject Headings
 Underwater acoustics -- Measurement, Coding theory, Signal processing -- Digital techniques
 Format
 Document (PDF)
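The multi-tap channel simulator described in the abstract above amounts to an FIR filter whose taps model discrete multipath arrivals, plus additive noise. A minimal sketch; the tap gains and delays here are placeholders, not values derived from any actual channel geometry:

```python
import random

def multitap_channel(signal, taps, noise_std=0.0, seed=1):
    """Shallow-water multipath modeled as a multi-tap FIR filter plus additive
    Gaussian noise: y[n] = sum_k taps[k] * x[n-k] + w[n]."""
    rng = random.Random(seed)
    out = []
    for n in range(len(signal) + len(taps) - 1):
        acc = sum(h * signal[n - k] for k, h in enumerate(taps)
                  if 0 <= n - k < len(signal))
        out.append(acc + rng.gauss(0.0, noise_std))
    return out
```

Feeding an impulse through the filter recovers the tap profile, which is a quick sanity check before layering the coded waveform on top.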
 Title
 Massively parallel computation and porting of EPIC research hydro code on Cray T3D.
 Creator
 Dutta, Arindum., Florida Atlantic University, Tsai, ChiTay
 Abstract/Description

The objective of this work is to verify the feasibility of converting a large FEA code into a massively parallel FEA code in terms of computational speed and cost. Sequential subroutines in the Research EPIC hydro code, a Lagrangian finite element analysis code for high-velocity elastic-plastic impact problems, are individually converted into parallel code using Cray Adaptive Fortran (CRAFT). Massively parallel subroutines running on 32 PEs of the Cray T3D run faster than their sequential counterparts on the Cray Y-MP. At the next stage of the research, Parallel Virtual Machine (PVM) directives are used to develop a PVM version of the EPIC hydro code by connecting the converted parallel subroutines, running on multiple PEs of the T3D, to the sequential part of the code running on a single PE. With an incremental increase in the massively parallel subroutines in the PVM EPIC hydro code, the speedup of the code increased accordingly. The results indicate that significant speedup can be achieved in the EPIC hydro code when most or all of the subroutines are massively parallelized.
 Date Issued
 1996
 PURL
 http://purl.flvc.org/fcla/dt/15249
 Subject Headings
 Parallel processing (Electronic computers), Computer programs, Coding theory, Supercomputers
 Format
 Document (PDF)
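The observation above, that speedup grows as more subroutines are parallelized, is what Amdahl's law quantifies: the remaining serial fraction bounds the overall speedup no matter how many PEs are added. The fractions below are illustrative, not measurements from the EPIC code:

```python
def amdahl_speedup(parallel_fraction, n_pes):
    """Amdahl's law: overall speedup when a fraction p of the runtime is spread
    across n_pes processing elements and the rest (1 - p) stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_pes)
```

Even with a hypothetical 90% of the work parallelized, 32 PEs give under an 8x speedup, which is why converting "most or all" subroutines matters.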
 Title
 A new approach to the inverse kinematic analysis of redundant robots.
 Creator
 Dutta, Partha Sarathi., Florida Atlantic University
 Abstract/Description

A new approach has been developed for the inverse kinematic analysis of redundant robots. In the case of redundant robots, inverse kinematics is complicated by the non-square nature of the Jacobian. In this method the Jacobian and the inverse kinematic equation are reduced based on the rank of the Jacobian and the constraints specified. This process automatically locks some joints of the robot at various trajectory points. The reduced inverse kinematic equation is solved by an iterative procedure to find joint variable values for a known task description. The results of computer simulation of the inverse kinematics applied to a redundant planar robot and a redundant moving-base robot proved the method to be efficient, and the results can be found within a few iterations with excellent accuracy.
 Date Issued
 1988
 PURL
 http://purl.flvc.org/fcla/dt/14433
 Subject Headings
 Parallel processing (Electronic computers), Computer programs, Coding theory, Supercomputers
 Format
 Document (PDF)
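The iterative solution of a non-square Jacobian described above is commonly handled with the right pseudo-inverse J+ = J^T (J J^T)^{-1}; the thesis's rank-reduction/joint-locking method is different, so this is a baseline sketch for a planar 3-link redundant arm with assumed unit link lengths:

```python
import math

L = [1.0, 1.0, 1.0]  # link lengths of a planar 3-link arm (one redundant DOF)

def fk(q):
    """Forward kinematics: end-effector position (x, y)."""
    x = y = a = 0.0
    for li, qi in zip(L, q):
        a += qi
        x += li * math.cos(a)
        y += li * math.sin(a)
    return x, y

def jacobian(q):
    """2x3 task Jacobian of the planar arm (analytic)."""
    a = [sum(q[: i + 1]) for i in range(3)]
    row0 = [-sum(L[i] * math.sin(a[i]) for i in range(j, 3)) for j in range(3)]
    row1 = [sum(L[i] * math.cos(a[i]) for i in range(j, 3)) for j in range(3)]
    return row0, row1

def ik_solve(target, q, iters=100, step=0.5):
    """Iterate dq = J^T (J J^T)^{-1} e, a damped right pseudo-inverse step."""
    for _ in range(iters):
        x, y = fk(q)
        ex, ey = target[0] - x, target[1] - y
        r0, r1 = jacobian(q)
        a = sum(v * v for v in r0)
        b = sum(u * v for u, v in zip(r0, r1))
        d = sum(v * v for v in r1)
        det = a * d - b * b  # J J^T is 2x2, inverted in closed form
        wx = (d * ex - b * ey) / det
        wy = (a * ey - b * ex) / det
        q = [qi + step * (r0[j] * wx + r1[j] * wy) for j, qi in enumerate(q)]
    return q
```

Because the arm has three joints for a two-dimensional task, infinitely many joint solutions reach the same target; the pseudo-inverse picks the minimum-norm update, whereas the thesis's reduction instead locks joints to square up the system.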
 Title
 New Results in Group Theoretic Cryptology.
 Creator
 Sramka, Michal, Florida Atlantic University, Magliveras, Spyros S., Charles E. Schmidt College of Science, Department of Mathematical Sciences
 Abstract/Description

With the publication of Shor's quantum algorithm for solving discrete logarithms in finite cyclic groups, a need for new cryptographic primitives arose; namely, for more secure primitives that would prevail in the post-quantum era. The aim of this dissertation is to exploit some hard problems arising from group theory for use in cryptography. Over the years, there have been many such proposals. We first look at two recently proposed schemes based on some form of a generalization of the discrete logarithm problem (DLP), identify their weaknesses, and cryptanalyze them. By applying the expertise gained from the above cryptanalyses, we define our own generalization of the DLP to arbitrary finite groups. We show that such a definition leads to the design of signature schemes and pseudorandom number generators with provable security under a security assumption based on a group theoretic problem. In particular, our security assumption is based on the hardness of factorizing elements of the projective special linear group over a finite field in some representations. We construct a one-way function based on this group theoretic assumption and provide a security proof.
 Date Issued
 2006
 PURL
 http://purl.flvc.org/fau/fd/FA00000878
 Subject Headings
 Group theory, Mathematical statistics, Cryptography, Combinatorial designs and configurations, Data encryption (Computer science), Coding theory
 Format
 Document (PDF)
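The classical DLP that the dissertation above generalizes can be illustrated with baby-step giant-step, a generic O(sqrt(n)) attack in any cyclic group (this sketch is unrelated to the thesis's own group-theoretic constructions):

```python
import math

def bsgs(g, h, p):
    """Baby-step giant-step: return x with g^x = h (mod p), g a generator mod p."""
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # baby steps g^j
    giant = pow(g, -m, p)                       # g^{-m} mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]          # x = i*m + j
        gamma = gamma * giant % p
    return None

# Example: discrete log of 2^57 in (Z/101Z)*, where 2 is a primitive root.
x = bsgs(2, pow(2, 57, 101), 101)
```

Shor's algorithm solves the same problem in polynomial time on a quantum computer, which is precisely the motivation the abstract cites for moving to other hard problems.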
 Title
 Low complexity H.264 video encoder design using machine learning techniques.
 Creator
 Carrillo, Paula., Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
 Abstract/Description

H.264/AVC encoder complexity is mainly due to the variable block sizes used in Intra and Inter frames. This makes H.264/AVC very difficult to implement, especially for real-time applications and mobile devices. The current technological challenge is to conserve the compression capacity and quality that H.264 offers while reducing the encoding time and, therefore, the processing complexity. This thesis applies machine learning techniques to video encoding mode decisions and investigates ways to improve the process of generating more general low-complexity H.264/AVC video encoders. The proposed H.264 encoding method decreases the complexity of the mode decision for Inter frames. Results show at least a 150% average reduction in complexity and at most a 0.6 average increase in PSNR for different kinds of videos and formats.
 Date Issued
 2008
 PURL
 http://purl.flvc.org/FAU/166448
 Subject Headings
 Code division multiple access, Digital media -- Technological innovations, Image transmission -- Technological innovations, Coding theory, Data structures (Computer science)
 Format
 Document (PDF)
 Title
 Elliptic curves: identitybased signing and quantum arithmetic.
 Creator
 Budhathoki, Parshuram, Steinwandt, Rainer, Eisenbarth, Thomas, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
 Abstract/Description

Pairing-friendly curves and elliptic curves with a trapdoor for the discrete logarithm problem are versatile tools in the design of cryptographic protocols. We show that curves having both properties enable deterministic identity-based signing with “short” signatures in the random oracle model. At PKC 2003, Choon and Cheon proposed an identity-based signature scheme along with a provable security reduction. We propose a modification of their scheme with several performance benefits. In addition to faster signing, for batch signing the signature size can be reduced, and if multiple signatures for the same identity need to be verified, the verification can be accelerated. Neither the signing nor the verification algorithm relies on the availability of a (pseudo)random generator, and we give a provable security reduction in the random oracle model to the l-Strong Diffie-Hellman problem. Implementing the group arithmetic is a cost-critical task when designing quantum circuits for Shor's algorithm to solve the discrete logarithm problem. We introduce a tool for the automatic generation of addition circuits for ordinary binary elliptic curves, a prominent platform group for digital signatures. Our Python software generates circuit descriptions that, without increasing the number of qubits or T-depth, involve less than 39% of the number of T-gates in the best previous construction. The software also optimizes the (CNOT) depth for F2-linear operations by means of suitable graph colorings.
 Date Issued
 2014
 PURL
 http://purl.flvc.org/fau/fd/FA00004182
 Subject Headings
 Coding theory, Computer network protocols, Computer networks -- Security measures, Data encryption (Computer science), Mathematical physics, Number theory -- Data processing
 Format
 Document (PDF)
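The elliptic-curve group arithmetic that the dissertation above implements as quantum circuits can be sketched classically with the affine group law on a short-Weierstrass curve. The curve and point below are toy parameters chosen for illustration, not curves from the thesis (which works over binary fields):

```python
# Toy short-Weierstrass curve y^2 = x^3 + A*x + B over F_p; parameters illustrative.
P_MOD, A, B = 97, 2, 3

def ec_add(P, Q):
    """Affine group law; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

A quantum circuit for Shor's algorithm must realize exactly this addition reversibly, which is why the T-gate count of the point-addition circuit dominates the cost.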
 Title
 An algebraic attack on block ciphers.
 Creator
 Matheis, Kenneth., Charles E. Schmidt College of Science, Department of Mathematical Sciences
 Abstract/Description

The aim of this work is to investigate an algebraic attack on block ciphers called Multiple Right Hand Sides (MRHS). MRHS models a block cipher as a system of n matrix equations S_i := A_i x = [L_i], where each L_i can be expressed as a set of its columns b_{i,1}, ..., b_{i,s_i}. The set of solutions T_i of S_i is defined as the union of the solutions of A_i x = b_{i,j}, and the set of solutions of the system S_1, ..., S_n is defined as the intersection of T_1, ..., T_n. Our main contribution is a hardware platform which implements a particular algorithm that solves MRHS systems (and hence block ciphers). The case is made that the platform performs several thousand orders of magnitude faster than software, that it costs less than US$1,000,000, and that actual times of block cipher breakage can be calculated once it is known how the corresponding software behaves. Options in MRHS are also explored with a view to increasing its efficiency.
 Date Issued
 2010
 PURL
 http://purl.flvc.org/FAU/2976444
 Subject Headings
 Ciphers, Cryptography, Data encryption (Computer science), Computer security, Coding theory, Integrated circuits -- Design and construction
 Format
 Document (PDF)
 Title
 Signature schemes in single and multi-user settings.
 Creator
 Villanyi, Viktoria., Charles E. Schmidt College of Science, Department of Mathematical Sciences
 Abstract/Description

In the first chapters we give a short introduction to signature schemes in single- and multi-user settings. We give the definition of a signature scheme and explain a group of possible attacks on them. In Chapter 6 we give a construction which derives a subliminal-free RSA public key. In the construction we use a computationally binding and unconditionally hiding commitment scheme. To establish a subliminal-free RSA modulus n, we have to construct the secret primes p and q. To prove p and q are primes we use Lehmann's primality test on the commitments. The chapter is based on the paper "RSA signature schemes with subliminal-free public key" (Tatra Mountains Mathematical Publications 41 (2008)). In Chapter 7 a one-time signature scheme using run-length encoding is presented, which in the random oracle model offers security against chosen-message attacks. For parameters of interest, the proposed scheme enables about 33% faster verification with a comparable signature size than a construction of Merkle and Winternitz. The public key size remains unchanged (1 hash value). The main cost for the faster verification is an increase in the time required for signing messages and for key generation. The chapter is based on the paper "A one-time signature using run-length encoding" (Information Processing Letters Vol. 108, Issue 4 (2008)).
 Date Issued
 2009
 PURL
 http://purl.flvc.org/FAU/215289
 Subject Headings
 Information technology -- Security measures, Cryptography, Coding theory, Data encryption (Computer science), Digital watermarking
 Format
 Document (PDF)
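The run-length one-time signature discussed above builds on the hash-based one-time paradigm of Lamport, Merkle, and Winternitz. A minimal Lamport sketch for 8-bit messages gives the flavor; the thesis's run-length scheme itself is more involved:

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen(n_bits=8):
    """Lamport one-time key: two secret random preimages per message bit."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(n_bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(bits, sk):
    """Reveal, for each message bit, the preimage matching its value."""
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(bits, sig, pk):
    """Check each revealed preimage hashes to the published commitment."""
    return all(H(s) == pk[i][b] for i, (b, s) in enumerate(zip(bits, sig)))
```

Each key pair must be used exactly once; signing two different messages reveals enough preimages to forge, which is what makes the scheme "one-time". Winternitz-style variants, like the run-length scheme, trade signature size against hash-chain length.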
 Title
 Exploiting audiovisual attention for visual coding.
 Creator
 Torres, Freddy., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

Perceptual video coding has been a promising area during the last years. Increases in compression ratios have been reported by applying foveated video coding techniques where the region of interest (ROI) is selected by using a computational attention model. However, most of the approaches for perceptual video coding only use visual features ignoring the auditory component. In recent physiological studies, it has been demonstrated that auditory stimuli affects our visual perception. In this...
Perceptual video coding has been a promising research area in recent years. Increases in compression ratios have been reported by applying foveated video coding techniques where the region of interest (ROI) is selected by using a computational attention model. However, most of the approaches for perceptual video coding use only visual features, ignoring the auditory component. Recent physiological studies have demonstrated that auditory stimuli affect our visual perception. In this work, we validate some of those physiological tests using complex video sequences. We designed and developed a web-based tool for video quality measurement. After conducting different experiments, we observed that, in general, the reaction time to detect video artifacts was higher when video was presented with the audio information. We observed that emotional information in audio guides human attention to particular ROIs. We also observed that sound frequency changes spatial frequency perception in still images.
 Date Issued
 2013
 PURL
 http://purl.flvc.org/fcla/dt/3361251
 Subject Headings
 Digital video, Image processing, Digital techniques, Visual perception, Coding theory, Humancomputer interaction, Intersensory effects
 Format
 Document (PDF)
 Title
 XYZ Video Compression: An algorithm for realtime compression of motion video based upon the threedimensional discrete cosine transform.
 Creator
 Westwater, Raymond John., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

XYZ Video Compression denotes a video compression algorithm that operates in three dimensions, without the overhead of motion estimation. The smaller overhead of this algorithm as compared to MPEG and other "standards-based" compression algorithms using motion estimation suggests the suitability of this algorithm for real-time applications. The demonstrated results of compression of standard motion video benchmarks suggest that XYZ Video Compression is not only a faster algorithm, but develops superior compression ratios as well. The algorithm is based upon the three-dimensional Discrete Cosine Transform (DCT). Pixels are organized as 8 x 8 x 8 cubes by taking 8 x 8 squares out of 8 consecutive frames. A fast three-dimensional transform is applied to each cube, generating 512 DCT coefficients. The energy-packing property of the DCT concentrates the energy in the cube into few coefficients. The DCT coefficients are quantized to maximize the energy concentration at the expense of introducing a user-determined level of error. A method of adaptive quantization that generates optimal quantizers based upon statistics gathered for the 8 consecutive frames is described. The sensitivity of the human eye to various DCT coefficients is used to modify the quantizers to create a "visually equivalent" cube with still greater energy concentration. Experiments are described that justify the choice of Human Visual System factors to be folded into the quantization step. The quantized coefficients are then encoded into a data stream using a method of entropy coding based upon the statistics of the quantized coefficients. The bitstream generated by entropy coding represents the compressed data of the 8 motion video frames, and typically will be compressed at 50:1 at 5% error.
The decoding process is the reverse of the encoding process: the bitstream is decoded to generate blocks of quantized DCT coefficients, the DCT coefficients are dequantized, and the Inverse Discrete Cosine Transform is performed on the cube to recover pixel data suitable for display. The elegance of this technique lies in its simplicity, which lends itself to inexpensive implementation of both encoder and decoder. Finally, real-time implementation of the XYZ Compressor/Decompressor is discussed. Experiments are run to determine the effectiveness of the implementation.
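The energy-packing step described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical illustration of a separable three-dimensional DCT-II applied to an 8 x 8 x 8 cube, not the thesis's optimized fast transform; the function names are ours.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def dct3(cube):
    # Separable 3-D DCT: apply the 1-D transform along each of the three axes.
    C = dct_matrix(cube.shape[0])
    return np.einsum('ai,bj,ck,ijk->abc', C, C, C, cube)

# A constant 8x8x8 cube (a flat, static image patch) packs all of its
# energy into the single DC coefficient -- 1 of the 512 coefficients.
cube = np.ones((8, 8, 8))
coeffs = dct3(cube)
```

Because the transform is orthonormal, the DC coefficient carries the cube's entire energy and the remaining 511 coefficients are zero, which is the energy-packing property the adaptive quantizer exploits.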
 Date Issued
 1996
 PURL
 http://purl.flvc.org/fcla/dt/12450
 Subject Headings
 Digital video, Data compression (Telecommunication), Image processing, Digital techniques, Coding theory
 Format
 Document (PDF)
 Title
 Static error modeling of sensors applicable to ocean systems.
 Creator
 AhChong, Jeremy Fred., Florida Atlantic University, An, Edgar
 Abstract/Description

This thesis presents a method for modeling navigation sensors used on ocean systems and particularly on Autonomous Underwater Vehicles (AUVs). An extended Kalman filter was previously designed for the implementation of the Inertial Navigation System (INS), making use of an Inertial Measurement Unit (IMU), a magnetic compass, a GPS/DGPS system and a Doppler Velocity Log (DVL). Emphasis is put on characterizing the static sensor error model. A "best-fit ARMA model" based on the Akaike Information Criterion (AIC), a whiteness test and graphical analyses was used for the model identification. Model orders and parameters were successfully estimated for compass heading, GPS position and IMU static measurements. Static DVL measurements could not be collected and require another approach. The variability of the models between different measurement data sets suggests the need for online error model estimation.
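The AIC-based order selection can be sketched as follows. This is a minimal illustration using a least-squares fit of a pure AR model; the thesis fits full ARMA models, and the function names and the particular AIC form used here are our assumptions, not the author's implementation.

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of an AR(p) model; returns coefficients and residual variance.
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(np.mean(resid ** 2))

def aic(x, p):
    # One common AIC form for AR models: n*ln(sigma^2) + 2*(number of parameters).
    _, var = fit_ar(x, p)
    n = len(x) - p
    return n * np.log(var) + 2 * (p + 1)

# Simulate a known AR(2) sensor-noise process and score candidate orders.
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
scores = {p: aic(x, p) for p in range(1, 6)}
best_order = min(scores, key=scores.get)
```

The candidate order with the smallest AIC balances goodness of fit against model complexity, which is how a "best-fit" model is chosen among competing orders.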
 Date Issued
 2003
 PURL
 http://purl.flvc.org/fcla/dt/12977
 Subject Headings
 Underwater navigation, Kalman filtering, Errorcorrecting codes (Information theory), Detectors
 Format
 Document (PDF)
 Title
 Video transcoding using machine learning.
 Creator
 Holder, Christopher., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The field of video transcoding has been evolving over the past ten years. The need for transcoding of video files has greatly increased because new standards are incompatible with old ones. This thesis takes the method of using machine learning for video transcoding mode decisions and discusses ways to improve the process of generating the algorithm for implementation in different video transcoders. The transcoding methods used decrease the complexity of the mode decision inside the video encoder. Methods which automate and improve results are also discussed and implemented in two different sets of transcoders: H.263 to VP6, and MPEG-2 to H.264. Both of these transcoders have shown a complexity reduction of almost 50%. Video transcoding is important because the number of video standards has been increasing while devices usually can decode only one specific codec.
 Date Issued
 2008
 PURL
 http://purl.flvc.org/FAU/166451
 Subject Headings
 Coding theory, Image transmission, Technological innovations, File conversion (Computer science), Data structures (Computer science), MPEG (Video coding standard), Digital media, Video compression
 Format
 Document (PDF)
 Title
 Subband coding of images using binomial QMF and vector quantization.
 Creator
 Rajamani, Kannan., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

This thesis presents an image coding system using binomial-QMF-based subband decomposition and vector quantization. An attempt was made to compress a still image of size 256 x 256, represented at a resolution of 8 bits/pixel, to a bit rate of 0.5 bits/pixel using 16-channel subband decomposition with binomial QMFs and coding the subbands using a full-search LBG Vector Quantizer (VQ). Simulations were done on a Sun workstation, and the quality of the image was evaluated by computing the Signal-to-Noise Ratio (SNR) between the original image and the reconstructed image.
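The quality metric mentioned at the end can be made concrete. This is a minimal sketch of an SNR computation between an original and a reconstructed image, assuming the usual power-ratio definition; the function name is ours.

```python
import numpy as np

def snr_db(original, reconstructed):
    # SNR in dB: power of the original signal over power of the reconstruction error.
    original = original.astype(float)
    error = original - reconstructed.astype(float)
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(error ** 2))

# A uniform error of 1 gray level on a flat patch of value 100 gives a
# power ratio of 100^2 / 1^2 = 10^4, i.e. 40 dB.
orig = np.full((4, 4), 100.0)
recon = orig + 1.0
```

Higher SNR means the reconstruction error is small relative to the image energy; at a fixed bit rate, it is the natural figure of merit for comparing quantizer designs.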
 Date Issued
 1995
 PURL
 http://purl.flvc.org/fcla/dt/15203
 Subject Headings
 Image compression, Digital techniques, Image processing, Digital techniques, Image transmission, Digital techniques, Coding theory, Vector fields
 Format
 Document (PDF)