Current Search: Data Compression
- Title
- Data compression techniques for underwater imagery.
- Creator
- Schmalz, Mark S., Ritter, G. X., Caimi, F. M., Harbor Branch Oceanographic Institute
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fau/fd/FA00007349
- Subject Headings
- Underwater imaging systems, Data compression
- Format
- Document (PDF)
- Title
- DSP hardware implementation of transform-based compression algorithm for AUV telemetry.
- Creator
- Kocak, D. M., Caimi, F. M., Harbor Branch Oceanographic Institute
- Date Issued
- 1998
- PURL
- http://purl.flvc.org/FCLA/DT/3183703
- Subject Headings
- Image compression, Submersibles--Automatic control, Telemetry, Data compression
- Format
- Document (PDF)
- Title
- Application of data compression techniques on computer-based text and program source libraries.
- Creator
- Floyd, Raymond Edward, Florida Atlantic University, Marcovitz, Alan B.
- Abstract/Description
- Data compression in computer-based data files has just begun to be used, with the major emphasis placed on character string suppression. Within this paper, character string suppression, Huffman encoding, noun-vector, and dictionary-vector compression methods are reviewed and compared, as well as several combinations of these methods. The methods investigated were compared against three typical data library types: 1) program source data files, 2) test case data files, and 3) text data files. Compression percentage, speed of compression and decompression, storage requirements, error recovery, and data security comparisons of the various methods are also presented.
- Date Issued
- 1977
- PURL
- http://purl.flvc.org/fcla/dt/13852
- Subject Headings
- Data compression (Computer science), Data libraries--Automation
- Format
- Document (PDF)
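The Floyd thesis compares Huffman encoding against string-suppression and vector methods. As a minimal, self-contained illustration of the Huffman technique it surveys (a generic textbook construction, not code from the thesis), the following Python sketch builds a prefix code with the standard heap-based greedy merge:

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict[str, str]:
    """Build a Huffman code table for the symbols in `text`.

    Standard greedy construction: repeatedly merge the two
    lowest-frequency subtrees. Returns {symbol: bitstring}.
    """
    freq = Counter(text)
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    if len(heap) == 1:  # degenerate single-symbol input
        (_, _, table), = heap
        return {sym: "0" for sym in table}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # Prefix '0' onto one subtree's codes and '1' onto the other's.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

source = "data compression in computer based data files"
table = huffman_code(source)
encoded = "".join(table[ch] for ch in source)
print(f"{8 * len(source)} bits raw -> {len(encoded)} bits encoded")
```

Frequent symbols receive short codes, which is why the technique pays off on the skewed symbol distributions of source code and text libraries studied in the thesis.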
- Title
- Densely-centered uniform P-search: A fast motion estimation algorithm.
- Creator
- Greenberg, Joshua H., Florida Atlantic University, Furht, Borko
- Abstract/Description
- Video compression technology promises to be the key to the transmission of motion video. A number of techniques have been introduced in the past few years, particularly that developed by the Motion Picture Experts Group (MPEG). The MPEG algorithm uses Motion Estimation to reduce the amount of data that is stored for each frame. Motion Estimation uses a reference frame as a codebook for a modified Vector Quantization process. While an exhaustive search for Motion Estimation Vectors is time-consuming, various fast search algorithms have been developed. These techniques are surveyed, and the theoretical framework for a new search algorithm is developed: Densely-Centered Uniform P-Search. The time complexity of Densely-Centered Uniform P-Search is comparable to that of other popular Motion Estimation techniques, and the algorithm shows superior results on a variety of motion video sources.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15286
- Subject Headings
- Image processing--Digital techniques, Data compression (Telecommunication)
- Format
- Document (PDF)
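Greenberg's abstract describes fast search algorithms for motion estimation. For orientation, this sketch implements the exhaustive full search that such fast algorithms (including the proposed Densely-Centered Uniform P-Search, which is not reproduced here) aim to approximate at lower cost; the block size and search radius are illustrative assumptions:

```python
import numpy as np

def full_search(ref: np.ndarray, cur: np.ndarray, bx: int, by: int,
                block: int = 16, radius: int = 7) -> tuple[int, int]:
    """Exhaustive block matching: return the (dy, dx) displacement in
    `ref` minimizing SAD against the block at (by, bx) in `cur`."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_vec = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()  # sum of absolute differences
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))  # content moves down 2, left 3
print(full_search(ref, cur, bx=16, by=16))      # expect (-2, 3) back to ref
```

Full search evaluates every candidate in the window, here (2r+1)^2 = 225 SAD computations per block; fast algorithms of the kind surveyed in the thesis sample that window far more sparsely.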
- Title
- The effect of compression on performance in a demand paging operating system.
- Creator
- Wynn, Allen Chester., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this thesis, we measure and analyze the effects of compression in a demand paging operating system. We first explore existing compression algorithms and page replacement policies currently in use. Then we examine the OS/2 operating system, which is modified to include page-based compression. Software trace hooks are inserted into the operating system to determine the amount of time required to process a page fault for each type of page (e.g., non-compressed, compressed, zero-filled) and the number of page faults for each type of page. Software trace measurements as well as physical timings are taken on a system without compressed pages and on the same system with compressed pages. We find that the system with compressed pages shows a slight increase in paging activity for memory-constrained systems, but performance (time) is improved in both memory-constrained and unconstrained systems.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15421
- Subject Headings
- Paging (Computer science), Data compression (Computer science), Operating systems (Computers)
- Format
- Document (PDF)
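Wynn's thesis instruments OS/2's pager; that kernel code is not available here. As a loose user-space analogue of page-based compression (names and parameters are invented for illustration), the sketch below holds evicted pages as zlib-compressed blobs so a fault is serviced by decompression rather than a disk read:

```python
import zlib

PAGE_SIZE = 4096

class CompressedPageStore:
    """Toy model of a compressed swap cache: evicted pages are held
    in RAM as zlib-compressed blobs instead of going to disk."""

    def __init__(self):
        self._store: dict[int, bytes] = {}

    def evict(self, page_no: int, data: bytes) -> None:
        assert len(data) == PAGE_SIZE
        self._store[page_no] = zlib.compress(data, level=1)  # fast setting

    def fault_in(self, page_no: int) -> bytes:
        # A hit here replaces a disk read with a cheap decompression.
        return zlib.decompress(self._store.pop(page_no))

    def compressed_bytes(self) -> int:
        return sum(len(b) for b in self._store.values())

store = CompressedPageStore()
page = (b"int main(void) { return 0; }\n" * 200)[:PAGE_SIZE].ljust(PAGE_SIZE, b"\0")
store.evict(7, page)
print(f"{PAGE_SIZE} -> {store.compressed_bytes()} bytes held")
assert store.fault_in(7) == page
```

The trade-off the thesis measures falls out directly: compression adds CPU time to each eviction and fault, but more pages fit in memory, so fewer faults reach the disk.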
- Title
- Interactive progressive encoding system for transmission of complex images.
- Creator
- Furht, Borko, Wang, Yingli, Celli, Joseph
- Date Issued
- 1998-11
- PURL
- http://purl.flvc.org/fcla/dt/351011
- Subject Headings
- Internetworking (Telecommunication), Digital video, Image transmission, JPEG (Image coding standard), Data compression (Telecommunication)--Standards, Image compression--Standards
- Format
- Document (PDF)
- Title
- HEVC optimization in mobile environments.
- Creator
- Garcia, Ray, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Recently, multimedia applications and their use have grown dramatically in popularity, in large part due to mobile device adoption by the consumer market. Applications such as video conferencing have gained popularity. These applications and others have a strong video component that uses the mobile device’s resources. These resources include processing time, network bandwidth, memory use, and battery life. The goal is to reduce the need for these resources by reducing the complexity of the coding process. Mobile devices offer unique characteristics that can be exploited for optimizing video codecs. The combination of small display size, video resolution, and human vision factors, such as acuity, allows encoder optimizations that will not (or will only minimally) impact subjective quality. The focus of this dissertation is optimizing video services in mobile environments. Industry has begun migrating from H.264 video coding to the more resource-intensive but compression-efficient High Efficiency Video Coding (HEVC). However, there has been no proper evaluation and optimization of HEVC for mobile environments. Subjective quality evaluations were performed to assess relative quality between H.264 and HEVC. This will allow for better use of device resources and migration to new codecs where it is most useful. The complexity of HEVC is a significant barrier to adoption on mobile devices, and complexity reduction methods are necessary. Optimal use of encoding options is needed to maximize quality and compression while minimizing encoding time. Methods for optimizing coding mode selection for HEVC were developed. The complexity of HEVC encoding can be further reduced by exploiting the mismatch between the resolution of the video, the resolution of the mobile display, and the ability of the human eyes to acquire and process video under these conditions. The perceptual optimizations developed in this dissertation use the properties of spatial (visual acuity) and temporal information processing (motion perception) to reduce the complexity of HEVC encoding. A unique feature of the proposed methods is that they reduce both encoding complexity and encoding time. The proposed HEVC encoder optimization methods reduced encoding time by 21.7% and bitrate by 13.4% with insignificant impact on subjective quality evaluations. These methods can easily be implemented today within HEVC.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004112
- Subject Headings
- Coding theory, Digital coding -- Data processing, Image processing -- Digital techniques, Multimedia systems, Video compression
- Format
- Document (PDF)
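The dissertation's perceptual optimizations rest on the mismatch between what a mobile display delivers and what the eye can resolve. The sketch below is a rough back-of-the-envelope version of that acuity argument only; the ~60 cycles/degree figure is a common vision-science rule of thumb, and the function names and device numbers are assumptions, not the dissertation's method:

```python
import math

def max_useful_pixels_per_degree(acuity_cpd: float = 60.0) -> float:
    """Nyquist: ~60 cycles/degree of acuity needs ~120 pixels/degree."""
    return 2.0 * acuity_cpd

def display_pixels_per_degree(px_width: int, screen_width_m: float,
                              viewing_distance_m: float) -> float:
    """Pixels the display actually delivers per degree of visual angle."""
    degrees = 2 * math.degrees(math.atan(screen_width_m / (2 * viewing_distance_m)))
    return px_width / degrees

# A hypothetical 5-inch 1080p phone (~0.11 m wide) held at 0.4 m:
ppd = display_pixels_per_degree(1920, 0.11, 0.40)
print(f"{ppd:.0f} px/deg delivered vs {max_useful_pixels_per_degree():.0f} px/deg useful")
# Once delivered px/deg reaches the useful limit, an encoder can spend
# fewer bits (and less mode-search effort) on the finest spatial detail
# without a perceptible loss in subjective quality.
```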
- Title
- Permutation-based data compression.
- Creator
- Mihnea, Amalya, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- The use of permutations in data compression is an aspect that is worthy of further exploration. The work that has been done in video compression based on permutations was primarily oriented towards lossless algorithms. The study of previous algorithms has led to a new algorithm that could be either lossless or lossy, for which the amount of compression and the quality of the output can be controlled. The lossless version of our algorithm performs close to lossy versions of H.264 and it improves on them for the majority of the videos that we analyzed. Our algorithm could be used in situations where there is a need for lossless compression and the video sequences are part of a single scene, e.g., medical videos, where loss of information could be risky or expensive. Some results on permutations, which may be of independent interest, arose in developing this algorithm. We report on these as well.
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3333054
- Subject Headings
- Data compression (Telecommunication), Combinatorics, Network architecture and design, Computer network architectures, Mathematical optimization
- Format
- Document (PDF)
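Mihnea's algorithm itself is not reproduced in the abstract, but any permutation-based coder needs a compact encoding of permutations. As background math (not the thesis's method), the Lehmer-code ranking below stores a permutation of n items as one integer in [0, n!), the information-theoretic minimum of about log2(n!) bits:

```python
import math

def rank(perm: list[int]) -> int:
    """Lexicographic index of `perm` among all permutations of its
    elements (Lehmer code evaluated in the factorial number system)."""
    r, remaining = 0, sorted(perm)
    for p in perm:
        i = remaining.index(p)        # unused elements smaller than p
        r = r * len(remaining) + i    # Horner step in mixed (factorial) radix
        remaining.pop(i)
    return r

def unrank(r: int, n: int) -> list[int]:
    """Inverse of rank: recover the permutation from its integer index."""
    remaining, out = list(range(n)), []
    for k in range(n, 0, -1):
        f = math.factorial(k - 1)
        out.append(remaining.pop(r // f))
        r %= f
    return out

p = [3, 1, 4, 2, 0]
r = rank(p)
print(r, unrank(r, 5) == p)  # 83 True: the index round-trips
print(f"{math.ceil(math.log2(math.factorial(5)))} bits suffice for n=5")
```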
- Title
- Information-theoretics based technoeconomic growth models: simulation and computation of forecasting in telecommunication services.
- Creator
- Yassin, Raef Rashad., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research is concerned with algorithmic representation of technoeconomic growth concerning modern and next-generation telecommunications, including Internet service. The goal of this study is to emphasize efforts to establish the associated forecasting, and the envisioned tasks include: (i) reviewing the technoeconomic considerations prevailing in the telecommunication (telco) service industry and their implicating features; (ii) studying relevant aspects of underlying complex system evolution (akin to biological systems); (iii) pursuant co-evolution modeling of competitive business structures using dichotomous (flip-flop) states as seen in predator evolutions; (iv) conceiving a novel algorithm based on information-theoretic principles toward technoeconomic forecasting on the basis of a modified Fisher-Kaysen model consistent with the proportional-fairness concept of consumers' willingness-to-pay; and (v) evaluating forecast needs on the inter-office, facility-based, congestion-sensitive traffic encountered. Commensurate with the topics indicated above, necessary algorithms, analytical derivations, and compatible models are proposed. Relevant computational exercises are performed with MATLAB[TM] using data gathered from the open literature on the service profiles of telecommunication companies (telcos), and ad hoc model verifications are performed on the results. Lastly, discussions and inferences are made, with open questions identified for further research.
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3356898
- Subject Headings
- Signal theory (Telecommunication), Data compression (Computer science), Telecommunication systems, Mathematical models, Information science
- Format
- Document (PDF)
- Title
- XYZ Video Compression: An algorithm for real-time compression of motion video based upon the three-dimensional discrete cosine transform.
- Creator
- Westwater, Raymond John., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- XYZ Video Compression denotes a video compression algorithm that operates in three dimensions, without the overhead of motion estimation. The smaller overhead of this algorithm as compared to MPEG and other "standards-based" compression algorithms using motion estimation suggests the suitability of this algorithm to real-time applications. The demonstrated results of compression of standard motion video benchmarks suggest that XYZ Video Compression is not only a faster algorithm, but develops superior compression ratios as well. The algorithm is based upon the three-dimensional Discrete Cosine Transform (DCT). Pixels are organized as 8 x 8 x 8 cubes by taking 8 x 8 squares out of 8 consecutive frames. A fast three-dimensional transform is applied to each cube, generating 512 DCT coefficients. The energy-packing property of the DCT concentrates the energy in the cube into few coefficients. The DCT coefficients are quantized to maximize the energy concentration at the expense of introduction of a user-determined level of error. A method of adaptive quantization that generates optimal quantizers based upon statistics gathered for the 8 consecutive frames is described. The sensitivity of the human eye to various DCT coefficients is used to modify the quantizers to create a "visually equivalent" cube with still greater energy concentration. Experiments are described that justify choice of Human Visual System factors to be folded into the quantization step. The quantized coefficients are then encoded into a data stream using a method of entropy coding based upon the statistics of the quantized coefficients. The bitstream generated by entropy coding represents the compressed data of the 8 motion video frames, and typically will be compressed at 50:1 at 5% error. The decoding process is the reverse of the encoding process: the bitstream is decoded to generate blocks of quantized DCT coefficients, the DCT coefficients are dequantized, and the Inverse Discrete Cosine Transform is performed on the cube to recover pixel data suitable for display. The elegance of this technique lies in its simplicity, which lends itself to inexpensive implementation of both encoder and decoder. Finally, real-time implementation of the XYZ Compressor/Decompressor is discussed. Experiments are run to determine the effectiveness of the implementation.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12450
- Subject Headings
- Digital video, Data compression (Telecommunication), Image processing--Digital techniques, Coding theory
- Format
- Document (PDF)
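The abstract spells out the XYZ pipeline: a 3-D DCT over 8 x 8 x 8 cubes, quantization, then entropy coding. A minimal sketch of the transform-and-quantize core follows (SciPy's separable DCT stands in for the thesis's fast transform, and the uniform step size is a placeholder for its adaptive, visually weighted quantizers):

```python
import numpy as np
from scipy.fft import dctn, idctn

def xyz_cube_roundtrip(cube: np.ndarray, q: float = 20.0) -> np.ndarray:
    """Forward 3-D DCT on an 8x8x8 pixel cube, uniform quantization,
    then dequantize + inverse DCT. `q` is a placeholder step size."""
    assert cube.shape == (8, 8, 8)
    coeffs = dctn(cube.astype(np.float64), norm="ortho")  # separable 3-D DCT-II
    quantized = np.round(coeffs / q)       # most high-frequency terms -> 0
    print(f"{int(np.count_nonzero(quantized))}/512 coefficients survive")
    return idctn(quantized * q, norm="ortho")

# 8 consecutive 8x8 blocks with slow spatial and temporal variation:
# the energy-packing property pushes almost everything into a few
# low-frequency coefficients, which is what the codec exploits.
t = np.arange(8).reshape(8, 1, 1)
y, x = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
cube = 128 + 40 * np.sin((x + y) / 4 + 0.2 * t)
out = xyz_cube_roundtrip(cube)
print("max abs error:", np.abs(out - cube).max())
```

The surviving handful of coefficients is what the entropy coder then turns into the 50:1 bitstream the abstract cites; decoding reverses the same three steps.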
- Title
- A visual perception threshold matching algorithm for real-time video compression.
- Creator
- Noll, John M., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A barrier to the use of digital imaging is the vast storage requirements involved. One solution is compression. Since imagery is ultimately subject to human visual perception, it is worthwhile to design and implement an algorithm which performs compression as a function of perception. The underlying premise of the thesis is that if the algorithm closely matches visual perception thresholds, then its coded images contain only the components necessary to recreate the perception of the visual stimulus. Psychophysical test results are used to map the thresholds of visual perception, and develop an algorithm that codes only the image content exceeding those thresholds. The image coding algorithm is simulated in software to demonstrate compression of a single frame image. The simulation results are provided. The algorithm is also adapted to real-time video compression for implementation in hardware.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14857
- Subject Headings
- Image processing--Digital techniques, Computer algorithms, Visual perception, Data compression (Computer science)
- Format
- Document (PDF)
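Noll's coder keeps only image content that exceeds measured perception thresholds. The sketch below applies the same shape of idea to an 8x8 DCT block; the threshold surface here is invented for illustration, whereas the thesis derives its thresholds from psychophysical testing:

```python
import numpy as np
from scipy.fft import dctn, idctn

def threshold_code(block: np.ndarray, base: float = 4.0) -> np.ndarray:
    """Keep only 8x8 DCT coefficients whose magnitude exceeds a
    frequency-dependent visibility threshold; zero the rest."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    # Invented threshold surface: tolerance grows with spatial frequency,
    # mimicking the eye's falling contrast sensitivity at high frequencies.
    thresh = base * (1.0 + u + v)
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    kept = np.where(np.abs(coeffs) > thresh, coeffs, 0.0)
    print(f"kept {np.count_nonzero(kept)}/64 coefficients")
    return idctn(kept, norm="ortho")

rng = np.random.default_rng(1)
block = 128 + 30 * rng.standard_normal((8, 8))
recon = threshold_code(block)
print("mean abs error:", np.abs(recon - block).mean().round(2))
```

Under the thesis's premise, the discarded coefficients sit below the viewer's perception thresholds, so only sub-threshold error is introduced.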
- Title
- Video transcoding using machine learning.
- Creator
- Holder, Christopher., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The field of video transcoding has been evolving throughout the past ten years. The need for transcoding of video files has greatly increased because new, upcoming standards are incompatible with old ones. This thesis takes the method of using machine learning for video transcoding mode decisions and discusses ways to improve the process of generating the algorithm for implementation in different video transcoders. The transcoding methods used decrease the complexity of the mode decision inside the video encoder. Methods that automate the process and improve results are also discussed and implemented in two different sets of transcoders: H.263 to VP6, and MPEG-2 to H.264. Both of these transcoders have shown a complexity reduction of almost 50%. Video transcoding is important because the number of video standards has been increasing while devices can usually decode only one specific codec.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/166451
- Subject Headings
- Coding theory, Image transmission, Technological innovations, File conversion (Computer science), Data structures (Computer science), MPEG (Video coding standard), Digital media, Video compression
- Format
- Document (PDF)
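Holder's transcoders replace the encoder's exhaustive mode search with a learned decision rule. The sketch below shows the general pattern only (features, labels, and the toy labeling rule are invented; the thesis trains on statistics from real encoded streams): fit a shallow decision tree offline, then let one cheap prediction replace the per-block mode search:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Hypothetical training data: per-block features gathered during the
# first-pass decode (pixel variance, incoming motion-vector magnitude,
# residual energy), labeled with the mode a full search would choose.
n = 2000
X = np.column_stack([
    rng.uniform(0, 500, n),    # block variance
    rng.uniform(0, 16, n),     # MV magnitude from the incoming stream
    rng.uniform(0, 100, n),    # residual energy
])
# Toy labeling rule standing in for full-search decisions:
# flat, static blocks -> SKIP (0); moderate -> INTER (1); busy -> INTRA (2)
y = np.where(X[:, 0] + 4 * X[:, 2] < 150, 0,
             np.where(X[:, 0] < 350, 1, 2))

# Shallow tree: fast enough to evaluate inside an encoder's inner loop.
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("training accuracy:", clf.score(X, y).round(3))

# In the transcoder, one cheap predict() replaces trying every mode:
block_features = [[120.0, 2.5, 10.0]]
print("predicted mode:", ["SKIP", "INTER", "INTRA"][clf.predict(block_features)[0]])
```

Skipping the exhaustive mode evaluation is where a complexity reduction of the order the abstract reports (almost 50%) comes from, at the cost of occasional suboptimal mode choices.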