Current Search: Department of Computer and Electrical Engineering and Computer Science » Furht, Borko
- Title
- Techniques for improving the capacity of video on demand systems.
- Creator
- Kalva, Hari., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis presents a survey of multimedia networks and techniques for improving the capacity of video-on-demand systems. A survey and comparative evaluation were conducted to determine the multimedia capabilities of various networks. Video on demand is an electronic video rental system in which clients request and play videos on demand. A video-on-demand system can be implemented over an existing cable TV network or an upgraded ADSL network. The two techniques used to improve the capacity of video-on-demand systems are segmentation and multicasting. Segmentation divides the video into several fixed-length segments, which are then transmitted at regular intervals instead of transmitting the video continuously. With multicasting, more than one user requesting the same video is served by a single video stream. Multicasting further assumes that each subscriber has limited storage space, so the same video segments can be multicast to subscribers simultaneously even if the requests for a video are not synchronous.
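The latency benefit of segmentation can be sketched with a back-of-the-envelope calculation; the segment count and running time below are illustrative, not figures from the thesis:

```python
def startup_latency(duration_min, num_segments):
    """Worst-case wait before playback can begin when the video is
    split into equal fixed-length segments and the first segment is
    rebroadcast once per segment interval."""
    return duration_min / num_segments

# A 120-minute video split into 8 segments: a viewer waits at most
# 15 minutes for segment 1, versus up to 120 minutes for the next
# continuous single-stream rebroadcast.
```
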
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15112
- Subject Headings
- Video recording, Multimedia systems, Interactive multimedia
- Format
- Document (PDF)
- Title
- Disk I/O scheduling techniques for multimedia systems.
- Creator
- Sommers, Daniel R., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The home computer user represents a significant portion of the multimedia market. To the home user, multimedia is the ability to play, edit, and even create movies (video and sound) on a home computer system. While many studies concentrate on large multimedia servers that support hundreds or even thousands of simultaneous users, very few focus on the home computer configuration. This thesis presents the mechanisms for generating, compressing, transmitting, and decompressing multimedia data as a framework for the long-term storage and retrieval of multimedia data on disk drives. After developing the framework, the thesis presents an in-depth design and analysis of a disk-based multimedia storage system, proposes a scheduling algorithm for data retrieval (DAN-SCAN), and presents the results of a simulation of the algorithm.
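The DAN-SCAN algorithm itself is not detailed in this abstract; as background, the classic SCAN ("elevator") disk-scheduling policy that such variants build on can be sketched as follows (cylinder numbers are illustrative):

```python
def scan_order(requests, head, direction=1):
    """Classic SCAN ('elevator') disk scheduling sketch.

    Serves pending cylinder requests in the current sweep direction,
    then reverses and serves the rest. Multimedia-aware variants such
    as the thesis's DAN-SCAN additionally account for stream
    deadlines, which this plain SCAN omits.
    """
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction > 0 else down + up
```
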
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15235
- Subject Headings
- Multimedia systems, Disk access (Computer science), Computer storage devices, Data disk drives
- Format
- Document (PDF)
- Title
- A novel technique for the retrieval of compressed image and video databases.
- Creator
- Saksobhavivat, Pornvit., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Classic methods for indexing image and video databases use either keywords or analysis of color distribution. In recent years, new compression standards have emerged for images and video: JPEG and MPEG, respectively. One of the basic operations in JPEG and MPEG is the Discrete Cosine Transform (DCT). The human visual system is known to be highly dependent on spatial frequency, and the DCT provides a good approximation of an image's spatial frequency content as perceived by the human eye. We take advantage of this property of the DCT for indexing image and video databases. However, the two-dimensional DCT produces 64 coefficients per block of 8 x 8 pixels, too many to compute for fast indexing results. We therefore use only the first DCT coefficient, the DC coefficient, to represent each 8 x 8 block of transformed data. This representation yields satisfactory indexing results.
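Because the DC coefficient of an orthonormal 8 x 8 DCT equals eight times the block mean, a DC-based index can be computed without running a full DCT. A minimal sketch follows; the edge-block handling and the use of block means are simplifications for illustration, not the thesis's exact procedure:

```python
import numpy as np

def dc_index(image):
    """One DC value per 8x8 block of a grayscale image.

    For the orthonormal 2-D DCT of an 8x8 block, the DC coefficient
    is 8 times the block mean, so block means scaled by 8 stand in
    for a full DCT pass. Partial edge blocks are simply dropped.
    """
    h, w = image.shape
    h8, w8 = h - h % 8, w - w % 8
    blocks = image[:h8, :w8].reshape(h8 // 8, 8, w8 // 8, 8)
    return blocks.mean(axis=(1, 3)) * 8.0
```

Two images can then be compared by the distance between their (much smaller) DC maps instead of their full pixel data.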
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15442
- Subject Headings
- Video compression, Image compression, Indexing
- Format
- Document (PDF)
- Title
- Virtualization techniques for mobile systems.
- Creator
- Jaramillo, David, Furht, Borko, Agarwal, Ankur, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In the current mobile system environment there is a large gap between the use of smartphones for personal and for enterprise purposes, due to required enterprise security policies, privacy concerns, and freedom of use. Data plans on mobile systems have become so widespread that their rate of adoption by everyday customers has far outpaced the ability of enterprises to keep up with existing secure enterprise infrastructures. Most enterprises require or provide access to email and other official information on smart platforms, which presents a big challenge in securing their systems. Because of the security issues and policies imposed by the enterprise when the same device is used for dual purposes (personal and enterprise), consumers often lose individual freedom and convenience at the cost of security. Few solutions have successfully addressed this challenge. One effective approach is to partition the mobile device so that enterprise system access and its information are completely separated from personal information. Several approaches to mobile virtualization are described and presented that create a secure and secluded environment for enterprise information while allowing the user to access their personal information. A reference architecture is then presented that allows for integration with existing enterprise mobile device management systems while providing a lightweight solution for containerizing mobile applications. This solution is then benchmarked against several existing mobile virtualization solutions.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fau/fd/FA0004028
- Subject Headings
- Mobile communication systems, Virtual computer systems
- Format
- Document (PDF)
- Title
- Innovative web applications for analyzing traffic operations.
- Creator
- Petrovska, Natasha, Furht, Borko, Stevanovic, Aleksandar, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Road traffic, along with other key infrastructure sectors such as telecommunications and power, plays an important role in the economic and technological growth of a country. Traffic engineers and analysts are responsible for solving a diversity of traffic problems, such as traffic data acquisition and evaluation. In response to the need to improve traffic operations, researchers implement advanced technologies, integrate systems and data, and develop state-of-the-art applications. This thesis introduces three novel web applications that aim to offer traffic operators, managers, and analysts the ability to monitor congestion and to analyze incidents and signal performance measures. They offer more detailed analysis, providing users with insights from different levels and perspectives. The benefit of providing these visualization tools is a more efficient estimation of the performance of local networks, thus facilitating the decision-making process in case of emergency events.
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00004459
- Subject Headings
- Application program interfaces (Computer software), Internet -- Mathematical models, Traffic congestion -- Management, Traffic estimation -- Computer simulation, Transportation demand -- Forecasting
- Format
- Document (PDF)
- Title
- Context-based Image Concept Detection and Annotation.
- Creator
- Zolghadr, Esfandiar, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Scene understanding attempts to produce a textual description of the visible and latent concepts in an image that captures the real meaning of the scene. Concepts are the objects, events, or relations depicted in an image. To recognize concepts, the decisions of an object detection algorithm must be enhanced from visual similarity to semantic compatibility. Semantically relevant concepts convey the most consistent meaning of the scene. Object detectors analyze visual properties (e.g., pixel intensities, texture, color gradient) of sub-regions of an image to identify objects. The initially assigned object names must be further examined to ensure they are compatible with each other and with the scene. By enforcing inter-object dependencies (e.g., co-occurrence, spatial, and semantic priors) and object-to-scene constraints as background information, a concept classifier predicts the most semantically consistent set of names for the discovered objects. This additional background information describing concepts is called context. In this dissertation, a framework for context-based concept detection is presented that uses a combination of multiple contextual relationships to refine the results of underlying feature-based object detectors and produce the most semantically compatible concepts. Beyond their inability to capture semantic dependencies, object detectors also suffer from the high dimensionality of the feature space. Variance in the image (i.e., quality, pose, articulation, illumination, and occlusion) can likewise result in low-quality visual features that reduce the accuracy of detected concepts. The object detectors used in the context-based framework experiments in this study are based on state-of-the-art generative and discriminative graphical models, in which the relationships between model variables can be easily described and the dependencies precisely characterized.
The generative context-based implementations are extensions of Latent Dirichlet Allocation, a leading topic modeling approach that is very effective in reducing the dimensionality of the data. The discriminative context-based approach extends Conditional Random Fields, which allow efficient and precise model construction by specifying and including only the cases that are related and influence the model. The dataset used for training and evaluation is MIT SUN397. The experimental results show an overall 15% increase in annotation accuracy and a 31% improvement in the semantic saliency of the annotated concepts.
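The idea of re-ranking detector outputs by contextual compatibility can be illustrated with a toy co-occurrence prior. The labels, detector scores, and prior values below are invented for illustration; the dissertation's actual models (LDA and CRF extensions) are far richer:

```python
from itertools import product

# Hypothetical pairwise co-occurrence prior (symmetric; values invented).
COOCCUR = {
    frozenset(["boat", "water"]): 0.9,
    frozenset(["boat", "road"]): 0.1,
    frozenset(["car", "road"]): 0.9,
    frozenset(["car", "water"]): 0.1,
}

def rerank(candidates):
    """Pick the jointly most compatible label assignment.

    `candidates` maps each detected region to (label, detector_score)
    pairs; the chosen assignment maximizes the sum of detector scores
    plus pairwise co-occurrence compatibility between chosen labels.
    """
    regions = list(candidates)
    best, best_score = None, float("-inf")
    for combo in product(*(candidates[r] for r in regions)):
        labels = [label for label, _ in combo]
        score = sum(s for _, s in combo)
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                score += COOCCUR.get(frozenset([labels[i], labels[j]]), 0.5)
        if score > best_score:
            best, best_score = dict(zip(regions, labels)), score
    return best
```

Here a weak "boat" detection beats a slightly stronger "car" once the neighboring "water" region is taken into account.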
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004745
- Subject Headings
- Computer vision--Mathematical models., Pattern recognition systems., Information visualization., Natural language processing (Computer science), Multimodal user interfaces (Computer systems), Latent structure analysis., Expert systems (Computer science)
- Format
- Document (PDF)
- Title
- Predictive Models for Ebola using Machine Learning Algorithms.
- Creator
- Jain, Abhishek, Agarwal, Ankur, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Identifying and tracking individuals affected by the Ebola virus in densely populated areas is a unique and urgent challenge in the public health sector. Currently, mapping the spread of the Ebola virus is done manually; however, with the help of social contact networks we can model dynamic graphs and predictive diffusion models of the Ebola virus based on its impact on either a specific person or a specific community. With this model, we can make more precise forward predictions of the disease's propagation and identify possibly infected individuals, which will help perform trace-back analysis to locate the possible source of infection for a social group. The model visualizes and identifies the families and tightly connected social groups who have had contact with an Ebola patient, and it offers a proactive approach to reducing the risk of Ebola exposure within a community or geographic location.
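One ingredient of such contact-tracing analysis, walking a contact network outward from a known patient, can be sketched as a breadth-first search. The names and hop limit are illustrative; the thesis's diffusion models are considerably more elaborate:

```python
from collections import deque

def possibly_exposed(contacts, patient, max_hops=2):
    """Flag everyone within `max_hops` contact links of `patient`.

    `contacts` maps each person to the people they have met. Returns
    a dict mapping each flagged person to their contact distance.
    """
    seen = {patient: 0}
    queue = deque([patient])
    while queue:
        person = queue.popleft()
        if seen[person] == max_hops:
            continue  # do not expand beyond the hop limit
        for other in contacts.get(person, ()):
            if other not in seen:
                seen[other] = seen[person] + 1
                queue.append(other)
    del seen[patient]
    return seen
```
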
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004919
- Subject Headings
- Communicable diseases--Epidemiology., Public health surveillance., Ebola virus disease--Transmission., Machine learning., Computer algorithms., Virtual reality., Interactive multimedia., Computer graphics., History--Graphic methods., Historiography--Technological innovations.
- Format
- Document (PDF)
- Title
- Novel Techniques in Genetic Programming.
- Creator
- Fernandez, Thomas, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Three major problems make Genetic Programming infeasible or impractical for real-world problems. The first is excessive time complexity. In nature, the evolutionary process can take millions of years, a time frame that is clearly not acceptable for solving problems on a computer. To apply Genetic Programming to real-world problems, it is essential that its efficiency be improved. The second is overfitting (where results are inaccurate outside the training data). In a paper[36] for the Federal Reserve Bank, authors Neely and Weller state: “a perennial problem with using flexible, powerful search procedures like Genetic Programming is overfitting, the finding of spurious patterns in the data. Given the well-documented tendency for the genetic program to overfit the data it is necessary to design procedures to mitigate this.” The third is the difficulty of determining optimal control parameters for the Genetic Programming process. Control parameters govern the evolutionary process and include settings such as the size of the population and the number of generations to be run. In his book[45], Banzhaf describes this problem: “The bad news is that Genetic Programming is a young field and the effect of using various combinations of parameters is just beginning to be explored.” We address these problems by implementing and testing a number of novel techniques and improvements to the Genetic Programming process, conducting experiments on data sets of various degrees of difficulty to demonstrate success with a high degree of statistical confidence.
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fau/fd/FA00012570
- Subject Headings
- Evolutionary programming (Computer science), Genetic algorithms, Genetic programming (Computer science)
- Format
- Document (PDF)
- Title
- Cloud-based Skin Lesion Diagnosis System using Convolutional Neural Networks.
- Creator
- Akar, Esad, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Skin cancer is a major medical problem. If not detected early enough, skin cancers such as melanoma can turn fatal. As a result, early detection of skin cancer, as with other types of cancer, is key to survival. In recent times, deep learning methods have been explored to create improved skin lesion diagnosis tools, and in some cases their accuracy has reached dermatologist-level accuracy. For this thesis, a full-fledged cloud-based diagnosis system powered by convolutional neural networks (CNNs) with near-dermatologist-level accuracy has been designed and implemented, in part to increase early detection of skin cancer. A large range of client devices can connect to the system to upload digital lesion images and request diagnosis results from the diagnosis pipeline. The diagnosis is handled by a two-stage CNN pipeline hosted on a server, where a preliminary CNN performs a quality check on user requests and a diagnosis CNN outputs lesion predictions.
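The two-stage dispatch logic described above might be sketched as below, with plain callables standing in for the trained CNNs; the quality threshold and the response fields are assumptions for illustration, not the thesis's actual API:

```python
def diagnose(image, quality_net, diagnosis_net, min_quality=0.5):
    """Two-stage pipeline sketch: a cheap gating network screens out
    unusable uploads before the heavier diagnosis network runs.

    `quality_net` returns a quality score in [0, 1]; `diagnosis_net`
    returns a label -> probability mapping. Both are placeholders
    for trained CNNs.
    """
    if quality_net(image) < min_quality:
        return {"status": "rejected", "reason": "low image quality"}
    probs = diagnosis_net(image)
    label = max(probs, key=probs.get)
    return {"status": "ok", "label": label, "confidence": probs[label]}
```
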
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013150
- Subject Headings
- Skin Diseases--diagnosis, Skin--Cancer--Diagnosis, Diagnosis--Methodology, Neural networks, Cloud computing
- Format
- Document (PDF)
- Title
- COMPARISON OF PRE-TRAINED CONVOLUTIONAL NEURAL NETWORK PERFORMANCE ON GLIOMA CLASSIFICATION.
- Creator
- Andrews, Whitney Angelica Johanna, Furht, Borko, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
Gliomas are an aggressive class of brain tumors that are associated with a better prognosis at a lower grade level. Effective differentiation and classification are imperative for early treatment. MRI is a popular medical imaging modality for detecting and diagnosing brain tumors due to its capability to non-invasively highlight the tumor region. With the rise of deep learning, researchers have used convolutional neural networks for classification in this domain, specifically pre-trained networks, to reduce computational costs. However, the variety of MRI modalities and MRI machines, together with poor image scan quality, causes different network structures to yield different performance metrics. Each pre-trained network is designed with a different structure that allows robust results under specific problem conditions. This thesis aims to close a gap in the literature by comparing the performance of popular pre-trained networks on a controlled dataset that differs from the domain on which the networks were trained.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013450
- Subject Headings
- Gliomas, Neural networks (Computer science), Deep Learning, Convolutional neural networks
- Format
- Document (PDF)
- Title
- ILLUMINATING CYBER THREATS FOR SMART CITIES: A DATA-DRIVEN APPROACH FOR CYBER ATTACK DETECTION WITH VISUAL CAPABILITIES.
- Creator
- Neshenko, Nataliia, Furht, Borko, Bou-Harb, Elias, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
A modern urban infrastructure no longer operates in isolation; instead, it leverages the latest technologies to collect, process, and distribute aggregated knowledge to improve the quality of the provided services and promote efficient resource consumption. However, the ambiguity of ever-evolving cyber threats and their debilitating consequences introduces new barriers for decision-makers. Numerous techniques have been proposed to address cyber misdemeanors against such critical realms and to increase the accuracy of attack inference; however, they remain limited to detection algorithms, omitting attack attribution and impact interpretation. The lack of the latter makes transitioning these methods into operation difficult, if not impossible. In this dissertation, we first investigate the threat landscape of smart cities, survey and review the progress in data-driven methods for situational awareness, and evaluate their effectiveness in addressing various cyber threats. We then propose an approach that integrates machine learning, the theory of belief functions, and dynamic visualization to complement available attack inference for industrial control systems (ICS) deployed in smart cities. Our framework offers an extensive scope of knowledge, as opposed to solely evident indicators of malicious activity. It gives cyber operators and digital investigators an effective tool to dynamically and visually interact with, explore, and analyze heterogeneous, complex data, and it provides rich context information. Such an approach is envisioned to facilitate cyber incident interpretation and support a timely, evidence-based decision-making process.
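The "theory of belief functions" component refers to Dempster-Shafer evidence combination. A minimal sketch of Dempster's rule follows; the attack-type hypotheses and mass values in the usage are hypothetical, and the dissertation's framework combines evidence from real detectors rather than hand-set masses:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    Masses are dicts mapping frozensets of hypotheses (e.g. candidate
    attack types) to belief mass. Mass assigned to contradictory
    pairs (empty intersection) is discarded and the rest renormalized.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

Two partially agreeing sensors thus yield a sharper joint belief than either alone, while the discarded conflict mass quantifies their disagreement.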
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013813
- Subject Headings
- Smart cities, Cyber intelligence (Computer security), Visual analytics, Threats
- Format
- Document (PDF)
- Title
- Content-based image retrieval using relevance feedback.
- Creator
- Marques, Oge, Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This dissertation presents the results of research that led to the development of a complete, fully functional image search and retrieval system with relevance feedback capabilities, called MUSE (MUltimedia SEarch and Retrieval Using Relevance Feedback). Two different models for searching for a target image using relevance feedback have been proposed, implemented, and tested. The first model uses a color-based feature vector and employs a Bayesian learning algorithm that updates the probability of each image in the database being the target based on the user's actions. The second model uses cluster analysis techniques; a combination of color-, texture-, and edge (shape)-based features; and a novel approach to learning the user's goals and the relevance of each feature for a particular search. Both models follow a purely content-based image retrieval paradigm: the search process is based exclusively on image contents automatically extracted during the (off-line) feature extraction stage. Moreover, they minimize the number and complexity of required user actions, in contrast with the complexity of the underlying search and retrieval engine. Experimental results show that both models exhibit good performance for moderate-size, unconstrained databases and that a combination of the two outperforms either individually, which is encouraging. In the process of developing this dissertation, we also implemented and tested several combinations of image features and similarity measurements. The results of these tests, performed under the query-by-example (QBE) paradigm, served as a reference for choosing which features to use in the relevance feedback mode and confirmed the difficulty of encoding an understanding of image similarity into a combination of features and distances without human assistance.
Most of the code written during the development of this dissertation has been encapsulated into a multifunctional prototype that combines image searching (with or without an example), browsing, and viewing capabilities and serves as a framework for future research on the subject.
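The first model's Bayesian update can be sketched in its simplest form. The likelihood values in the usage are placeholders; MUSE derives them from its color-based feature model, which this omits:

```python
def bayes_update(priors, likelihoods):
    """One relevance-feedback round: update P(image is the target).

    `priors` maps image ids to current target probabilities;
    `likelihoods` maps image ids to P(observed user action | that
    image is the target). Returns the normalized posterior.
    """
    posterior = {i: priors[i] * likelihoods.get(i, 1.0) for i in priors}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}
```

Repeated rounds concentrate probability mass on images consistent with every action the user has taken so far.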
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/11954
- Subject Headings
- Information storage and retrieval systems, Image processing--Digital techniques, Feedback control systems
- Format
- Document (PDF)
- Title
- Enhancing video quality based on psychophysical studies of smooth pursuit eye movements.
- Creator
- Chilamakuri, Pavani., Florida Atlantic University, Furht, Borko, Glenn, William E., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
When motion occurs in a scene, video quality degrades due to motion smear, which results in a loss of contrast in the image. The characteristics of the human visual system during smooth pursuit eye movements differ from those when the eye fixates on an object, such as a video screen, during motion. Smooth pursuit eye movements dominate in the presence of dynamic stimuli. During smooth pursuit eye movements, the contrast sensitivity for increasing target velocities shifts toward lower spatial frequencies, and the sensitivity to low spatial frequencies during motion is higher than in the stationary case. This dissertation proposes a method to improve the perceptual quality of video using a temporal enhancement prefiltering technique based on the characteristics of smooth pursuit eye movements (SPEM). The resulting technique closely matches the characteristics of the human visual system (HVS). When motion occurs, the eye tracks the moving targets in a scene rather than fixating on any portion of the scene; hence, psychophysical studies of smooth pursuit eye movements were used as the basis for designing the temporal filters. Experimental results show that temporal enhancement improves quality by increasing the apparent sharpness of the image sequence. The dissertation also presents a study of research describing how motion affects image quality at the camera lens and the human eye, and uses that research to develop a temporal enhancement technique to improve the quality of video degraded by motion.
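A temporal enhancement prefilter in this spirit, boosting detail lost to motion smear, might look like the temporal unsharp mask below. The 3-frame window and gain are illustrative choices, not the dissertation's SPEM-derived filter design:

```python
import numpy as np

def temporal_sharpen(frames, gain=0.5):
    """Temporal unsharp-masking sketch over a (t, h, w) frame stack.

    Each interior frame gets back a scaled copy of the difference
    between itself and a 3-frame temporal average, boosting the
    high temporal frequencies attenuated by motion smear.
    """
    f = np.asarray(frames, dtype=float)
    out = f.copy()
    for t in range(1, len(f) - 1):
        blur = (f[t - 1] + f[t] + f[t + 1]) / 3.0
        out[t] = f[t] + gain * (f[t] - blur)
    return out
```

Static content passes through unchanged, since the frame and its temporal average coincide.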
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fau/fd/FADT12035
- Subject Headings
- Eye--Movements, Digital video, Visual perception, Video compression
- Format
- Document (PDF)
- Title
- Adaptive two-level watermarking for binary document images.
- Creator
- Muharemagic, Edin., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In our society, large volumes of documents are exchanged on a daily basis. Since documents can easily be scanned, modified and reproduced without any loss in quality, unauthorized use and modification of documents is of major concern. An authentication watermark embedded into a document as an invisible, fragile mark can be used to detect illegal document modification. However, the authentication watermark can only be used to determine whether documents have been tampered with, and additional protection may be needed to prevent unauthorized use and distribution of those documents. A solution to this problem is a two-level, multipurpose watermark. The first-level watermark is an authentication mark used to detect document tampering, while the second-level watermark is a robust mark that identifies the legitimate owner and/or user of a specific document. This dissertation introduces a new adaptive two-level multipurpose watermarking scheme suitable for binary document images, such as scanned text, figures, engineering and road maps, architectural drawings, music scores, and handwritten text and sketches. This watermarking scheme uses uniform quantization and overlapped embedding to add two watermarks, one robust and the other fragile, into a binary document image. The two embedded watermarks serve different purposes: the robust watermark carries document owner or document user identification, and the fragile watermark confirms document authenticity and helps detect document tampering. Both watermarks can be extracted without accessing the original document image. The proposed watermarking scheme adaptively selects an image partitioning block size to optimize the embedding capacity, the image permutation key to minimize watermark detection error, and the size of the local neighborhood in which modification candidate pixels are scored to minimize visible distortion of watermarked documents.
Modification candidate pixels are scored using a novel, objective metric called the Structural Neighborhood Distortion Measure (SNDM). Experimental results confirm that this watermarking scheme, which embeds watermarks by modifying image pixels based on their SNDM scores, creates less visible document distortion than watermarking schemes that base watermark embedding on any other published pixel scoring method. Document tampering is detected successfully, and the robust watermark can be detected even after document tampering renders the fragile watermark undetectable.
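To make the authentication level of such a scheme concrete, here is a toy fragile mark for a flattened binary image: each block's pixel parity is forced to match a bit derived (via a hash) from the block's other pixels, so a later flip of the parity pixel is detectable. The block size, hash choice, and function names are hypothetical; the dissertation's uniform quantization, overlapped embedding, and SNDM-guided pixel selection are not modeled here.

```python
import hashlib

def embed_fragile_mark(bits, block_size=8):
    """Embed a toy fragile watermark into a list of 0/1 pixels: adjust
    the last pixel of each block so the block parity matches one bit of
    a hash of the block's remaining pixels."""
    marked = list(bits)
    for start in range(0, len(marked) - block_size + 1, block_size):
        block = marked[start:start + block_size]
        # Target parity derived from the fixed (carrier) part of the block.
        target = hashlib.sha256(bytes(block[:-1])).digest()[0] & 1
        if sum(block) % 2 != target:
            marked[start + block_size - 1] ^= 1  # flip the parity pixel
    return marked

def verify_fragile_mark(bits, block_size=8):
    """Return True if every block's parity matches its derived target."""
    for start in range(0, len(bits) - block_size + 1, block_size):
        block = bits[start:start + block_size]
        target = hashlib.sha256(bytes(block[:-1])).digest()[0] & 1
        if sum(block) % 2 != target:
            return False
    return True
```

A real scheme would, as the abstract describes, also carry a robust ownership mark and choose which pixels to flip by their visual-distortion score rather than by position.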
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fau/fd/FADT12113
- Subject Headings
- Data encryption (Computer science), Computer security, Digital watermarking, Data protection, Image processing--Digital techniques, Watermarks
- Format
- Document (PDF)
- Title
- Innovative video error resilient techniques for MBMS systems.
- Creator
- Sanigepalli, Praveen., Florida Atlantic University, Kalva, Hari, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In the current communications age, the capabilities of mobile devices are increasing: mobiles can communicate at data rates of hundreds of Mbps on 4G networks, enabling playback of rich multimedia content comparable to that of Internet and television networks. However, mobile networks need to be spectrum-efficient to be affordable to users. Multimedia Broadcast/Multicast Service (MBMS) is a wireless broadcasting standard being drafted to enable multimedia broadcast while remaining spectrum-efficient. Hybrid video coding techniques facilitate low-bitrate transmission but result in dependencies across frames. Because the mobile environment is error prone, no error-correction technique can guarantee error-free transmission, and such errors propagate, resulting in quality degradation. With numerous mobiles sharing the broadcast session, any error-resilient scheme should account for heterogeneous device capabilities and channel conditions. Current research on wireless video broadcasting focuses on network-based techniques such as FEC and retransmissions, which add bandwidth overhead. There is a need for innovative error-resilient techniques that make the video codec robust with minimal bandwidth overhead. This dissertation introduces novel techniques in the area of MBMS systems. First, robust video structures are proposed in the Periodic Intra Frame based Prediction (PIFBP) and Periodic Anchor Frame based Prediction (PAFBP) schemes, in which the intra frames or anchor frames serve as reference frames for prediction during the GOP period. The intermediate frames are independent of one another, so errors in such frames do not propagate, resulting in error resilience. In prior art, the intra block rate is adapted to the channel characteristics for error resilience; this scheme has been generalized to multicasting to address a group of users sharing the same session.
The average packet loss is used to determine the intra block rate, which improves the performance of the overall group and strives for consistent performance. The inherent diversity in the broadcasting session can also be used to advantage: mobile devices capable of accessing a WLAN during the broadcast form an ad hoc network on the WLAN to recover lost packets. New error recovery schemes are proposed, and a comparison of their performance is presented.
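The group-wide adaptation step can be sketched as follows: the average of the receivers' reported loss rates is mapped to an intra-coded macroblock fraction. The linear mapping and the bounds below are illustrative assumptions, not values from the dissertation.

```python
def intra_block_rate(loss_rates, min_rate=0.05, max_rate=0.5):
    """Pick an intra-coded macroblock fraction for a multicast group
    from the receivers' packet-loss rates (each in 0..1). Per the
    abstract, the *average* loss drives the rate, so the refresh cost
    tracks the group as a whole rather than its worst member."""
    if not loss_rates:
        return min_rate
    avg_loss = sum(loss_rates) / len(loss_rates)
    # Map average loss linearly into [min_rate, max_rate], clamped.
    return min_rate + (max_rate - min_rate) * min(avg_loss, 1.0)
```

Driving the rate by the mean, rather than the maximum, avoids spending bandwidth to satisfy a single outlier receiver at everyone's expense.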
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/12187
- Subject Headings
- Wireless communication systems, Signal processing, Digital video, Multimedia systems, Digital communications, Data transmission systems
- Format
- Document (PDF)
- Title
- Neural network approach to Bayesian background modeling for video object segmentation.
- Creator
- Culibrk, Dubravko., Florida Atlantic University, Furht, Borko, Marques, Oge, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Object segmentation in a video sequence is an essential task in video processing and forms the foundation of content analysis, scene understanding, object-based video encoding (e.g. MPEG-4), various surveillance applications, and 2D-to-pseudo-3D conversion. The popularization and availability of video sequences with increased spatial resolution require the development of new, more efficient algorithms for object detection and segmentation. This dissertation discusses a novel neural-network-based approach to background modeling for motion-based object segmentation in video sequences. In particular, we show how the Probabilistic Neural Network (PNN) architecture can be extended to form an unsupervised Bayesian classifier for the domain of video object segmentation. The constructed Background Modeling Neural Network (BNN) is capable of efficiently handling segmentation in natural-scene sequences with complex background motion and changes in illumination. The weights of the proposed neural network serve as an exclusive model of the background and are temporally updated to reflect the observed background statistics. The proposed approach is designed to enable an efficient, highly parallelized hardware implementation; such a system would be able to achieve real-time segmentation of high-resolution image sequences.
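The idea of a background model whose parameters are temporally updated can be illustrated with a deliberately simple per-pixel running average; this is a stand-in for intuition only, not the paper's PNN-based Bayesian classifier, and `alpha` and `threshold` are invented parameters.

```python
import numpy as np

class RunningBackgroundModel:
    """Per-pixel background estimate updated with an exponential
    forgetting factor, in the spirit of temporally updated background
    statistics."""

    def __init__(self, first_frame, alpha=0.05, threshold=25.0):
        self.mean = np.asarray(first_frame, dtype=np.float64)
        self.alpha = alpha
        self.threshold = threshold

    def segment(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = np.asarray(frame, dtype=np.float64)
        # Pixels far from the background estimate are foreground.
        foreground = np.abs(frame - self.mean) > self.threshold
        # Update only where the scene looks like background, so moving
        # objects do not pollute the background estimate.
        self.mean = np.where(
            foreground, self.mean,
            (1 - self.alpha) * self.mean + self.alpha * frame)
        return foreground
```

The update rule plays the role of BNN's weight adaptation: slow drift (illumination change) is absorbed into the model, while fast change (motion) is classified as foreground.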
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fcla/dt/12214
- Subject Headings
- Neural networks (Computer science), Application software--Development, Data structures (Computer science), Bayesian field theory
- Format
- Document (PDF)
- Title
- Permutation-based transformations for digital multimedia encryption and steganography.
- Creator
- Socek, Daniel, Florida Atlantic University, Furht, Borko, Magliveras, Spyros S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The aim of this work is to explore the use of permutation-based transformations to achieve compression, encryption and steganography in the domain of digital video. The main contribution of this dissertation is a novel type of digital video encryption that has several advantages over other currently available digital video encryption methods. An extended classification of digital video encryption algorithms is presented in order to clarify these advantages; the classification itself represents original work, since, to date, no such comprehensive classification has appeared in the scientific literature. Both the security and performance aspects of the proposed method are thoroughly analyzed to provide evidence of high security and performance efficiency. Since the basic model is feasible only for a certain class of video sequences and video codecs, several extensions providing broader applicability are described along with the basic algorithm. An additional significant contribution is a novel type of digital video steganography based on disguising a given video with another video. Experimental results are presented for a number of video sequences to demonstrate the performance of the proposed methods.
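The core operation behind permutation-based encryption can be sketched as a keyed scrambling of pixel positions within a frame. This illustrates the mechanism only: a real system would derive the permutation from a cryptographic primitive rather than Python's `random`, and the dissertation's scheme integrates with video coding in ways not shown here.

```python
import random

def keyed_permutation(n, key):
    """Derive a pseudo-random permutation of range(n) from a key.
    (Illustrative only; not cryptographically secure.)"""
    rng = random.Random(key)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def permute_frame(pixels, key):
    """Scramble pixel positions with the keyed permutation."""
    perm = keyed_permutation(len(pixels), key)
    return [pixels[p] for p in perm]

def unpermute_frame(scrambled, key):
    """Invert the scrambling given the same key."""
    perm = keyed_permutation(len(scrambled), key)
    out = [None] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out
```

Because the transformation only reorders data, it preserves the symbol statistics that compression relies on, which is one reason permutations pair naturally with compression and steganography.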
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fcla/dt/12225
- Subject Headings
- Image processing--Security measures, Data encryption (Computer science), Computer security, Multimedia systems--Security measures
- Format
- Document (PDF)
- Title
- A feedback-based multimedia synchronization technique for distributed systems.
- Creator
- Ehley, Lynnae Anne., Florida Atlantic University, Ilyas, Mohammad, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Multimedia applications incorporate the use of more than one type of media, i.e., voice, video, data, text and image. With the advances in high-speed communication, the ability to transmit multimedia is becoming widely available. One means of transport for multimedia in distributed networks is the Broadband Integrated Services Digital Network (B-ISDN). B-ISDN supports the transport of large volumes of data with a low error rate, and it handles the burstiness of multimedia traffic by providing dynamic bandwidth allocation. When multimedia is requested for transport in a distributed network, a different Quality of Service (QOS) may be required for each type of media; for example, video can withstand more errors than voice. To provide the most efficient form of transfer, media with different QOS requirements are sent over different channels. When different channels are used for transport, jitter can impose skews on the temporal relations between the media. Jitter is caused by errors and buffering delays. Since B-ISDN uses Asynchronous Transfer Mode (ATM) as its transfer mode, the jitter incurred can be assumed to be bounded if traffic-management principles such as admission control and resource reservation are employed. Another network for which bounded buffering can be assumed is the 16 Mbps token-ring LAN when the LAN Server (LS) Ultimedia(TM) software is applied over the OS/2 LAN Server(TM) (using OS/2(TM)): LS Ultimedia(TM) reserves critical resources such as disk, server processor, and network resources for multimedia use, and it also enforces admission control(1). Since jitter is bounded on the chosen networks, buffers can be used to realign the temporal relations in the media. This dissertation presents a solution to this problem by proposing a Feedback-based Multimedia Synchronization Technique (FMST) to correct and compensate for the jitter incurred when media are received over high-speed communication channels and played back in real time.
FMST has been implemented at the session layer for stream playback; a personal computer was used to perform synchronized playback from a 16 Mbps token-ring and from a simulated B-ISDN network.
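A feedback synchronization step of this general shape can be sketched as follows: measured skews of slave streams relative to a master stream are turned into playout-delay corrections, with a tolerance zone to avoid reacting to bounded jitter and a gain below 1 so the loop converges without oscillating. The tolerance and gain values are illustrative assumptions, not FMST's actual parameters.

```python
def playout_adjustment(skews_ms, tolerance_ms=80.0, gain=0.5):
    """One feedback iteration of inter-stream synchronization: for each
    slave stream's measured skew (ms, positive = stream is ahead of the
    master), return a playout-delay correction. Skews inside the
    tolerance are left alone; larger skews are partially corrected."""
    corrections = []
    for skew in skews_ms:
        if abs(skew) <= tolerance_ms:
            corrections.append(0.0)     # within bounded-jitter tolerance
        else:
            corrections.append(-gain * skew)  # pull the stream back
    return corrections
```

Repeating this at the session layer keeps the temporal relations between media aligned even as network jitter accumulates.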
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12382
- Subject Headings
- Multimedia systems, Broadband communication systems, Data transmission systems, Integrated services digital networks, Electronic data processing--Distributed processing
- Format
- Document (PDF)
- Title
- XYZ Video Compression: An algorithm for real-time compression of motion video based upon the three-dimensional discrete cosine transform.
- Creator
- Westwater, Raymond John., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
XYZ Video Compression denotes a video compression algorithm that operates in three dimensions, without the overhead of motion estimation. The smaller overhead of this algorithm compared to MPEG and other "standards-based" compression algorithms using motion estimation suggests its suitability for real-time applications. The demonstrated results of compressing standard motion video benchmarks suggest that XYZ Video Compression is not only a faster algorithm but also develops superior compression ratios. The algorithm is based upon the three-dimensional Discrete Cosine Transform (DCT). Pixels are organized as 8 x 8 x 8 cubes by taking 8 x 8 squares out of 8 consecutive frames. A fast three-dimensional transform is applied to each cube, generating 512 DCT coefficients. The energy-packing property of the DCT concentrates the energy in the cube into few coefficients. The DCT coefficients are quantized to maximize the energy concentration at the expense of introducing a user-determined level of error. A method of adaptive quantization that generates optimal quantizers based upon statistics gathered for the 8 consecutive frames is described. The sensitivity of the human eye to the various DCT coefficients is used to modify the quantizers, creating a "visually equivalent" cube with still greater energy concentration; experiments are described that justify the choice of Human Visual System factors folded into the quantization step. The quantized coefficients are then encoded into a data stream using a method of entropy coding based upon the statistics of the quantized coefficients. The bitstream generated by entropy coding represents the compressed data of the 8 motion video frames, typically compressed at 50:1 with 5% error.
The decoding process is the reverse of the encoding process: the bitstream is decoded to generate blocks of quantized DCT coefficients, the DCT coefficients are dequantized, and the Inverse Discrete Cosine Transform is performed on the cube to recover pixel data suitable for display. The elegance of this technique lies in its simplicity, which lends itself to inexpensive implementation of both encoder and decoder. Finally, real-time implementation of the XYZ Compressor/Decompressor is discussed. Experiments are run to determine the effectiveness of the implementation.
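The transform at the heart of the codec, a separable 3-D DCT over an 8 x 8 x 8 cube and its inverse, can be sketched directly with an orthonormal DCT-II matrix applied along each axis. This shows only the transform pair; the adaptive quantization and entropy coding stages are omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: row i, column t."""
    k = np.arange(n)
    mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    mat *= np.sqrt(2.0 / n)
    mat[0] *= np.sqrt(0.5)  # DC row scaling for orthonormality
    return mat

def dct3(cube):
    """Separable 3-D DCT of an 8x8x8 cube (8x8 blocks from 8 frames),
    yielding 512 coefficients."""
    d = dct_matrix(cube.shape[0])
    return np.einsum('it,ju,kv,tuv->ijk', d, d, d, cube)

def idct3(coeffs):
    """Inverse 3-D DCT (transpose of the orthonormal transform)."""
    d = dct_matrix(coeffs.shape[0])
    return np.einsum('ti,uj,vk,tuv->ijk', d, d, d, coeffs)
```

The energy-packing property the abstract describes is visible immediately: a temporally and spatially flat cube maps to a single nonzero DC coefficient.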
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12450
- Subject Headings
- Digital video, Data compression (Telecommunication), Image processing--Digital techniques, Coding theory
- Format
- Document (PDF)
- Title
- The interlaced pixel delta codec: For transmission of video on low bit rate communication lines.
- Creator
- Celi, Joseph, Jr., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Interlaced Pixel Delta (IPD) Video Codec is a real-time video compression and decompression engine. It is specifically designed for video phone or video conferencing applications run under very low-bandwidth networking conditions. The example network used throughout this dissertation is the Internet, where users are typically connected at transmission speeds of 33.3 K bits per second or less. To accomplish this goal, the IPD codec must achieve very high compression ratios. This feat is further complicated by the fact that the IPD codec is to be fully realized in software in order to be considered a viable solution for the average Internet user. The demonstrated test results show that the IPD codec is capable of achieving these ambitious goals. The IPD compressor operates in a pipelined manner; each stage in the compression pipeline has its own complexities and challenges, which are individually addressed in detail. The ultimate goal of the IPD compressor is to maintain a constant compression ratio sufficiently high to allow bi-directional video communication across low-bandwidth transmission lines. These compression ratios must be achieved using a software compressor and decompressor, and strict CPU utilization requirements must be met in order for the codec to operate in real time. The IPD compressor defines a unique video interlacing scheme to sample the pixels that comprise the incoming video frames; the properties of the interlacing schemes aid the video compressor in its quest for high compression ratios. Later, in the decompression stage, the IPD decompressor uses the properties of the interlacing schemes to reverse the sampling process and restore the original picture quality. The IPD compressor also employs a custom variation of the error diffusion algorithm in its color reduction phase.
A pixel delta algorithm is used to build a new frame from a previous frame. The pixel delta algorithm defines a unique bitmask representation of the pixel locations that are flagged for refresh; these locations will be used to build a subsequent frame. The bitmask representation is further compressed using a variation of the Huffman compression algorithm. The IPD compressor builds an IPD delta frame containing a header, the compressed bitmask of pixel locations flagged for change, and the actual compressed pixel intensity values used to build a new frame from a previous frame. The IPD decompressor also operates in a pipelined manner and likewise has strict CPU utilization requirements. It applies several image processing algorithms to the video output stream to enhance the visual quality of the reconstructed output video frames. Custom test programs are used to derive and validate the algorithms presented in this dissertation, and a working prototype of the complete IPD codec is presented to aid in the visual analysis of the final video picture quality.
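The delta-frame mechanism above can be sketched as follows: a bitmask flags pixels that changed beyond a threshold, and the delta frame carries only that mask plus the refreshed values. The change threshold is an illustrative assumption, and the Huffman-style compression of the bitmask, header, and interlaced sampling are omitted.

```python
def delta_bitmask(prev, curr, threshold=16):
    """Flag pixels whose intensity changed by more than `threshold`
    between two flattened frames of equal length."""
    return [1 if abs(c - p) > threshold else 0
            for p, c in zip(prev, curr)]

def build_delta_frame(prev, curr, threshold=16):
    """Encoder side: the refresh bitmask plus only the flagged pixel
    values, the payload an IPD-style delta frame would carry."""
    mask = delta_bitmask(prev, curr, threshold)
    values = [c for m, c in zip(mask, curr) if m]
    return mask, values

def apply_delta_frame(prev, mask, values):
    """Decoder side: copy the previous frame and overwrite flagged
    positions with the refreshed values."""
    out = list(prev)
    it = iter(values)
    for i, m in enumerate(mask):
        if m:
            out[i] = next(it)
    return out
```

Sub-threshold changes are deliberately dropped, which is where the scheme trades picture fidelity for the constant high compression ratio the abstract targets.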
- Date Issued
- 1998
- PURL
- http://purl.flvc.org/fcla/dt/12573
- Subject Headings
- Internet videoconferencing, Video telephone, Image transmission
- Format
- Document (PDF)