Current Search: Raviv, Daniel
- Title
- Campus 2020: pedestrian and driver based GPS navigation for the Florida Atlantic University campuses.
- Creator
- Jones, Brandon, Cardei, Mihaela, Raviv, Daniel, Graduate College
- Date Issued
- 2013-04-12
- PURL
- http://purl.flvc.org/fcla/dt/3361315
- Subject Headings
- Global Positioning System, Florida Atlantic University, GPS (Navigational system)
- Format
- Document (PDF)
- Title
- An active-vision-based method for autonomous navigation.
- Creator
- Ergen, Erkut Erhan., Florida Atlantic University, Raviv, Daniel
- Abstract/Description
- This research explores the existing active-vision-based algorithms employed in today's autonomous navigation systems. Some of the popular range-finding algorithms are introduced and presented with examples. In light of the existing methods, an active-vision-based method, which extracts visual cues from a sequence of 2D images, is proposed for autonomous navigation. The proposed algorithm merges the method titled 'Visual Threat Cues (VTCs) for Autonomous Navigation' developed by Kundur (1) with structured-light-based methods. By combining these methods, a more practical and simpler method for indoor autonomous navigation tasks is developed. A textured pattern, projected onto the object surface by a slide projector, is used as the structured-light source, and the proposed approach is independent of the textured pattern used. Several experiments were performed with the autonomous robot LOOMY to test the proposed algorithm, and the results are very promising.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15425
- Subject Headings
- Autonomous robots, Automotive sensors
- Format
- Document (PDF)
- Title
- RF-based location system for communicating and monitoring vehicles in a multivehicle network.
- Creator
- Cortes, Luis Fernando, Raviv, Daniel, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This document reports on a hands-on project aimed at learning and experiencing the concept of system-of-systems. The motivation behind this project is to study and implement the system-of-systems concept in the creation of an RF-based communication and control complex system. The goal of this project is to develop a multi-level, integrated, and complete system in which the vehicles that belong to the same network can become aware of their location, communicate with nearby vehicles (sometimes with no visible line of sight), be notified of the presence of different objects located in their immediate vicinity (obstacles, such as abandoned vehicles), and generate a two-dimensional representation of the vehicles' locations for a remote user. In addition, this system will be able to transmit messages back from the remote user to a specific local vehicle or to all of them. The end result is a demonstration of a complex, functional, and robust system built and tested for other projects to use and learn from.
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00004437
- Subject Headings
- Radio frequency identification system, System analysis, System design, Systems engineering -- Technological innovations
- Format
- Document (PDF)
- Title
- Autonomous landing and road following using two-dimensional visual cues.
- Creator
- Yakali, Huseyin Hakan., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This dissertation deals with vision-based perception-action closed-loop control systems based on 2-D visual cues. These visual cues are used to calculate the relevant control signals required for autonomous landing and road following. For the landing task, it is shown that nine 2-D visual cues can be extracted from a single image of the runway; seven of these cues can be used to accomplish the parallel-flight and glideslope-tracking phases of landing. For the road-following task, three different algorithms based on two different 2-D visual cues are developed. One of the road-following algorithms can be used to generate steering and velocity commands for the vehicle. Glideslope tracking has been implemented in real time on a six-degree-of-freedom flight simulator, and it has been shown that the relevant information computed from 2-D visual cues is robust and reliable for the landing tasks. The road-following algorithms were tested successfully at up to 50 km/h on a US Army High Mobility Multipurpose Wheeled Vehicle (HMMWV) equipped with a vision system and on a Denning mobile robot. The algorithms have also been tested successfully using PC-based software simulation programs.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12365
- Subject Headings
- Visual perception, Landing aids (Aeronautics)
- Format
- Document (PDF)
- Title
- Visual threat cues for autonomous navigation.
- Creator
- Kundur, Sridhar Reddy, Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This dissertation deals with novel vision-based motion cues called the Visual Threat Cues (VTCs), suitable for autonomous navigation tasks such as collision avoidance and maintenance of clearance. The VTCs are time-based and provide a measure of the relative change in range as well as clearance between a 3D surface and a moving observer. They are independent of the 3D environment around the observer and need almost no a-priori knowledge about it. For each VTC presented in this dissertation there is a corresponding visual field associated with it. Each visual field constitutes a family of imaginary 3D surfaces attached to the moving observer, and all the points that lie on a particular imaginary 3D surface produce the same value of the VTC. These visual fields can be used to demarcate the space around the moving observer into safe and danger zones of varying degree. Several approaches to extract the VTCs from a sequence of monocular images are suggested. A practical method to extract the VTCs from a sequence of images of 3D textured surfaces, obtained by a visually fixating, fixed-focus moving camera, is also presented. This approach is based on the extraction of a global image dissimilarity measure called the Image Quality Measure (IQM), which is computed directly from the raw gray-level data of the images. Based on the relative variations of the measured IQM, the VTCs are extracted. This practical approach needs no 3D reconstruction, depth information, optical flow, or feature tracking. The algorithm was tested on several indoor as well as outdoor real image sequences. Two vision-based closed-loop control schemes for autonomous navigation tasks were implemented in a-priori unknown textured environments using one of the VTCs as the relevant sensory feedback information. They are based on a set of IF-THEN fuzzy rules and need almost no a-priori information about the vehicle dynamics, speed, direction of motion, etc. They were implemented in real time using a camera mounted on a six-degree-of-freedom flight simulator.
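The abstract does not give the IQM's formula; as a hedged illustration of the pipeline it describes (a global dissimilarity measure computed from raw gray levels, whose relative temporal variation yields a VTC), the sketch below substitutes a hypothetical stand-in measure, mean absolute gray-level difference. The function names and the measure itself are assumptions, not the dissertation's actual method.

```python
def image_dissimilarity(img_a, img_b):
    # Hypothetical stand-in for the IQM: mean absolute gray-level
    # difference between two equal-size images (lists of pixel rows).
    total = count = 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def relative_variation(m_prev, m_curr, dt):
    # The abstract states that the VTCs are derived from the *relative*
    # variation of the measured IQM over time, i.e. (dM/dt) / M.
    return (m_curr - m_prev) / (dt * m_curr)
```

No 3D reconstruction, depth, optical flow, or feature tracking appears anywhere in this pipeline, which is the point the abstract emphasizes.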
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12476
- Subject Headings
- Computer vision, Robot vision, Visual perception
- Format
- Document (PDF)
- Title
- Visual cues in active monocular vision for autonomous navigation.
- Creator
- Yang, Lingdi., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this dissertation, visual cues using an active monocular camera for autonomous vehicle navigation are investigated. A number of visual cues suited to this objective are proposed, and effective methods to extract them are developed. Unique features of these visual cues include: (1) there is no need to reconstruct the 3D scene; (2) they utilize short image sequences taken by a monocular camera; and (3) they operate on local image brightness information. Because of these features, the algorithms developed are computationally efficient, and simulation and experimental studies confirm their efficacy. The major contribution of this dissertation is the extraction of visual information suitable for autonomous navigation from an active monocular camera, without 3D reconstruction, using local image information. The first visual cue is related to camera focusing parameters. An objective function relating focusing parameters to local image brightness is proposed, and a theoretical development shows that by maximizing this objective function one can successfully focus the camera by choosing the focusing parameters. As a result, the dense distance map between a camera and the scene in front of it can be estimated without using the Gaussian spread function. The second visual cue, the clearance invariant (first proposed by Raviv (97)), is extended here to include arbitrary translational motion of a camera. It is shown that the angle between the optical axis and the direction of motion of a camera can be estimated by minimizing the relevant estimated error residual; this method needs only one image projection from a 3D surface point at an arbitrary time instant. The third issue refers to extracting the looming and the magnitude of rotation using a new visual cue designated the rotation invariant under camera fixation. An algorithm to extract the looming is proposed using the image information available from only one 3D surface point at an arbitrary time instant. A further algorithm is proposed to estimate the magnitude of the rotational velocity of the camera using the image projections of only two 3D surface points measured over two time instants. Finally, a method is presented to robustly extract the focus of expansion without using image brightness derivatives; it decomposes an image projection trajectory into two independent linear models and applies Kalman filters to estimate the focus of expansion.
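The dissertation's focus-of-expansion (FOE) method uses linear trajectory models and Kalman filters, which this abstract does not detail. Purely as an illustration of what the FOE is (the image point at which all translational flow vectors intersect), the sketch below solves that intersection in least squares; it is a stand-in for the concept, not the author's algorithm, and all names are hypothetical.

```python
def focus_of_expansion(points, flows):
    # Each image point p with translational flow d lies on a line through
    # the FOE. With n = (-dy, dx) normal to d, the constraint is
    # n . (f - p) = 0; accumulate 2x2 normal equations and solve for f.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (dx, dy) in zip(points, flows):
        nx, ny = -dy, dx
        a11 += nx * nx
        a12 += nx * ny
        a22 += ny * ny
        c = nx * px + ny * py
        b1 += nx * c
        b2 += ny * c
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With noise-free flow radiating from a single point, the least-squares solution recovers that point exactly; with noisy tracks, a recursive estimator such as the Kalman filtering the abstract mentions becomes preferable.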
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12527
- Subject Headings
- Computer vision, Robot vision
- Format
- Document (PDF)
- Title
- LOOMY: A platform for vision-based autonomous driving.
- Creator
- Kelly, Thomas Joseph., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This thesis describes the conceptualization, design, and implementation of a low-cost vision-based autonomous vehicle named LOOMY. A golf cart has been outfitted with a personal computer, a fixed forward-looking camera, and the necessary actuators to facilitate driving operations. The steering, braking, and speed-control actuators are driven open-loop, with no local feedback; the only source of feedback to the system is the image sequence obtained from the camera. The images are processed, and the relevant information is extracted and applied to the navigation task. The implemented task is to follow another vehicle, tracing its actions while avoiding collisions using the visual looming cue.
- Date Issued
- 1998
- PURL
- http://purl.flvc.org/fcla/dt/15610
- Subject Headings
- Automotive sensors, Autonomous robots
- Format
- Document (PDF)
- Title
- NEW FAMILY OF DATA CENTER METRICS USING A MULTIDIMENSIONAL APPROACH FOR A HOLISTIC UNDERSTANDING.
- Creator
- Levy, Moises, Raviv, Daniel, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Data centers' mission-critical nature, significant power consumption, and the increasing reliance on them for storing digital information have created a need to monitor and manage these facilities. Metrics are a key part of this effort, raising flags that lead to optimization of resource utilization. While existing metrics have contributed to improvements in data center efficiency, they are very specific and overlook important aspects such as overall performance and the risks to which the data center is exposed. With several variables affecting performance, there is an urgent need for new and improved metrics capable of providing a holistic understanding of data center behavior. This research proposes a novel framework using a multidimensional approach for a new family of data center metrics. Performance is examined across four sub-dimensions: productivity, efficiency, sustainability, and operations. Risk associated with each of those sub-dimensions is contemplated, and external risks are introduced, namely site risk, as another dimension of the metrics. Results from metrics across all sub-dimensions can be normalized to the same scale and incorporated in one graph, which simplifies visualization and reporting. This research also explores theoretical modeling of data center components through a cyber-physical-systems lens to estimate and predict different variables, including key performance indicators. Data center simulation models are deployed in MATLAB and Simulink to assess data centers under certain a-priori known conditions. The results of the simulations, with different workloads and IT resources, show quality of service as well as power, airflow, and energy parameters. Ultimately, this research describes how key parameters associated with data center infrastructure and information technology equipment can be monitored in real time across an entire facility using low-power wireless sensors. Real-time data collection may contribute to calibrating and validating the models. The new family of data center metrics gives a more comprehensive and evidence-based view of the issues affecting data centers, highlights areas where mitigating actions can be implemented, and allows their overall behavior to be reexamined. It can help standardize a process that evolves into a best practice for evaluating data centers, comparing them to each other, and improving the grounds for decision-making.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013387
- Subject Headings
- Data centers, Metrics, Multidimensional, Cyber-physical systems, Data centers--Management
- Format
- Document (PDF)
- Title
- The visual looming navigation cue: A unified approach.
- Creator
- Joarder, Kunal., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research introduces a unified approach to visual looming. Visual looming refers to the increasing projected size of an object on a viewer's retina as the relative distance between the viewer and the object decreases. Psychophysicists and neurobiologists have studied this phenomenon by observing vision and action in unison and have reported subjects' tendency to react defensively, or to use this information in anticipatory control of the body. Since visual looming induces a sense of threat of collision, the same cue, if quantified, can be used along with visual fixation for obstacle avoidance in mobile robots. In quantitative form, visual looming is defined as the time derivative of the relative distance (range) between the observer and the object divided by the relative distance itself; it is a measurable variable. Following the paradigm of active vision, the approach in this research uses visual fixation to selectively attend to a small part of the image that is relevant to the task. Visual looming provides a time-based mapping from a set of 2-D image cues to time-based 3-D space. This research describes how visual looming, a concept related to an object in the 3-D world, can be calculated by studying the relative temporal change in four different attributes of a sequence of 2-D images: (i) image area; (ii) image brightness; (iii) texture density in the image; and (iv) image blur. From a simple closed-form expression it shows that a powerful unified approach can be adopted across these methods, and an extension of this unified approach establishes a strong relationship with the Weber-Fechner law in psychophysics. The four methods explored for the calculation of looming are simple, and the experimental results illustrate how the measured values of looming stay close to the actual values. This research also introduces an important visual invariant $\Re$ that exists in relative movements between a camera-light-source pair and a visible object. Finally, looming is used, in the sense of a threat of collision, to navigate in an unknown environment. The results show that the approach can be used in real-time obstacle avoidance with very little a-priori knowledge.
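The quantitative definition above (range rate divided by range) and the image-area route to it can be sketched directly. A minimal illustration follows, assuming a sign convention of positive looming when closing in, and a fronto-parallel surface so that projected image area scales as 1/range squared; the function names are hypothetical.

```python
import math

def looming_from_range(r_prev, r_curr, dt):
    # Looming per the abstract's definition: time derivative of range
    # divided by range (sign assumed positive when the range shrinks).
    r = 0.5 * (r_prev + r_curr)
    return -(r_curr - r_prev) / dt / r

def looming_from_area(a_prev, a_curr, dt):
    # Image-area route: if projected area A scales as 1/r^2, then
    # d(ln A)/dt = -2 * d(ln r)/dt, so looming = 0.5 * d(ln A)/dt.
    return 0.5 * (math.log(a_curr) - math.log(a_prev)) / dt
```

The same relative-change pattern applies to the abstract's other three attributes (brightness, texture density, blur), which is what makes the unified closed-form treatment possible.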
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12416
- Subject Headings
- Robots--Control systems, Robot vision, Robot camera--Calibration
- Format
- Document (PDF)
- Title
- A visual rotation invariant in fixated motion.
- Creator
- Ozery, Nissim Jossef., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This thesis studies a 2-D-based visual invariant that exists during relative motion between a camera and a 3-D object. We show that during fixation there is a measurable nonlinear function of optical flow that produces the same value for all points of a stationary environment, regardless of the 3-D shape of the environment. During fixated camera motion relative to a rigid object, e.g., a stationary environment, the projection of the fixated point remains (by definition) at the same location in the image, and all other points located on the 3-D rigid object can only rotate relative to that 3-D fixation point. This rotation rate is invariant for all points that lie in the particular environment, and it is measurable from a sequence of images. The new invariant is obtained from a set of monocular images and is expressed explicitly as a closed-form solution.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15095
- Subject Headings
- Invariants, Visual perception, Motion perception (Vision)
- Format
- Document (PDF)
- Title
- Two-dimensional feature tracking algorithm for motion analysis.
- Creator
- Krishnan, Srivatsan., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this thesis we describe a local-neighborhood, pixel-based adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images. The algorithm assumes no a priori knowledge about the image features to be tracked, or about the relative motion between the camera and the 3-D objects. The features to be tracked are selected by the algorithm, and they correspond to the peaks of a '2-D intensity correlation surface' constructed from a local neighborhood in the first image of the sequence to be analyzed. Any kind of motion, i.e., 6-DOF translation and rotation, can be tolerated, keeping in mind the pixels-per-frame motion limitations; no subpixel computations are necessary. Taking into account constraints of temporal continuity, the algorithm uses simple and efficient predictive tracking over multiple frames. Trajectories of features on multiple objects can also be computed. The algorithm tolerates a slow, continuous change in the D.C. brightness level of the feature's pixels. Another important aspect of the algorithm is the use of an adaptive feature-matching threshold that accounts for changes in the relative brightness of neighboring pixels. As applications of the feature-tracking algorithm, and to test its accuracy, we show how the algorithm has been used to extract the Focus of Expansion (FOE) and to compute time-to-contact using real image sequences of unstructured, unknown environments. In both applications, information from multiple frames is used.
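The patch-matching step at the core of such a tracker can be illustrated with a minimal, hypothetical sketch: an exhaustive sum-of-absolute-differences (SAD) search for a template patch inside a search window predicted from the previous frame. The thesis's actual correlation measure and adaptive matching threshold are not reproduced here.

```python
def best_match(frame, template, search_tl, search_br):
    # Exhaustively scan the search window (top-left inclusive, bottom-right
    # exclusive, both (row, col)) for the position where the template patch
    # differs least from the frame, by sum of absolute differences.
    th, tw = len(template), len(template[0])
    best_score, best_pos = float("inf"), None
    for y in range(search_tl[0], search_br[0] - th + 1):
        for x in range(search_tl[1], search_br[1] - tw + 1):
            sad = sum(
                abs(frame[y + i][x + j] - template[i][j])
                for i in range(th)
                for j in range(tw)
            )
            if sad < best_score:
                best_score, best_pos = sad, (y, x)
    return best_pos, best_score
```

Predictive tracking then amounts to centering the next search window on the position extrapolated from the feature's recent trajectory, which keeps the window, and hence the per-frame cost, small.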
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15030
- Subject Headings
- Algorithms, Image transmission, Motion perception (Vision), Image processing
- Format
- Document (PDF)
- Title
- People counting and density estimation using public cameras.
- Creator
- Escudero Huedo, Antonio Eliseo, Kalva, Hari, Raviv, Daniel, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- We often decide whether to go to a place depending on how crowded it is, and such decisions rest on information that is only available in real time. A system that provides users or agencies with the actual number of people in a scene over time allows them to make such decisions or obtain information about a given location. This thesis presents a low-complexity system for human counting and human detection using public cameras, which usually do not have good image quality. The use of computer-vision techniques makes it possible to build a system that gives the user an estimate of the number of people present. Different videos were studied, with different resolutions and camera positions. The best video result shows an error of 0.269%, while the worst shows 8.054%. The results show that relatively inexpensive cameras streaming video at a low bitrate can be used to develop large-scale people-counting applications.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004104
- Format
- Document (PDF)
- Title
- A simplistic approach to reactive multi-robot navigation in unknown environments.
- Creator
- MacKunis, William Thomas., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Multi-agent control is a very promising area of robotics. In applications where it is difficult or impossible for humans to intervene, the use of multi-agent, autonomous robot groups is indispensable. This thesis presents a novel approach to reactive multi-agent control that is practical and elegant in its simplicity. The basic idea is that a group of robots can cooperate to determine the shortest path through a previously unmapped environment by redundantly sharing simple data between multiple agents. The idea was implemented with two robots; in simulation, it was tested with over sixty agents. The results clearly show that the shortest path through various environments emerges as a result of this redundant sharing of information. In addition, the approach incorporates safeguarding techniques that reduce the risk to robot agents working in unknown and possibly hazardous environments. Further, its simplicity makes implementation very practical and easily expandable to reliably control a group comprising many agents.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13013
- Subject Headings
- Robots--Control systems, Intelligent control systems, Genetic algorithms, Parallel processing (Electronic computers)
- Format
- Document (PDF)
- Title
- COLLISION FREE NAVIGATION IN 3D UNSTRUCTURED ENVIRONMENTS USING VISUAL LOOMING.
- Creator
- Yepes, Juan David Arango, Raviv, Daniel, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Vision is a critical sense for many species, and the perception of motion is a fundamental aspect of it, often providing richer information than static images for understanding the environment. Motion recognition is a relatively simple computation compared to shape recognition; many creatures can discriminate moving objects quite well while having virtually no capacity for recognizing stationary objects. Traditional methods for collision-free navigation require the reconstruction of a 3D model of the environment before planning an action. These methods face numerous limitations: they are computationally expensive and struggle to scale in unstructured and dynamic environments with a multitude of moving objects. This thesis proposes a more scalable and efficient alternative that requires no 3D reconstruction. We focus on visual motion cues, specifically 'visual looming', the relative expansion of objects on an image sensor. This concept allows for the perception of collision threats and facilitates collision-free navigation in any environment, structured or unstructured, regardless of the vehicle's movement or the number of moving objects present.
- Date Issued
- 2023
- PURL
- http://purl.flvc.org/fau/fd/FA00014239
- Subject Headings
- Motion perception (Vision), Collision avoidance systems, Visual perception
- Format
- Document (PDF)