Sea navigation and operations within areas of interest have been a major focus of naval research. Documents that support sea navigation tasks, such as Raster Navigational Charts (RNCs), are critically important. An RNC is a copy of a navigational paper chart in image form and therefore contains important information such as navigational channels, water depths, and rocky areas. However, an RNC is hard for computers, and even humans, to interpret, because the overlapping layers of drawings that carry this information make it visually dense. In this paper, we introduce a reverse engineering approach that uses computer vision to extract features from the RNC image. We use optical character recognition to extract text features and template matching to extract symbolic features. With this approach, we show that RNCs become machine readable and that the extracted features can be used to draw tactical regions of interest.
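The abstract names two standard computer vision steps: optical character recognition for text and template matching for chart symbols. The sketch below illustrates those two steps only, not the authors' pipeline; it assumes OpenCV and pytesseract are available, and the file names, template, and matching threshold are hypothetical placeholders.

```python
# Minimal sketch of the two extraction steps described above (not the paper's code).
import cv2
import numpy as np
import pytesseract

chart = cv2.imread("chart.png", cv2.IMREAD_GRAYSCALE)  # hypothetical RNC image

# 1) Text features via OCR (e.g., depth soundings, place names).
#    A real chart would likely need layer separation and preprocessing first.
text = pytesseract.image_to_string(chart)

# 2) Symbolic features via template matching: slide a symbol template over the
#    chart and keep locations whose normalized correlation exceeds a threshold.
template = cv2.imread("rock_symbol.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template
scores = cv2.matchTemplate(chart, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(scores >= 0.8)  # threshold chosen for illustration only
detections = list(zip(xs.tolist(), ys.tolist()))

print(text[:200])
print(f"{len(detections)} candidate symbol locations")
```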
In this study, we approach the problem of image captioning with cycle-consistent generative adversarial networks (CycleGANs). Because CycleGANs can learn mappings between multiple domains and use duality, through a cycle consistency loss, to strengthen each individual mapping, these models show great promise for learning both image captioning and image synthesis and for building a better image captioning framework. Historically, cycle consistency loss was based on the premise that the input should undergo little to no change when mapped to another domain and then back to its original domain; however, image captioning challenges this premise because the mapping between images and captions is many-to-many in both directions. TextCycleGAN overcomes this obstacle by enforcing cycle consistency in the feature space and is thereby able to perform well on both image captioning and synthesis. We demonstrate its capability as an image captioning framework and discuss how its model architecture makes this possible.
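To make the feature-space cycle consistency idea concrete, the PyTorch sketch below compares reconstructed feature vectors rather than raw pixels or tokens. The module names, interfaces, and L1 penalty are assumptions for illustration and do not describe TextCycleGAN's actual architecture or loss weighting.

```python
# Minimal sketch of cycle consistency computed in feature space (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureCycle(nn.Module):
    def __init__(self, img_encoder, captioner, text_encoder, synthesizer):
        super().__init__()
        self.img_encoder = img_encoder    # image   -> image feature vector
        self.captioner = captioner        # image feature   -> caption feature
        self.text_encoder = text_encoder  # caption -> caption feature vector
        self.synthesizer = synthesizer    # caption feature -> image feature

    def cycle_loss(self, image, caption):
        img_feat = self.img_encoder(image)
        txt_feat = self.text_encoder(caption)
        # image feature -> caption feature -> back to image feature
        img_recon = self.synthesizer(self.captioner(img_feat))
        # caption feature -> image feature -> back to caption feature
        txt_recon = self.captioner(self.synthesizer(txt_feat))
        # Matching features instead of exact inputs tolerates many valid captions
        # per image (and many plausible images per caption).
        return F.l1_loss(img_recon, img_feat) + F.l1_loss(txt_recon, txt_feat)
```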
Generating imagery with gaming engines has become a popular way to augment, or even completely replace, real data. This is largely because gaming engines such as Unity3D and Unreal can produce novel scenes and ground-truth labels quickly and at low cost. However, there is a disparity between rendering imagery in the digital domain and testing on a deep learning task in the real domain. This gap, commonly known as domain mismatch or domain shift, renders synthetic imagery impractical and ineffective for deep learning tasks unless it is addressed. Recently, Generative Adversarial Networks (GANs) have shown success at generating novel imagery and bridging the gap between two different distributions by performing cross-domain transfer. In this research, we explore the use of state-of-the-art GANs to perform a domain transfer from a rendered synthetic domain to a real domain. We evaluate the data generated by an image-to-image translation GAN on a classification task as well as through qualitative analysis.
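The evaluation described above, scoring translated synthetic imagery with a downstream classifier, could look roughly like the sketch below. The generator, classifier, and data loader are hypothetical placeholders; this is an assumed protocol, not the authors' evaluation code.

```python
# Minimal sketch of evaluating synthetic-to-real translated imagery on a
# classification task (illustrative only).
import torch

@torch.no_grad()
def evaluate_translated(generator, classifier, synthetic_loader, device="cuda"):
    generator.eval()
    classifier.eval()
    correct, total = 0, 0
    for images, labels in synthetic_loader:      # batches of rendered synthetic images
        images, labels = images.to(device), labels.to(device)
        translated = generator(images)           # map synthetic domain -> real domain
        preds = classifier(translated).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / max(total, 1)               # classification accuracy
```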