Geo-localization Refinement of Optical Satellite Images by Embedding Synthetic Aperture Radar Data in Novel Deep Learning Frameworks

Please use this identifier to cite or link to this item:
https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-20181206863
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Prof. Dr. Peter Reinartz | ger
dc.creator | Merkle, Nina Marie | -
dc.date.accessioned | 2018-12-06T09:29:19Z | -
dc.date.available | 2018-12-06T09:29:19Z | -
dc.date.issued | 2018-12-06T09:29:21Z | -
dc.identifier.uri | https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-20181206863 | -
dc.description.abstract | Every year, the number of applications relying on information extracted from high-resolution satellite imagery increases. In particular, the combined use of different data sources is rising steadily, for example to create high-resolution maps, to detect changes over time or to conduct image classification. In order to fuse information from multiple data sources correctly, the utilized images have to be precisely geometrically registered and have to exhibit a high absolute geo-localization accuracy. Due to the image acquisition process, optical satellite images commonly achieve an absolute geo-localization accuracy of only meters or tens of meters. Images captured by the high-resolution synthetic aperture radar satellite TerraSAR-X, on the other hand, can achieve an absolute geo-localization accuracy within a few decimeters and therefore represent a reliable source for improving the absolute geo-localization accuracy of optical data. The main objective of this thesis is to address the challenge of image matching between high-resolution optical and synthetic aperture radar (SAR) satellite imagery in order to improve the absolute geo-localization accuracy of the optical images. The different imaging properties of optical and SAR data pose a substantial challenge for precise and accurate image matching, in particular for the handcrafted feature extraction stage common to traditional optical and SAR image matching methods. Therefore, a concept is required that is carefully tailored to the characteristics of optical and SAR imagery and is able to learn the identification and extraction of relevant features. Inspired by recent breakthroughs in the training of neural networks through deep learning techniques and the subsequent developments in automatic feature extraction and matching methods for single-sensor images, two novel optical and SAR image matching methods are developed.
Both methods pursue the goal of generating accurate and precise tie points by matching optical and SAR image patches. The foundation of both frameworks is a semi-automatic matching area selection method that creates an optimal initialization for the matching approaches by limiting the geometric differences between optical and SAR image pairs. The idea of the first approach is to eliminate the radiometric differences between the images through an image-to-image translation based on generative adversarial networks and to realize the subsequent image matching with traditional algorithms. The second approach is an end-to-end method in which a Siamese neural network learns, through targeted training, to automatically create tie points between image pairs. The geo-localization accuracy improvement of the optical images is ultimately achieved by adjusting the corresponding optical sensor model parameters using the generated set of tie points. The quality of the proposed methods is verified using an independent set of optical and SAR image pairs spread over Europe. The focus is set on a quantitative and qualitative evaluation of the two tie point generation methods and their ability to generate reliable and accurate tie points. The results prove the potential of the developed concepts, but also reveal weaknesses such as the limited amount of training and test data, acquired by only one combination of optical and SAR sensor systems. Overall, the tie points generated by both deep learning-based concepts enable an absolute geo-localization improvement of optical images, outperforming state-of-the-art methods. | eng
dc.rights | Namensnennung-NichtKommerziell-KeineBearbeitung 3.0 Deutschland (Attribution-NonCommercial-NoDerivs 3.0 Germany) | -
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/de/ | -
dc.subject | Remote Sensing | eng
dc.subject | Image Registration | eng
dc.subject | Image Matching | eng
dc.subject | Deep Learning | eng
dc.subject | Synthetic Aperture Radar | eng
dc.subject | Optical Satellite Images | eng
dc.subject.ddc | 004 - Informatik (Computer Science) | ger
dc.subject.ddc | 550 - Geowissenschaften (Earth Sciences) | ger
dc.subject.ddc | 510 - Mathematik (Mathematics) | ger
dc.title | Geo-localization Refinement of Optical Satellite Images by Embedding Synthetic Aperture Radar Data in Novel Deep Learning Frameworks | eng
dc.type | Dissertation oder Habilitation (Dissertation or Habilitation) [doctoralThesis] | -
thesis.location | Osnabrück | -
thesis.institution | Universität | -
thesis.type | Dissertation [thesis.doctoral] | -
thesis.date | 2018-09-14 | -
orcid.creator | https://orcid.org/0000-0003-4177-1066 | -
dc.contributor.referee | Prof. Dr. Stefan Hinz | ger
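The abstract's second, end-to-end approach has a Siamese network learn to match optical and SAR patches and emit tie points. As a rough illustration of the underlying matching step only, the sketch below substitutes the learned feature extraction with raw pixel intensities and plain normalized cross-correlation — a deliberate simplification, not the thesis method; the function names `normalized_cross_correlation` and `tie_point_offset` are illustrative.

```python
import numpy as np

def normalized_cross_correlation(template, window):
    """Slide `template` over `window` and return the NCC surface.

    Toy stand-in for the learned feature matching described in the
    abstract: a real system would correlate deep feature maps of the
    optical and SAR patches instead of raw intensities.
    """
    th, tw = template.shape
    wh, ww = window.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((wh - th + 1, ww - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = window[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = t_norm * np.sqrt((p ** 2).sum())
            # Guard against flat (zero-variance) patches.
            out[i, j] = (t * p).sum() / denom if denom > 0 else 0.0
    return out

def tie_point_offset(template, window):
    """Return the (row, col) shift that best aligns the template in
    the search window, i.e. a single candidate tie point."""
    ncc = normalized_cross_correlation(template, window)
    return np.unravel_index(np.argmax(ncc), ncc.shape)
```

In the thesis frameworks the resulting tie points are then used to adjust the optical sensor model parameters; that adjustment step is outside the scope of this sketch.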
Appears in Collections: FB06 - E-Dissertationen

Files in This Item:
File | Description | Size | Format
thesis_merkle.pdf | Präsentationsformat (presentation format) | 37,32 MB | Adobe PDF


This item is licensed under a Creative Commons License.