Germany - NZ Workshop on Underwater Vision and Aquatic Applications

Local venue: Dunedin, New Zealand, with online presentations via Zoom

Online access, German time: Monday, February 3rd, 19:50 – 22:30 (CET = UTC+1:00)

Live access, Dunedin time: Tuesday, February 4th, 07:50 – 10:30 (NZDT = UTC+13:00)

This workshop will bring together researchers in the field of underwater vision and aquatic applications, with online Zoom presentations from the GEOMAR Helmholtz Centre for Ocean Research Kiel and Kiel University, Germany, and hybrid presentations in Dunedin, New Zealand.

Zoom channel: https://vuw.zoom.us/j/98589349408

The workshop is open to all interested parties. There is no need to register; just join the Zoom channel.

Programme in NZDT (CET) times:

Physical Address: Otago Business School Lecture Room 1.18, Union Street East, Dunedin

Google Map: https://maps.app.goo.gl/xJ1qc54ADQknnyEK8
Dunedin time (Kiel time) Session  
07:50 – 08:00 (19:50 – 20:00) Introduction  
08:00 – 08:25 (20:00 – 20:25) David Nakath: Inverse Physically Based Underwater Imaging VIDEO RECORDING
08:30 – 08:55 (20:30 – 20:55) Vasco Grossmann: Efficient and Accurate 3D Reconstruction of Underwater Environments VIDEO RECORDING
09:00 – 09:25 (21:00 – 21:25) Bing Xue: Artificial Intelligence for Aquaculture VIDEO RECORDING
09:30 – 09:40 (21:30 – 21:40) Tea Break  
09:40 – 10:05 (21:40 – 22:05) Rainer Kiko: AqQua - The Aquatic Life Foundation Project: Quantifying Life at Scale in a Changing World VIDEO RECORDING
10:10 – 10:35 (22:10 – 22:35) Richard Green: High-resolution (0.1mm) 3D surveying of dynamic underwater surfaces for species recognition and sizing VIDEO RECORDING
10:40 – 10:45 (22:40 – 22:45) Closing  
*To watch the recordings online, the recommended browsers are Safari, Chrome, and Microsoft Edge.

Presenters and Abstracts:

David Nakath: Inverse Physically Based Underwater Imaging

In general, underwater imagery is governed by a unique image formation model whose complexity often makes automatic processing difficult. Hence, knowing the basics is greatly beneficial for practitioners defining downstream tasks on underwater data. This talk will detail the geometric and radiometric distortions that arise when cameras operate directly in water. Geometric distortions emerge when a light ray passes interfaces between media with different optical densities, specifically air–glass–water. Radiometric distortions are caused by attenuation and scattering effects inside the medium itself. Furthermore, homogeneous illumination by the Sun, inhomogeneous artificial illumination, or a mixture of both contribute another dimension of complexity to be explored. Physically based rendering approaches are particularly well suited to tackle problems in underwater vision, due to the dualism between models originally devised in physical oceanography and the medium models nowadays typically employed in physically based raytracing. This enables us to capture underwater imagery while simultaneously measuring the optical properties of the water with an established sensor suite. We can then directly synthesize images with the same medium properties and verify our rendering systems, which lets us provide reliable synthetic image data, focused on specific problems, for training, developing, and testing algorithms. Vice versa, it is possible to infer the inherent optical properties of a water body directly from images using an analysis-by-synthesis approach. The same approach can be applied to flat ports, dome ports, and light sources. Being able to calibrate and simulate refraction, light, and the optical properties of the water directly enables a wide range of applications such as image restoration, shadow removal, and light removal directly on submerged 3D models.
In addition, medium-based approaches allow for re-immersible and re-lightable 3D reconstruction of partially transparent specimens (e.g., zooplankton) which inhabit our oceans.
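The geometric distortion mentioned in the abstract follows Snell's law at each air–glass–water interface. As a rough illustration only (not the speaker's actual pipeline), the sketch below refracts a camera ray through a flat port using the vector form of Snell's law; the refractive indices (air ≈ 1.0, glass ≈ 1.5, water ≈ 1.33) and the planar-interface assumption are textbook simplifications:

```python
import numpy as np

def refract(d, n, eta_ratio):
    """Refract unit direction d at a surface with unit normal n
    (normal points against the incident ray).
    eta_ratio = n_incident / n_transmitted.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    cos_i = -np.dot(n, d)
    sin2_t = eta_ratio**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta_ratio * d + (eta_ratio * cos_i - cos_t) * n

# Ray leaving the camera housing: air (~1.0) -> glass (~1.5) -> water (~1.33)
d = np.array([0.3, 0.0, 1.0])
d /= np.linalg.norm(d)
normal = np.array([0.0, 0.0, -1.0])   # flat port normal, facing the camera
d_glass = refract(d, normal, 1.0 / 1.5)
d_water = refract(d_glass, normal, 1.5 / 1.33)
# End to end, Snell's law holds: n_air * sin(theta_air) == n_water * sin(theta_water),
# so the ray bends toward the port normal and the effective field of view shrinks.
```

The bending toward the normal is why an underwater camera behind a flat port sees a narrower field of view than the same camera in air, and why a single-viewpoint pinhole model no longer strictly applies.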

Short Bio: Dr.-Ing. David Nakath is a computer scientist by training, a postdoc at Kiel University, and a guest scientist at GEOMAR Helmholtz Centre for Ocean Research Kiel. After finishing his PhD on active perception for spacecraft navigation and mapping at the University of Bremen, he joined GEOMAR to work on visual deep-sea underwater mapping. He is now mainly interested in computer vision problems revolving around camera-light systems deployed underwater, namely refraction, attenuation, scattering, and challenging illumination conditions. He addresses these problems predominantly with physically based methods, inverse Monte Carlo raytracing, and neural networks.

Vasco Grossmann: Efficient and Accurate 3D Reconstruction of Underwater Environments

Mapping and exploring the seafloor are critical tasks for understanding marine ecosystems, monitoring resources, and advancing oceanographic research. However, the underwater environment presents unique challenges, including limited visibility, light attenuation, dynamic scattering, and the absence of GPS, making traditional airborne or terrestrial visual mapping techniques ineffective. While methods such as sonar, remotely operated vehicles (ROVs), and towed platforms have been employed successfully, they are often cost-prohibitive, time-intensive, and difficult to scale. Autonomous underwater vehicles (AUVs) equipped with advanced visual and navigation systems offer a transformative solution, providing high-resolution 3D reconstructions of vast underwater regions in an efficient and scalable manner.

This presentation explores advanced seafloor mapping enabled by AUV-mounted imaging systems and a navigation-aided structure-from-motion (SfM) technique, developed using a pipeline from the GEOMAR Helmholtz Centre for Ocean Research Kiel. By combining locally generated 3D structures within a hierarchical SfM framework, it refines both clustering and reconstruction, leading to higher accuracy, minimized drift, and faster processing relative to traditional tools such as COLMAP. This method not only handles large-scale datasets efficiently but also preserves sub-centimeter precision in 3D reconstructions. Additionally, color restoration under diverse underwater conditions is achieved through inverse rendering, applying the Jaffe-McGlamery illumination model to produce realistic colors free of the water's imprint. The system spans the entire process, from raw image capture to the generation of high-fidelity mesh models, offering a complete solution for reconstructing underwater environments.
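Color restoration of this kind inverts an image formation model. A common reduced form of the Jaffe-McGlamery model (direct signal plus backscatter, ignoring forward scatter) is I_c = J_c·e^(−β_c·z) + B_c·(1 − e^(−β_c·z)) per color channel c. The sketch below inverts that reduced model on synthetic data; the coefficients are illustrative placeholders, not measured values, and this is not the exact implementation used in the pipeline described above:

```python
import numpy as np

# Reduced underwater image formation, per color channel c:
#   I_c = J_c * exp(-beta_c * z) + B_c * (1 - exp(-beta_c * z))
# J: unattenuated scene color, z: camera-to-scene range (m),
# beta: attenuation coefficient, B: backscatter (veiling) color.

def restore(I, z, beta, B):
    """Invert the reduced model to recover J from an observed image I.
    I: (H, W, 3) float image; z: (H, W) range map in metres;
    beta, B: per-channel (3,) coefficients (illustrative values only)."""
    T = np.exp(-beta[None, None, :] * z[:, :, None])   # transmission map
    J = (I - B[None, None, :] * (1.0 - T)) / np.clip(T, 1e-6, None)
    return np.clip(J, 0.0, 1.0)

# Round trip on synthetic data: attenuate a known J, then restore it.
rng = np.random.default_rng(0)
J_true = rng.uniform(0.2, 0.8, size=(4, 4, 3))
z = np.full((4, 4), 3.0)                # 3 m range everywhere
beta = np.array([0.45, 0.12, 0.08])     # red attenuates fastest in water
B = np.array([0.05, 0.25, 0.35])        # bluish-green veiling light
T = np.exp(-beta[None, None, :] * z[:, :, None])
I_obs = J_true * T + B[None, None, :] * (1.0 - T)
J_rec = restore(I_obs, z, beta, B)      # recovers J_true up to clipping
```

The per-channel coefficients are the reason restored imagery regains its reds: the red channel's transmission decays fastest with range, so its inversion amplifies the red signal the most. In practice β and B must be estimated from the water body itself, which is where the inverse rendering mentioned in the abstract comes in.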

Short Bio: Dr.-Ing. Vasco Grossmann earned his doctorate in the field of massively parallel hardware acceleration on FPGA clusters at Kiel University’s Technical Computer Science Group. He subsequently joined the Multimedia Information Processing Group at the Department of Computer Science at Kiel University, conducting research on underwater computer vision, specifically camera calibration and panorama stitching for underwater vehicles. Currently, he contributes to the MARISPACE-X project, where his work centers on visual underwater reconstruction, developing physically motivated methods for underwater image restoration.

Bing Xue: Artificial Intelligence for Aquaculture

Artificial Intelligence (AI) and Machine Learning (ML) have emerged as powerful tools across various scientific domains, revolutionising data analysis and decision-making processes. These technologies excel in tasks such as prediction, image classification, and pattern recognition, offering unprecedented insights and efficiency. In the realm of aquaculture and marine science, AI and ML applications are rapidly expanding, addressing critical challenges and optimising operations. This talk explores the integration of AI/ML techniques in aquaculture and marine research, focusing on key applications such as mussel farm image analysis and reconstruction, fish breeding optimisation, and health monitoring. We will discuss how these technologies enhance mussel farm management, improve yield forecasting, and contribute to sustainable aquaculture practices. By leveraging these advanced computational methods, researchers and industry professionals can drive innovation and sustainability in aquaculture and marine science.

Short Bio: Prof. Dr. Bing Xue is a full professor of Artificial Intelligence. She is Deputy Head of the School of Engineering and Computer Science (SECS) and Deputy Director of the Centre for Data Science and Artificial Intelligence (CDSAI) at Victoria University of Wellington, Wellington, New Zealand. She is a Fellow of the IEEE and a Fellow of Engineering New Zealand. Her main research interests are artificial intelligence, machine learning, data mining, computer vision, and their applications.

Rainer Kiko: AqQua - The Aquatic Life Foundation Project: Quantifying Life at Scale in a Changing World

Climate and human well-being depend to a large extent on aquatic life. In particular, organic matter formed by plankton sustainably sequesters vast amounts of carbon from the atmosphere. Climate change alters planktonic food webs, impacting the ocean’s biological carbon pump as well as marine and freshwater food resources. While the critical role of aquatic life for climate regulation and human nutrition mandates precise mapping and monitoring, the abundance of most species is still unknown and unmonitored to date; likewise, current estimates of global marine carbon export exhibit vast uncertainties, on the same order of magnitude as anthropogenic CO2 emissions. To this end, distributed pelagic imaging techniques enable the sustained observation of aquatic life and its debris, comprehensively covering the Earth’s water bodies down to the bottom of the deep sea. Each day, millions of images of plankton and associated environmental data are acquired by researchers around the globe using a variety of devices. Each individual data point provides information about biodiversity, the functioning of aquatic food webs, and the ecosystem status of the related water body, as well as its role in carbon sequestration. The Aquatic Life Foundation Project (AqQua) will combine billions of images acquired with a variety of devices across the globe and leverage large-scale HPC to train the first foundational pelagic imaging model. The model will be fine-tuned for species classification, trait extraction, and particulate organic carbon estimation in the Foundation Stage, and distilled and deployed to the global community as a resource-efficient, easily usable tool in the Readiness Stage.
The Visionary Stage aims at a compositional model that integrates orthogonal modalities such as remote sensing and environmental data to establish global maps of species biodiversity, ecosystem status, and carbon flux at unprecedented accuracy and granularity, thereby generating a fundamental understanding of marine and freshwater life in times of global change that will serve to aid decision making, in particular with respect to emerging ocean-bound carbon dioxide removal technologies.

Short Bio: Prof. Dr. Rainer Kiko (https://www.geomar.de/rkiko) holds a Heisenberg Professorship at the GEOMAR Helmholtz Centre for Ocean Research Kiel and Kiel University, Kiel, Germany. He received his Diploma in Biology in 2005 and his PhD in Biological Oceanography in 2009, both from Kiel University. Since then he has worked at GEOMAR (2009 to 2019; 2022 to present) and as a Make Our Planet Great Again laureate at the Laboratoire d’Océanographie de Villefranche (France, 2019 to 2022). He uses different imaging techniques to investigate plankton and particle distributions in the oceans and their impact on biogeochemical fluxes, work that entails the classification of many millions of plankton images.

Richard Green: High-resolution (0.1mm) 3D surveying of dynamic underwater surfaces for species recognition and sizing

We have been developing autonomous inspection of aquaculture environments with high-resolution (0.1 mm) 3D imaging, AI-driven navigation of dynamic surfaces (e.g. mussel ropes), and tool use (e.g. grabbing moving mussels). The technology has enabled real-time classification, sizing, and removal of invasive marine organisms (IMOs), with the goal of also improving yield estimates. This has been evaluated in aquaculture applications including shellfish farming (e.g. mussels), ocean-caged finfish (e.g. salmon), and seabed farming (e.g. scallops), as well as the detection and removal of IMOs (e.g. on wharf pylons and ship hulls).

Short Bio: Prof. Dr. Richard Green is with the Department of Computer Science and Software Engineering, Faculty of Engineering, University of Canterbury. He has been lecturing in computer science at the University of Canterbury (UC) since 2004, after running his own successful 50-staff software company in Sydney (sold to a multinational). With over 200 refereed publications, Richard heads the UC Computer Vision Research Lab, with an emphasis on AI for computer vision (autonomous robots, drones, underwater robots). He also co-chairs the NZ AI Researchers Association and leads the UC AI Research Cluster.

https://profiles.canterbury.ac.nz/Richard-Green

Workshop organizers:

Dr. Fanglue Zhang, Senior Lecturer in Computer Graphics, School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand. He is the host for the Julius von Haast Fellowship of the Royal Society of New Zealand.

Prof. Dr.-Ing. Reinhard Koch (em.), Department of Computer Science, Kiel University, Germany, is the recipient of the Julius von Haast Fellowship 2023-2026 of the Royal Society of New Zealand. His interests are computer vision and computer graphics, 3D scene reconstruction from images and videos, and applications in camera calibration, object tracking, and underwater imaging. He is currently working with Dr. Zhang at the Victoria University of Wellington on the JVH project Holistic Volumetric Representation for Reconstructing Immersive Videos.

Professor Brendan McCane, School of Computing, University of Otago, Dunedin, New Zealand. He is the local organizer and host of the Workshop on Underwater Vision. His research interests include computer vision, pattern recognition, machine learning, biomedical imaging, and robotics. His current research focuses on the theoretical understanding of the effectiveness of deep networks and on self-learning for robots. He also has an interest in computer graphics and participates in the computer graphics group at the University of Otago.

Acknowledgement: This workshop is supported by the Julius von Haast Fellowship JVH-VUW2301 (Holistic Volumetric Representation for Reconstructing Immersive Videos) by the Royal Society of New Zealand, 2023-2026.