In April 2017, in response to an industry-posed challenge, Dr Adrian Clark and I developed a system for capturing video of the entire 4π steradians. The challenge was to test such a system without the operators appearing in the field of view, and to do so in a truly three-dimensional environment. Dr Clark, an experienced computer vision researcher, and I, a specialist in human-machine interaction and underwater research, proposed using computer vision processing to reconstruct a 3D model of the environment rather than simply capture video of it. Furthermore, we proposed circumventing the difficulty of making the prototype hover by building an underwater system to capture structures such as coral reefs and wrecks, using floats within it to achieve neutral buoyancy.
The rig was tested at a marine research site in Dominica in the Caribbean during July and August 2017. The 3D models reconstructed from the images captured by the rig show accuracy and clarity at both small and large scales. Work is currently under way to optimise the procedure and increase the speed of data collection, as time spent in the field is the scarcest commodity in this type of research; even so, the current speed of environment mapping is already far greater than that of any other visual method in use.
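The reconstruction described above rests on multi-view geometry: matching points seen from several camera positions and recovering their 3D locations. As a minimal, self-contained sketch of the core triangulation step (the synthetic cameras and point here are illustrative assumptions, not the actual rig's calibration or pipeline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two cameras with 3x4 matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

def project(P, X):
    """Project a 3D point into a camera's image plane."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic setup: identity intrinsics; second camera shifted 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])  # a made-up point 4 m in front
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)  # recovers the original point (up to numerical precision)
```

A full photogrammetry pipeline repeats this idea at scale: feature detection and matching across thousands of frames, camera pose estimation, triangulation of a dense point cloud, and meshing; the sketch shows only the geometric kernel.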
The research is spawning new collaborations with interested researchers, including the Coral Reef Research Unit in Essex's School of Biological Sciences, where we are now working on several new reef conservation projects.
The University of Essex has been active in underwater robotics for some years, and is keen to extend existing work on robotic fish and stereo vision systems. To this end, the faculty has funded a PhD scholarship to develop this research.