Dual Camera Raft | Surface + Underwater Live Stream & Time Lapse Edge-AI Monitoring
The Dual Camera Smart Observation Raft is a self‑powered surface and subsea imaging platform that packages edge‑AI cameras, solar‑battery power and managed connectivity into a single, ready‑to‑deploy unit for high‑value sites, pilots and research projects. It delivers simultaneous surface and underwater views as either continuous live streams or configurable time‑lapse, giving operations teams a clear, always‑on window into stock behaviour and welfare, predator and wildlife presence, farm infrastructure, and surrounding environmental conditions.
Designed for remote and exposed locations, the raft combines marine‑grade construction with off‑grid solar‑battery operation and live FLOW cloud integration, so video, alerts and camera controls are available in real time from shore. The result is a practical 24/7 monitoring tool for hard‑to‑access waters: fewer diver call‑outs, fewer unnecessary site visits, and a rich video record preserved for compliance, science and operational decision‑making.
Camera systems with machine vision and edge artificial intelligence turn raw imagery from farms and coastal sites into continuous, interpretable information streams that feed directly into data‑dissemination and decision‑making tools.
Underwater and above‑water cameras are deployed on fixed moorings, farm gear and mobile platforms to capture video and still images of stock, gear and surrounding conditions. This footage records events that traditional sensors cannot fully describe, such as gear handling, stocking, cleaning, predation, fouling and environmental change. Edge artificial intelligence models running close to the camera process the imagery in near real time, extracting structured signals such as activity types, frequencies, locations and visual indicators of condition, rather than simply storing raw video.
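As a rough sketch of this edge workflow, the fragment below shows how per‑frame inference can yield compact, structured event records instead of stored video. The model is replaced by a toy brightness heuristic, and every field name, class and threshold here is illustrative, not the product's actual API or schema:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical structured signal emitted close to the camera;
# field names are illustrative, not the platform's real schema.
@dataclass
class Detection:
    timestamp: float   # UNIX time the frame was captured
    camera: str        # "surface" or "underwater"
    event_type: str    # e.g. "fouling", "predation", "gear_movement"
    confidence: float  # model confidence, 0..1

def classify_frame(frame, camera):
    """Stand-in for the edge model: a real deployment would run a
    vision network here and return zero or more detections."""
    # Toy heuristic: flag frames whose mean pixel brightness drops
    # sharply as a possible turbidity change.
    mean = sum(frame) / len(frame)
    if mean < 40:
        return [Detection(time.time(), camera, "turbidity_change", 0.8)]
    return []

def process_stream(frames, camera="underwater"):
    """Turn a sequence of frames into structured event records."""
    events = []
    for frame in frames:
        events.extend(classify_frame(frame, camera))
    return [asdict(e) for e in events]

# Example: three synthetic "frames" (flattened pixel intensities);
# only the dark middle frame triggers an event record.
events = process_stream([[120] * 16, [30] * 16, [130] * 16])
print(json.dumps(events))
```

Only these small records, not the frames themselves, need to leave the device.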
Machine vision models detect and classify events and features in the imagery, such as gear movements, fish or shellfish presence, fouling loads, turbidity changes or other anomalies. Edge artificial intelligence filters and summarises these detections on the device, sending compact event records and key metrics back to the cloud rather than high‑bandwidth video streams. These event streams are time‑stamped and geolocated, then combined with environmental sensor data to build a linked history of what happened, where, and under what conditions.
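The on‑device summarisation step can be sketched as collapsing many per‑frame detections into one small, time‑stamped and geolocated uplink message. The payload layout, site identifier and coordinates below are all assumptions for illustration:

```python
from collections import Counter

def summarise_window(events, window_start, window_end):
    """Collapse per-frame detections into one compact uplink message
    covering a time window. Field names are illustrative only."""
    in_window = [e for e in events
                 if window_start <= e["timestamp"] < window_end]
    counts = Counter(e["event_type"] for e in in_window)
    return {
        "window": [window_start, window_end],
        "site": "raft-01",              # hypothetical site identifier
        "lat": -41.27, "lon": 173.28,   # example mooring position
        "event_counts": dict(counts),   # e.g. {"fouling": 2, ...}
        "n_frames_flagged": len(in_window),
    }

# Example: four detections, one falling outside the summary window.
events = [
    {"timestamp": 10.0, "event_type": "fouling"},
    {"timestamp": 12.0, "event_type": "fouling"},
    {"timestamp": 14.0, "event_type": "gear_movement"},
    {"timestamp": 99.0, "event_type": "fouling"},
]
msg = summarise_window(events, 0.0, 60.0)
print(msg["event_counts"])  # {'fouling': 2, 'gear_movement': 1}
```

A message like this is a few hundred bytes, which is why it can travel over modest links where continuous video could not.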
In the data‑dissemination layer, camera‑derived events appear alongside water quality and operational data on interactive dashboards and maps. Users can scroll back through time and see not only how temperature or oxygen changed, but also when gear was cleaned, when cages were moved, when fouling increased or when unusual activity occurred, all anchored to imagery‑based evidence. Thresholds and rules can generate alerts when the camera system detects specific patterns, such as missed maintenance cycles, abnormal turbidity or unexpected activity at a site, prompting timely follow‑up.
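Threshold‑and‑rule alerting of this kind can be sketched as checking daily event counts against per‑rule limits. The rule names and limits below are invented for illustration and do not reflect the platform's actual configuration:

```python
# Illustrative alert rules; names and daily limits are assumptions,
# not the platform's real configuration.
RULES = [
    {"name": "high_fouling", "event_type": "fouling", "max_per_day": 5},
    {"name": "turbidity_spike", "event_type": "turbidity_change", "max_per_day": 2},
]

def evaluate_rules(daily_counts, rules=RULES):
    """Return an alert for every rule whose daily event count
    exceeds its configured limit."""
    alerts = []
    for rule in rules:
        observed = daily_counts.get(rule["event_type"], 0)
        if observed > rule["max_per_day"]:
            alerts.append({
                "rule": rule["name"],
                "observed": observed,
                "limit": rule["max_per_day"],
            })
    return alerts

# Example: fouling events exceed their limit, turbidity does not.
alerts = evaluate_rules({"fouling": 8, "turbidity_change": 1})
print([a["rule"] for a in alerts])  # ['high_fouling']
```

Because the rules operate on compact event counts rather than video, they can run in the cloud or on shore‑side dashboards with negligible cost.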
By integrating machine vision and edge artificial intelligence outputs into the same tools as conventional sensor data, decision makers get a richer picture of cause and effect on farms and in surrounding waters. Farm managers can link performance and welfare outcomes to specific handling and maintenance practices, refine standard operating procedures and demonstrate compliance with environmental and operational expectations. Over time, the combined camera and sensor record forms an auditable history that supports traceability, reporting and continuous improvement across operations.