Seervision is on a mission to make visual storytelling effortless by automating camera operation. Its hardware-agnostic software combines image recognition, artificial intelligence, model predictive control, and high-speed dynamics to make autonomous video production a reality.
The Seervision team of engineers, programmers, and broadcasters is developing a radically new technology called “adaptive motion control,” which enables a new approach to production: intelligent robotic cameras collaborate to segment and understand each scene. The project grew out of years of research at ETH Zurich’s Automatic Control Laboratory, where Seervision analyzed the motion sequences of human-operated cameras across a variety of applications and converted them into algorithms for real-time object recognition and scene segmentation. These algorithms are continuously refined through machine learning. The result is multi-camera setups in which each robot not only performs all the tasks of traditional camera work autonomously but also exchanges information with the other robot cameras, producing shots indistinguishable from those of a team of human operators.
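Seervision has not published the details of its coordination protocol, but the core idea of cameras exchanging information to avoid duplicating shots can be loosely illustrated with a toy greedy assignment. The sketch below is hypothetical throughout: the names `RobotCamera`, `Subject`, and `assign_shots` are invented for this example, and a real system would coordinate continuously over framing, motion, and timing, not a one-shot target list.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    """A tracked person in the shared scene (hypothetical model)."""
    name: str
    x: float  # normalized horizontal position, 0..1

class RobotCamera:
    """Toy model of one autonomous camera head.

    Each camera publishes which subject it has claimed so that the
    other cameras can pick different subjects to cover.
    """
    def __init__(self, cam_id: str):
        self.cam_id = cam_id
        self.target = None

    def choose_target(self, subjects, claimed):
        # Prefer a subject no other camera has claimed yet, so the
        # ensemble covers the scene instead of duplicating shots.
        for s in subjects:
            if s.name not in claimed:
                self.target = s
                claimed.add(s.name)
                return
        # Fall back to the first subject if everything is claimed.
        self.target = subjects[0] if subjects else None

def assign_shots(cameras, subjects):
    """Greedy assignment: each camera claims an uncovered subject."""
    claimed = set()
    for cam in cameras:
        cam.choose_target(subjects, claimed)
    return {cam.cam_id: (cam.target.name if cam.target else None)
            for cam in cameras}

# With two cameras and two subjects, each camera ends up on a
# different subject:
shots = assign_shots(
    [RobotCamera("A"), RobotCamera("B")],
    [Subject("host", 0.3), Subject("guest", 0.7)],
)
# → {"A": "host", "B": "guest"}
```

The greedy pass here stands in for the information exchange described above; it shows why sharing even a single claim per camera is enough to keep a multi-camera setup from converging on the same shot.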
Seervision represents a revolutionary change in live-event, studio, and sports production. The dull, repetitive operational tasks once handled by traditional camera crews can now be executed autonomously, freeing human operators to focus on creative storytelling.