
Perception

The Perception team is responsible for translating the boat’s surroundings, position, and orientation into information useful for decision making. This encompasses both computer vision and sensors.

Computer Vision

Members working on computer vision spend time researching neural networks, building and training an object detection model, augmenting and annotating data, and integrating the model with our ZED 2i camera. The CV group had one main goal this semester: make our model more robust. An initial step was accumulating and annotating more data for the model. Next, the CV group transitioned from a YOLOv5 object detection model to YOLOv8, a more accurate and capable successor.
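As a rough illustration of what running the upgraded detector on live camera frames looks like, here is a minimal sketch using the Ultralytics YOLOv8 Python package together with OpenCV; the weights file name, camera index, and confidence threshold are placeholders, not our actual configuration.

# Minimal sketch: run a YOLOv8 model on live camera frames.
# "buoys.pt", the camera index, and the confidence threshold are placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("buoys.pt")      # trained buoy-detection weights (placeholder name)
cap = cv2.VideoCapture(0)     # ZED 2i exposed as a standard camera, or any test camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run inference on the current frame; returns one Results object per image.
    results = model(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()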

[Image: p1.png]

A new feature added this semester to improve the accuracy of our object detection system was persistent memory. While the boat detects buoys in real time, the model labels each frame of the video sequence independently. Because the boat is moving and the model may misidentify buoys in some frames, remembering buoys that appeared in previous frames improves the boat's knowledge of its surroundings. To implement this functionality, the team followed the logic in the flow chart below.

[Image: p2.png – persistence flow chart]
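The exact behavior is captured in the flow chart above; as a simplified sketch of the general idea, the snippet below keeps each buoy alive for a handful of frames after the detector last saw it, so a single missed or mislabeled frame does not erase it from the boat's picture of its surroundings. The matching radius and frame budget are illustrative assumptions, not our implementation.

# Simplified persistence buffer: carry buoys forward for a few frames after
# the detector last saw them. Thresholds are illustrative assumptions.
import math

class BuoyMemory:
    def __init__(self, max_missed_frames=15, match_radius_px=60):
        self.max_missed = max_missed_frames
        self.radius = match_radius_px
        self.tracks = []  # each track: {"center": (x, y), "label": str, "missed": int}

    def update(self, detections):
        """detections: list of ((x, y), label) pairs from the current frame."""
        for track in self.tracks:
            track["missed"] += 1  # assume unseen until matched below

        for center, label in detections:
            match = next(
                (t for t in self.tracks
                 if t["label"] == label and math.dist(t["center"], center) < self.radius),
                None,
            )
            if match:
                match["center"] = center
                match["missed"] = 0
            else:
                self.tracks.append({"center": center, "label": label, "missed": 0})

        # Forget buoys that have gone unseen for too long.
        self.tracks = [t for t in self.tracks if t["missed"] <= self.max_missed]
        return self.tracks

Each call to update() returns the remembered buoys, including any the detector missed in the current frame, which is what gives the boat a more stable view of its surroundings than any single frame.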

A future goal of the CV group is building a neural network from scratch to serve as our new object detection model.

Sensors

Our sensor suite includes a ZED 2i stereo camera, SparkFun MicroMod GNSS boards, tilt-compensated magnetic compasses, and temperature and leakage sensors. The sensors group works closely with members of the Controls & Microcontrollers and Electrical System groups to integrate the sensors into our microcontroller framework.
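As one small example of the kind of integration work involved, the sketch below reads position fixes from a GNSS board streaming NMEA sentences over a serial port, using the pyserial and pynmea2 packages; the port name and baud rate are assumptions, and our actual data path runs through the microcontroller framework rather than a laptop serial connection.

# Minimal sketch: read latitude/longitude from a GNSS board emitting NMEA
# sentences over serial. Port name and baud rate are assumptions.
import serial
import pynmea2

with serial.Serial("/dev/ttyUSB0", 38400, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line.startswith(("$GNGGA", "$GPGGA")):  # GGA position-fix messages
            msg = pynmea2.parse(line)
            print(f"lat={msg.latitude:.6f} lon={msg.longitude:.6f} sats={msg.num_sats}")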


In the future, we hope to integrate LiDAR into our sensor suite for more robust depth sensing and confirmation of the objects detected by CV.
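As a rough sketch of what that cross-check could look like once LiDAR is on the boat, the function below compares the bearing of a camera detection against a 2D LiDAR scan and reports whether a return exists near that bearing; the scan format, angular tolerance, and range limit are purely hypothetical.

# Hypothetical cross-check: does a LiDAR scan contain a return near the
# bearing of a camera detection? Scan format and thresholds are assumptions.
def confirm_detection(detection_bearing_deg, lidar_scan, tolerance_deg=3.0, max_range_m=40.0):
    """lidar_scan: iterable of (bearing_deg, range_m) tuples."""
    for bearing, range_m in lidar_scan:
        if abs(bearing - detection_bearing_deg) <= tolerance_deg and range_m < max_range_m:
            return True, range_m  # a nearby LiDAR return backs up the camera detection
    return False, None

# Example: a buoy detected 12 degrees to starboard with a matching return at ~18 m.
confirmed, distance = confirm_detection(12.0, [(11.5, 18.2), (40.0, 25.0)])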

[Image: p3.png]