Discipline: CS

Advisor: Richard Leinecker

Most drones today require constant supervision by skilled operators using joysticks or technical commands, which limits who can use them and slows down automatable tasks. Our project addresses this by creating a drone that understands everyday human language and visual context: an operator can simply say "sweep search this area" or "model that object," and the drone will carry out the objective. It is designed for people who need drones for practical tasks, without requiring them to be drone experts.

Our architecture features a robust multi-agent framework, a hexacopter with a depth camera, and NVIDIA Jetson devices for AI edge computing. From a computer, human operators view a live video feed and issue commands through a chat window. The agentic AI then interprets the commands, plans the mission, executes maneuvers, and adapts to changing conditions in real time. The system operates on a self-contained, offline network, so all data is processed locally for maximum security and low latency.

Unlike existing drones that rely on constant manual input or rigid preprogrammed routes, our project emphasizes drone autonomy and ease of use. Potential applications include search and rescue, non-invasive environmental monitoring, construction inspection, and rapid disaster response.
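The interpret-then-plan step described above can be illustrated with a minimal sketch. The abstract does not specify the team's actual interfaces, so every name here (`MissionPlan`, `interpret_command`, the keyword matching, and the step lists) is a hypothetical simplification of how a natural-language command might be mapped to a structured mission for the drone to execute:

```python
from dataclasses import dataclass, field

@dataclass
class MissionPlan:
    """Hypothetical structured plan an agent could hand to a flight controller."""
    objective: str                      # e.g. "sweep_search" or "model_object"
    steps: list = field(default_factory=list)

def interpret_command(text: str) -> MissionPlan:
    """Map an everyday-language command to a mission plan (illustrative only;
    a real agentic system would use an LLM plus visual context, not keywords)."""
    text = text.lower()
    if "sweep" in text or "search" in text:
        return MissionPlan("sweep_search",
                           ["takeoff", "grid_sweep_area",
                            "report_findings", "return_home"])
    if "model" in text or "scan" in text:
        return MissionPlan("model_object",
                           ["takeoff", "orbit_target",
                            "capture_depth_frames", "return_home"])
    # Fall back to asking the operator rather than guessing.
    return MissionPlan("unknown", ["hover_and_request_clarification"])

plan = interpret_command("sweep search this area")
print(plan.objective)   # sweep_search
```

In a deployed system, each step would be decomposed further by the planning agent and adapted in flight as conditions change; the sketch only shows the shape of the command-to-plan translation.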

Team
Rayyan Jamil
Abraham Ng
Nathaniel D'Alfonso
Connor Hallman