I’ve had some more time on my hands recently, so I made my drone follow my face around the house: a CNN detects faces, a tracker takes over when the CNN fails, and PID controllers turn the face’s position into movement commands. Along the way I ran into a ton of problems: getting enough frames per second to control the drone in real time while recognizing faces, stabilizing the drone, and estimating distance with a single camera. For a lot of folks here with more experience these questions are probably easy, but they took me a bit of work, and I put together a video showing the problems and how I got around them. If you’re curious, here’s the video: https://www.youtube.com/watch?v=doKjqw0vSLg
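For anyone curious about the PID part, here’s a rough sketch of the idea: the error is the offset between the face’s center and the frame’s center, and the output becomes a steering command. The gains and names here are illustrative, not lifted from my code — check the repo for the real thing.

```python
class PID:
    """Minimal PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # No derivative term on the very first sample (no previous error yet)
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: face center at x=400 in a 640px-wide frame -> error of +80px,
# updated at roughly 30 frames per second
pid = PID(kp=0.5, ki=0.0, kd=0.1)
command = pid.update(error=400 - 320, dt=1 / 30)
```

In practice you’d run one of these per axis (yaw, up/down, forward/back) and clamp the output to whatever command range the drone accepts.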
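On estimating distance with one camera: the trick I leaned on is the pinhole-camera relationship — if you assume a typical real-world face width and calibrate the focal length once, pixel width gives you distance. The numbers below (16 cm face, one calibration shot at 0.5 m) are illustrative assumptions, not measurements from my setup.

```python
def estimate_distance(focal_length_px, real_width_m, pixel_width):
    """Pinhole model: distance = focal_length * real_width / pixel_width."""
    return focal_length_px * real_width_m / pixel_width

# One-time calibration: measure the face's pixel width at a known distance,
# then solve the same equation for focal length.
focal = 200 * 0.5 / 0.16        # face 200 px wide at 0.5 m -> 625 px focal length

# Later, a 100 px wide detection means the face is twice as far away
d = estimate_distance(focal, 0.16, 100)   # -> 1.0 m
```

It’s noisy (faces vary in size, and detections jitter), so smoothing the estimate before feeding it to the forward/back PID helps a lot.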
Here’s the GitHub repo; there’s more documentation on the project in the README: https://github.com/MZandtheRaspberryPi/im_practical_programming/tree/master/all_seeing_drone
If you have thoughts or feedback, I’d love to hear it.
For my next project I’d like to do some localization and mapping, and maybe obstacle avoidance, so if you have ideas for platforms (i.e., cool robots, maybe quadrupeds) I could use as a base to explore that, let me know too!