In this post we will build a ROS node on a companion computer that subscribes to data published by the flight control unit (FCU). This gives us access to the many data streams available from the flight controller as inputs to our system, which we can then use to make decisions about how the UAV should be controlled.
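To make that concrete, here is a minimal sketch of the kind of node we will end up with. It assumes MAVROS is already running as the MAVLink bridge and publishing its default topics; the node name and the particular topics are just for illustration.

```python
#!/usr/bin/env python
# Minimal companion-computer node: subscribe to FCU data bridged by MAVROS.
import rospy
from mavros_msgs.msg import State
from sensor_msgs.msg import Imu

def state_cb(msg):
    # Called whenever the FCU reports its state (flight mode, armed, link up)
    rospy.loginfo("Mode: %s, armed: %s", msg.mode, msg.armed)

def imu_cb(msg):
    # Raw IMU stream from the flight controller
    rospy.loginfo("Yaw rate: %.3f rad/s", msg.angular_velocity.z)

if __name__ == '__main__':
    rospy.init_node('fcu_listener')
    rospy.Subscriber('/mavros/state', State, state_cb)
    rospy.Subscriber('/mavros/imu/data', Imu, imu_cb)
    rospy.spin()  # hand over to ROS; callbacks fire as messages arrive
```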
This series of posts is for those who want to take more control of their UAV (or other robotic vehicle) using a companion computer connected to the flight control unit. To get started, this will typically be something like a Raspberry Pi connected to a Pixhawk FCU.
Twenty-two students from France will be spending the next two weeks building and coding autonomous drones as part of the UWE Bristol Summer School. Along with Miles Isted s’Jacob, I am delighted to be leading this activity, and have produced a short sneak-peek video of the challenge to share.
So, six teams’ drones racing autonomously on a single track? What’s not to like?
The code is based on that used for MAAXX Europe: Python with DroneKit, flying ArduCopter on a Pixhawk. The final code, however, will not appear on my GitHub until the end of the Summer School.
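For a flavour of the pattern before then, this is roughly how the DroneKit side starts up. It is a sketch only: the serial port and baud rate are assumptions and will depend on how your companion computer is wired to the Pixhawk.

```python
# Connect to ArduCopter on the Pixhawk and read back live telemetry.
# '/dev/ttyAMA0' and 921600 are placeholders for your own telemetry link.
from dronekit import connect

vehicle = connect('/dev/ttyAMA0', baud=921600, wait_ready=True)

print("Mode:     %s" % vehicle.mode.name)
print("Armed:    %s" % vehicle.armed)
print("Attitude: %s" % vehicle.attitude)  # roll/pitch/yaw straight from the FCU

vehicle.close()
```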
If you have heard of the Robot Operating System (ROS) and want to use it to monitor and control UAV flight, this post will get you started…
More specifically, this post details how to set up a Pixhawk flight controller running PX4 firmware with a Raspberry Pi 3 companion computer running ROS. This combination gives flexible control over the flight control unit, plus the ability to integrate a very wide range of features such as depth-sensing cameras and machine learning networks.
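The glue between PX4 and ROS here is MAVROS, typically brought up on the Pi with something like `roslaunch mavros px4.launch fcu_url:=/dev/ttyAMA0:921600` (the device and baud rate are assumptions that depend on your wiring). Once that is up, a couple of lines are enough to confirm the Pi can hear the flight controller:

```python
# Sanity check for the Pi-to-Pixhawk link: wait for MAVROS to report an
# FCU heartbeat. The topic and message type are MAVROS defaults.
import rospy
from mavros_msgs.msg import State

rospy.init_node('fcu_link_check')
msg = rospy.wait_for_message('/mavros/state', State, timeout=30)
print("FCU connected: %s (mode: %s)" % (msg.connected, msg.mode))
```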
This is part of a series of posts outlining the evolution of my GroundHog hexacopter into a multi-role UAV. It is based on a Pixhawk flight controller with a Jetson TX2 companion computer, and has now been fitted with an Intel RealSense D435 depth camera.
Here’s a quick technical post for anyone attempting to harness the capabilities of a RealSense D435 camera on a Jetson TX2. For me, this is about getting usable depth perception onto a UAV, but it has proved more problematic than I originally anticipated.
This post aims to provide some simple instructions that now work for me, but which took a long time to figure out!
At the time of writing, the Intel librealsense2 library does not support ARM architectures. This causes a fatal compile error when librealsense/src/image.cpp is compiled, as that file queries the system architecture.
The fix is to modify image.cpp as in my GitHub gist here; this bypasses the architecture check.
I am getting some warnings of incomplete frames, but it’s not clear whether this is down to a power issue on the powered hub or to software configuration. Despite this, the provided tools seem to work well and demonstrate some of the best depth-camera capabilities I have seen (and this is my third depth camera to date).