Vehicle

The vehicle is a ducted-fan quadcopter built from 3-D printed components, COTS carbon fiber plates and square tubes, ducted fan motor kits, an NVIDIA Jetson TX1 central processor, and the various other subsystems and wiring needed to connect everything.

Propulsion and Lift System

The airframe has four ducted fans that are fixed to the airframe. Thrust differentials are created by changing the rotational speed of individual fans to adjust the attitude. This allows the airframe to steer, lean, and move laterally at various speeds and inclinations. Additionally, the ducted fans localize thrust, allow for a large area to interact with the dorsal strike plate, and create inherent lateral collision durability, since the propellers are physically contained within their shrouds.
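
As a simple illustration of how differential fan speeds produce attitude changes, the sketch below assumes an X-configuration with a hypothetical fan ordering and unit gains; it is not the CC3D's actual mixer.

    # Minimal thrust-differential mixer sketch for an X-configuration quad.
    # Fan ordering and sign conventions are illustrative assumptions.

    def mix(throttle, roll, pitch, yaw):
        """Map collective throttle and attitude commands to four fan commands.

        throttle is normalized to [0, 1]; roll, pitch, yaw to [-1, 1].
        Returns commands for (front_left, front_right, rear_left, rear_right).
        """
        front_left  = throttle + roll + pitch - yaw
        front_right = throttle - roll + pitch + yaw
        rear_left   = throttle + roll - pitch + yaw
        rear_right  = throttle - roll - pitch - yaw
        # Clamp to the valid command range expected by the ESCs.
        return tuple(max(0.0, min(1.0, f)) for f in
                     (front_left, front_right, rear_left, rear_right))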

Guidance, Navigation, and Control

The fundamental flight controller is a CC3D Revo, which provides attitude data, fan speed modulation, GPS (disabled), historical data and logging, and various other features. On top of this basic flight controller is a Jetson TX1 running an Ubuntu installation and a ROS environment. This allows us to do intensive localized processing within a full-featured and readily extensible framework. For example, MoveIt! can be used essentially plug-and-play for path planning in a supplied 3D environment. Additionally, we were quickly able to integrate DSO SLAM for a bootstrapped representation of the environment shortly after takeoff. This, combined with heavily developed simulation features, gives us a pre-existing messaging framework for decision-level probabilistic heuristic command and control.
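
As a rough sketch of how MoveIt! can be invoked from the ROS environment, the following assumes a hypothetical planning-group name ("quad_base") and goal; it is not our actual planner configuration.

    # Minimal sketch of asking MoveIt! for a collision-free path from ROS.
    # The planning-group name "quad_base" is a placeholder assumption.
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    def plan_to(x, y, z):
        """Request a collision-free path to an (x, y, z) goal in the arena frame."""
        group = moveit_commander.MoveGroupCommander("quad_base")  # hypothetical group
        goal = Pose()
        goal.position.x, goal.position.y, goal.position.z = x, y, z
        goal.orientation.w = 1.0          # keep a level, forward-facing attitude
        group.set_pose_target(goal)
        return group.plan()               # trajectory the controller can follow

    if __name__ == "__main__":
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node("plan_example")
        trajectory = plan_to(2.0, 3.0, 1.5)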

Stability Augmentation System

The CC3D Revo has an inherent SAS built into the controller. Rather than manually sending fan speeds, attitudes, orientations, etc. to the airframe, we allow the COTS flight controller to ensure that we do not have runaway stability problems or system-induced oscillations.

Navigation

MoveIt! is a ROS plugin that allows predictive and actualized path planning based on a supplied 3D environment. It is also capable of achieving a commanded attitude en route and of dynamic reorientation. The main trade-off, as in any planning algorithm, is the distance and time into the future that the algorithm is capable of planning. We use a heuristic load-balancing approach that weighs distance to target, the number of ground robots available, and the current state transition to minimize the processing power needed at any given time and to allow rapid updating when approaching physical interaction with a single member of the herd.
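
The following is a simplified sketch of that load-balancing idea; the rates and weights are illustrative assumptions, not our tuned values.

    # Illustrative load-balancing heuristic: choose how often to replan based on
    # distance to the current target, how many ground robots remain, and whether
    # a state transition (e.g. approach -> interaction) is underway. All rates
    # and weights below are placeholder assumptions.
    import math

    def replan_rate_hz(uav_pos, target_pos, targets_remaining, in_transition,
                       base_rate=1.0, max_rate=10.0):
        """Return the planner update rate in Hz for the current situation."""
        dist = math.dist(uav_pos, target_pos)
        rate = base_rate + 5.0 / (dist + 0.5)   # replan faster as we close in
        rate += 0.1 * targets_remaining         # busier herd -> slightly faster updates
        if in_transition:                       # e.g. switching to interaction mode
            rate *= 2.0
        return min(max_rate, rate)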

Flight Termination System

The kill switch is an RF receiver that cuts electrical flow to the motors. The interrupter on the airframe receives a constant signal affirming that the handheld safety switch is still active and transmitting. Cutting power to the handheld transmitter ceases the signal, which actuates the airborne kill switch and cuts power to the motors. This will most likely damage the airframe, as it will be unable to perform a controlled descent. However, its intent is to stop a runaway situation when the vehicle is under self-guidance, or when meaningful manual control is no longer possible and the best option is simply to kill power to the airframe.
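
The heartbeat logic can be illustrated as follows; the actual interrupter is a hardware RF receiver, and the timeout value here is an assumption.

    # Conceptual illustration of the kill-switch heartbeat. The real interrupter
    # is a hardware RF device; the timeout value is a placeholder assumption.
    import time

    HEARTBEAT_TIMEOUT = 0.5   # seconds without a transmitter pulse before killing power

    last_heartbeat = time.monotonic()

    def on_heartbeat_received():
        """Called whenever a pulse from the handheld safety transmitter arrives."""
        global last_heartbeat
        last_heartbeat = time.monotonic()

    def check_kill_switch(cut_motor_power):
        """If the transmitter has gone silent, open the motor power circuit."""
        if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT:
            cut_motor_power()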

Payload

The payload is a downward-facing Leopard Imaging LI-USB 30 HD camera mounted on a dual-axis gimbal allowing 2 degrees of freedom of movement. The entire system is enclosed in a plastic dome that allows a full 360-degree view below the quadcopter (save for the minimal landing gear). Additionally, the protective dome acts as a point of physical interaction with the dorsal strike plate on the iRobots.

Sensor Suite

The only EM radiation collector is the aforementioned camera. Additionally, the flight controller contains a series of inertial attitude recorders, internal prop speed governors, and other internal performance monitors that we can access for debugging purposes.

Guidance Navigation and Control (GNC) Sensor

The internal flight controller is the aforementioned CC3D Revo, connected via USB to the Jetson TX1. The flight controller's state information, together with environmental information from the camera, is sent to the TX1, where it is integrated into the ROS environment and fed to the MoveIt! flight path planning software. After planning and path creation, that data is converted back into attitude and airspeed commands for the flight controller. The CC3D then determines the rotational speeds necessary to achieve the commanded attitude and vertical and lateral velocities, and converts that data into the electric potentials sent to the individual motors. All of this data is logged at both the flight controller and central processor level for later debugging.
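
A minimal sketch of the TX1-side conversion from a planned waypoint to the velocity and heading setpoints handed to the flight controller is shown below; the gains and interface are illustrative assumptions.

    # Sketch of converting the next planned waypoint into velocity and yaw
    # setpoints for the flight controller. Gains and limits are assumptions.
    import math

    def waypoint_to_setpoint(waypoint, position, yaw, k_vel=0.8, max_speed=1.5):
        """Proportional velocity command toward the next waypoint, plus desired yaw."""
        dx = waypoint[0] - position[0]
        dy = waypoint[1] - position[1]
        dz = waypoint[2] - position[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dist < 1e-6:
            return (0.0, 0.0, 0.0, yaw)           # already at the waypoint
        speed = min(max_speed, k_vel * dist)      # slow down near the waypoint
        desired_yaw = math.atan2(dy, dx)          # face the direction of travel
        scale = speed / dist
        return (dx * scale, dy * scale, dz * scale, desired_yaw)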

Mission Sensors

As mentioned earlier, the only mission sensor is the Leopard Imaging camera. First, the algorithm uses feature detection to differentiate the environmental and target data. The algorithm allows visual triangulation of position based on historical data and reference keyframe difference analysis of the environmental data. This allows us to have a single sensor that also serves as a reference for targeting the individual robots.
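
A minimal sketch of the feature-detection and keyframe-difference step is shown below, using OpenCV's ORB detector as a stand-in; the actual detector and matching thresholds are team-specific.

    # Sketch of the feature-detection step, using OpenCV's ORB detector as a
    # stand-in for the actual detector; parameters are placeholder assumptions.
    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def keyframe_difference(keyframe_gray, current_gray):
        """Match features between a reference keyframe and the current image."""
        kp1, des1 = orb.detectAndCompute(keyframe_gray, None)
        kp2, des2 = orb.detectAndCompute(current_gray, None)
        if des1 is None or des2 is None:
            return []
        matches = matcher.match(des1, des2)
        # Pixel displacements of matched features, used to separate static
        # environment structure from moving targets.
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]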

Target Identification

Targeting is essentially done by a separate processing analysis of the camera data. The OTS geospatial algorithm (DSO SLAM) is insufficient to determine which objects are moving targets or to provide a real-time, accurate picture of what those targets are doing. Thus, the other component of the visual data mentioned earlier is sent to a separate algorithm, authored by the SDSMT UAV team, that converts image detections into points referenced against the UAV's position and the arena's size and relative location. This produces normalized Cartesian vectors that help to create, and later rectify, the internal mission picture of probable and actual target locations.
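
A sketch of the conversion from a downward-camera detection to a normalized arena-frame vector follows; the camera intrinsics and arena size used here are assumptions for illustration only.

    # Sketch of projecting a downward-camera detection onto the ground plane and
    # normalizing it by the arena size. Intrinsics and arena size are assumptions.
    import math

    ARENA_SIZE = 20.0   # meters on a side (assumed)

    def pixel_to_arena(px, py, altitude, uav_xy, yaw,
                       fx=600.0, fy=600.0, cx=320.0, cy=240.0):
        """Project a pixel onto the ground plane and normalize by arena size."""
        # Ray offset in the camera frame (camera pointing straight down).
        gx = (px - cx) / fx * altitude
        gy = (py - cy) / fy * altitude
        # Rotate by the UAV's yaw into the arena frame and add the UAV position.
        ax = uav_xy[0] + gx * math.cos(yaw) - gy * math.sin(yaw)
        ay = uav_xy[1] + gx * math.sin(yaw) + gy * math.cos(yaw)
        # Normalized Cartesian vector in [0, 1] referenced to the arena corner.
        return (ax / ARENA_SIZE, ay / ARENA_SIZE)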

Threat Avoidance

Threat avoidance is primarily done through a combination of the methods described above. Essentially, a visual filter classifies the obstacle robots, along with anything above a certain altitude, as threats. In both cases, the current location and probable path are fed into a probabilistic decision heuristic in both the overall strategy decider and the path planning software mentioned earlier. Our overall outlook on threat avoidance can be considered a “no risk, no reward” method of path planning: once we lose the ability to visually acquire potential threats, we trust that our assumptions are good enough and proceed with the predetermined path. Our flight controller, combined with the overall durability of our airframe, should allow the vehicle to withstand and recover from minor collisions as well.
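
A minimal sketch of the threat filter is shown below; the altitude threshold and detection fields are placeholder assumptions.

    # Illustrative threat filter: an object is treated as a threat if it is a
    # detected obstacle robot or if it sits above an assumed altitude threshold.
    OBSTACLE_ALTITUDE_THRESHOLD = 0.5   # meters; placeholder value

    def classify_threats(detections):
        """Tag detections so the planner can inflate costs around likely threats."""
        threats = []
        for d in detections:
            if d["label"] == "obstacle_robot" or d["altitude"] > OBSTACLE_ALTITUDE_THRESHOLD:
                threats.append({"position": d["position"],
                                "predicted_path": d.get("predicted_path", [])})
        return threats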

Communications

The initial way of “waking” the robot is completely physical: we must plug in the battery sources and wake the primary and secondary processing units. Once online, there are two communication paths that we use to control the airframe. The first is an RF interface that allows manual control of the airframe through a handheld RF transmitting controller. The RF controller can additionally send basic commands to the airframe, resulting in actions like power on, automated takeoff, “enter mission mode”, etc. The second is TCP/IP over the integrated WiFi chip on the TX1. We will have a base laptop and router as part of our setup to receive telemetry. These two communication methods allow the robot to enter various modes of flight and competition and to transmit telemetry and performance data for monitoring.
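
A sketch of the TX1-side telemetry stream over the WiFi link follows; the port, rate, and message format are placeholder assumptions.

    # Sketch of the TX1-side telemetry sender over the TCP/IP WiFi link. The
    # port number, rate, and JSON message format are placeholder assumptions.
    import json
    import socket
    import time

    def stream_telemetry(base_station_ip, port=5760, get_status=lambda: {}):
        """Send periodic status dictionaries to the base-station laptop."""
        with socket.create_connection((base_station_ip, port)) as conn:
            while True:
                msg = json.dumps(get_status()) + "\n"
                conn.sendall(msg.encode("utf-8"))
                time.sleep(1.0)   # 1 Hz telemetry; rate is an assumption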

Power Management System

Power is supplied by a pair of lithium polymer battery packs, which can supply the necessary current to the ducted fan motors, controllers, and computers. Power distribution to the ESCs is performed by a wiring harness. Power regulation and distribution to the controllers and computer is handled by a COTS switching regulator for RC aerial vehicles.

Operations

The general operation of the airframe has multiple steps, phases of flight, and operational functionality to verify. The overall walk-around is detailed below. After airworthiness is determined and the command and control base station is set up, we are ready to engage in the competition round. Essentially, we run up the aircraft, command it to take off and center over the arena, tell it to enter mission mode, allow the 10-minute round to expire, tell it to exit mission mode and re-center over the arena, command it to land back at its takeoff point, and then power the whole system down so we can change out batteries and prepare for the next run. Additionally, at all phases of flight we will be ready to take manual control if we enter a runaway or potentially dangerous situation, to avoid loss of the aircraft or potential harm to any person. If that is not sufficient, we have a “kill switch” emergency procedure that cuts all power to the motors, sacrificing the airframe instead of causing injury.
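
The round sequencing can be summarized as a simple ordered list of phases, sketched below; this is an illustration of the sequencing only, not the actual mission-mode implementation.

    # Illustrative enumeration of the competition-round phases described above.
    from enum import Enum, auto

    class Phase(Enum):
        POWER_UP = auto()
        TAKEOFF = auto()
        CENTER_OVER_ARENA = auto()
        MISSION_MODE = auto()
        EXIT_MISSION = auto()
        LANDING = auto()
        POWER_DOWN = auto()

    # Nominal order of phases for a single ten-minute round.
    ROUND_SEQUENCE = [Phase.POWER_UP, Phase.TAKEOFF, Phase.CENTER_OVER_ARENA,
                      Phase.MISSION_MODE, Phase.EXIT_MISSION, Phase.LANDING,
                      Phase.POWER_DOWN]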

Flight Preparations

The initial flight preparation involves setting up the base station laptop and the private network on the associated router, and getting all of the command software and shells up and running. Additionally, we will check all of the connections on the airframe to ensure structural stability and electrical connectivity during flight. We will spin the rotors and look for cracks, make sure the batteries are sufficiently charged and intact, ensure the landing gear is stable, and verify the overall airworthiness of the aircraft.

Checklists

Normal Start-up

Takeoff

Enter Mission Mode

Exit Mission Mode

RTB and Landing

Normal Power-down

Emergency Shutdown

Emergency Manual Control

Man/Machine Interface

The main controller is an X9D Plus programmable RF transmitter/receiver with a status display. It contains numerous digital control switches, dials, joysticks, and trim tabs. We use only the front switches to send modal commands, and the joysticks for manual throttle and attitude control when necessary.

Risk Reduction

The main overarching paradigm for our risk reduction is making the robot as robust and durable as possible. Everything is enclosed in some shape or fashion. The fans are ducted to protect against lateral collisions. All of the computational components are on top of the airframe, ensuring that they will only be damaged by a vertical collision or by a complete loss of control and inversion of the airframe. The camera is protected by a plastic dome that also serves as a strike plate. The kill switch is another form of risk reduction, as outlined previously. The final piece, which is also the most likely to fail, is the software-controlled obstacle avoidance.

Vehicle Status

Our vehicle status is communicated locally on the airframe and remotely to the RF controller and laptop. Locally, basic health status is passed from the NVIDIA Jetson TX1 to the flight controller based on the time of the last commanded input. If ⅕ of a second passes, the flight controller assumes that something may be wrong with the control algorithm on the Jetson TX1 and attempts to halt movement and hover in a neutral attitude. If 10 seconds pass with no command from either the Jetson TX1 or the RC transmitter, the flight controller attempts to land in its current position.
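
A conceptual sketch of the two timeout behaviors follows; the logic mirrors the description above, though the actual checks run on the flight controller rather than in Python.

    # Conceptual sketch of the two command-timeout behaviors: hold a neutral
    # hover after 0.2 s without a TX1 command, and land after 10 s with no
    # command from either the TX1 or the RC transmitter.
    import time

    HOVER_TIMEOUT = 0.2    # seconds since the last TX1 command
    LAND_TIMEOUT = 10.0    # seconds since the last command from any source

    def watchdog_action(last_tx1_cmd, last_rc_cmd, now=None):
        """Return 'hover', 'land', or 'normal' based on command staleness."""
        now = time.monotonic() if now is None else now
        since_any = now - max(last_tx1_cmd, last_rc_cmd)
        if since_any > LAND_TIMEOUT:
            return "land"
        if now - last_tx1_cmd > HOVER_TIMEOUT:
            return "hover"
        return "normal"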

Shock and Vibration

All of the components have foam tape between their contact points of intersection and attachment. This should isolate vibration and provide limited shock absorption. It will also provide a damping effect against higher-frequency sympathetic vibrations. Additionally, the part tolerances, for both the COTS and 3-D printed components, are tight enough to provide sufficient rigidity to prevent bowing, movement around joints, and low-frequency oscillations across the airframe.

EMI/RFI Solutions

The only command and control is through RF, in the form of the RF controller and the kill switch. Our approach is first to use different frequencies for outright deconfliction. The second is to avoid communication as much as possible: capable onboard processing drastically reduces the importance of over-the-air communications. Loss of the status/command link to the RF controller or of telemetry to the base station does not prevent the airframe from functioning. Only the loss of the kill switch signal will produce a deleterious state.

Safety

The main safety features are inherent in the physical design methodology and the hardware/software mentioned earlier. Self-contained and enclosed systems prevent damage to fragile components while also reducing abrasive, free-rotating, and cutting surfaces. Hardware redundancy in control, through automatic, manual, and emergency kill switch paths, ensures that the airframe remains ultimately under human control. Software redundancy can be seen in Figure 4 earlier.

Modeling and Simulation

Physical modeling was primarily done in SolidWorks 2017 for component creation and printing. The model was re-created in Onshape for display, rendering, and export to simulation. All of the simulation was done in Gazebo, an open-source robot simulation program that provides pre-existing models of the iRobot Create, programmable behavior, a physics engine, import capability, and plug-in compatibility with ROS, and thus with all of the operational algorithms mentioned previously.