Digital Engineering Projects

Lecturers

  • Christoph Steup
  • Sanaz Mostaghim

Time and location

  • Time: The first meeting will take place on April 6th, 2016 at 17:00. (Note: If you are interested in taking one of these projects, you must attend this meeting, in which we assign you to groups.)
  • Location: G29 - 035

Useful Skills

  • Programming Languages: C(++), Lua
  • Paparazzi UAV Framework: Overview
  • Control Theory: PID Controllers
  • Sensor and Signal Processing
  • Image processing

Organization

The course will be taken in groups of 3-4 students per topic. The students and the groups will be chosen by us depending on your background. The individual topics are not fully fixed; extensions and modifications are possible depending on the skills and interests of the participating students. This will be discussed in the first meeting. The result of each project is a working demonstration with commented source code and written documentation describing the general concept and a how-to for starting the demo.

Available Topics

External tracking of the copters' position and attitude

The FINken quadcopters are designed to be fully autonomous and do not rely on any external sensors. As a consequence, no external positioning such as GPS is considered. To evaluate the performance of the copters, however, it is necessary to track their movement quite accurately. To this end, a non-real-time external positioning system for indoor use is needed. We envision the use of a camera system for this purpose. The camera is mounted centrally on the ceiling above the arena. The goal of this Digital Engineering Project is to provide software that extracts the positions (x- and y-coordinates), the orientation (yaw angle) and the id of multiple copters from the video stream delivered by the camera. A prototype already exists that tracks a single copter. This software, however, cannot handle multiple copters or provide identity information. Additionally, the visual approach used needs a lot of manual tuning and provides the orientation with limited stability. The existing software is based on ROS and already contains everything needed to fetch the video stream and even store and replay it.
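
As a concrete starting point, the sketch below shows how positions, yaw angles and ids of multiple copters could be extracted from the camera stream using OpenCV's aruco marker module (an assumed choice; the existing prototype uses a different visual approach). Converting image coordinates to arena coordinates would additionally require camera calibration.

    // Minimal multi-copter tracking sketch (OpenCV 3.x-style aruco API).
    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0); // ceiling camera (device id assumed)
        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
        cv::Mat frame;
        while (cap.read(frame)) {
            std::vector<int> ids;                           // one id per detected marker
            std::vector<std::vector<cv::Point2f>> corners;  // 4 corners per marker
            cv::aruco::detectMarkers(frame, dict, corners, ids);
            for (std::size_t i = 0; i < ids.size(); ++i) {
                // Marker centre = copter position in image coordinates.
                cv::Point2f c = (corners[i][0] + corners[i][1] +
                                 corners[i][2] + corners[i][3]) * 0.25f;
                // Yaw from the direction of the marker's top edge.
                cv::Point2f d = corners[i][1] - corners[i][0];
                std::printf("copter %d: x=%.1f y=%.1f yaw=%.2f rad\n",
                            ids[i], c.x, c.y, std::atan2(d.y, d.x));
            }
        }
    }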

Sonar-based position and attitude estimation

The FINken quadcopters shall only use their internal sensors for position estimation and do not rely on external information. Currently, our main sensor array is an array of four sonar sensors mounted in a circle on top of the copter. These sonars can detect walls and obstacles, but may also be used to infer the position of the copter in a known environment. The goal of this DE project is to provide an algorithm that extracts pose information (x-y coordinates and orientation) from the data of the sonar sensors. We already have prototype software that uses logged sonar data to infer the position post-mortem. This software should be evaluated with more logs, ideally at higher data rates. Additionally, the current attitude, accelerations and turn rates of the copter may be used to further enhance the quality of the result.
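
A minimal sketch of one possible estimator, assuming a rectangular arena of known size and four sonars spaced 90 degrees apart (both assumptions): predict the four wall ranges for each candidate pose and pick the pose that best explains the measurements. Note that in a square arena the yaw is only unique up to 90-degree symmetry, so additional information such as the IMU heading would be needed to disambiguate.

    // Brute-force pose search over (x, y, yaw) against four sonar ranges.
    #include <cmath>
    #include <cstdio>
    #include <array>
    #include <algorithm>

    // Arena dimensions in metres (assumed values for illustration).
    constexpr double W = 3.0, H = 3.0;

    // Distance from (x, y) along heading a to the nearest arena wall.
    double wallRange(double x, double y, double a) {
        double dx = std::cos(a), dy = std::sin(a), t = 1e9;
        if (dx >  1e-9) t = std::min(t, (W - x) / dx);
        if (dx < -1e-9) t = std::min(t, (0.0 - x) / dx);
        if (dy >  1e-9) t = std::min(t, (H - y) / dy);
        if (dy < -1e-9) t = std::min(t, (0.0 - y) / dy);
        return t;
    }

    int main() {
        // Four measured sonar ranges, 90 degrees apart (dummy values
        // standing in for one sample of the logged data).
        std::array<double, 4> z = {1.2, 0.8, 1.8, 2.2};
        double bestX = 0, bestY = 0, bestYaw = 0, bestErr = 1e18;
        for (double x = 0.1; x < W; x += 0.05)
            for (double y = 0.1; y < H; y += 0.05)
                for (double yaw = 0; yaw < 2 * M_PI; yaw += M_PI / 90) {
                    double err = 0;
                    for (int i = 0; i < 4; ++i) {
                        double d = wallRange(x, y, yaw + i * M_PI / 2) - z[i];
                        err += d * d;
                    }
                    if (err < bestErr) {
                        bestErr = err; bestX = x; bestY = y; bestYaw = yaw;
                    }
                }
        std::printf("pose estimate: x=%.2f y=%.2f yaw=%.2f (err %.4f)\n",
                    bestX, bestY, bestYaw, bestErr);
    }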

Extended Kalman filter for position estimation

The FINken quadcopters are very complex dynamic systems with many variables defining their behaviour. We now have a non-linear model of the copter's movement based on the control inputs of the autopilot software. Together with an existing Kalman filter implementation, it might be beneficial to develop an extended Kalman filter that fuses the various sensor values provided by the copter into a single stable and reliable state estimate. We currently have accelerations, turn rates and angles as direct sensory input. Additionally, we may use the states of other copters as sensory inputs to increase the quality. Another available sensor is an optical flow sensor measuring the speed in x-y direction and the yaw rate of the copter. However, its output depends strongly on the surface and needs other sources for validation.
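
The sketch below illustrates the generic predict/update structure of such a filter using Eigen, reduced to a toy yaw/yaw-rate state with dummy inputs. In the project, the non-linear copter model and its Jacobians would replace the linear model used here, and the measurement vector would cover all of the sensors listed above.

    // Minimal Kalman predict/update skeleton (Eigen, toy 2-state model).
    #include <Eigen/Dense>
    #include <cstdio>

    int main() {
        using namespace Eigen;
        double dt = 0.02;                           // 50 Hz loop (assumed)
        Vector2d x(0.0, 0.0);                       // state: yaw [rad], yaw rate [rad/s]
        Matrix2d P = Matrix2d::Identity();          // state covariance
        Matrix2d Q = 0.01 * Matrix2d::Identity();   // process noise (to be tuned)
        double R = 0.05;                            // yaw measurement noise
        Matrix2d F; F << 1, dt, 0, 1;               // Jacobian of the (here linear) model
        RowVector2d Hm(1, 0);                       // we observe yaw only

        for (int k = 0; k < 100; ++k) {
            double gyro = 0.1;                      // dummy turn-rate input
            // Predict: propagate state and covariance through the model.
            x(0) += x(1) * dt;
            x(1) = gyro;
            P = F * P * F.transpose() + Q;
            // Update with an absolute yaw measurement (dummy value).
            double z = 0.1 * k * dt;
            double y = z - (Hm * x).value();        // innovation
            double S = (Hm * P * Hm.transpose()).value() + R;
            Vector2d K = P * Hm.transpose() / S;    // Kalman gain
            x += K * y;
            P = (Matrix2d::Identity() - K * Hm) * P;
        }
        std::printf("yaw=%.3f rad, yaw rate=%.3f rad/s\n", x(0), x(1));
    }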

Visual SLAM-based Position Estimation

Another approach towards positioning of the copters is based on optical tracking of the surrounding environment. To this end, a camera together with a powerful embedded computer board needs to be integrated into the copters. The camera observes the surroundings of the copter and tries to find landmarks to compare against the existing partial map. There are multiple ways to locate the copter in the map and to update the map's information. The goal of this project is to establish the hardware setup necessary to acquire visual information and process it on a separate computing unit. Afterwards, existing visual SLAM algorithms shall be investigated for their suitability in the indoor copter scenario. The most promising algorithm shall then be implemented on the computing platform and evaluated against the external localization information.
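
As an illustration of the front end such a system requires, the sketch below extracts and matches ORB features between consecutive camera frames with OpenCV (ORB is an assumed choice of detector); a SLAM back end would then triangulate the matched landmarks and update the map and pose estimate.

    // Feature-based front end sketch for visual SLAM (OpenCV 3.x-style API).
    #include <opencv2/opencv.hpp>
    #include <cstdio>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);                        // onboard camera (assumed)
        cv::Ptr<cv::ORB> orb = cv::ORB::create(500);    // up to 500 keypoints per frame
        cv::BFMatcher matcher(cv::NORM_HAMMING, true);  // cross-checked matching
        cv::Mat frame, gray, desc, prevDesc;
        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::KeyPoint> kp;
            orb->detectAndCompute(gray, cv::noArray(), kp, desc);
            if (!prevDesc.empty() && !desc.empty()) {
                std::vector<cv::DMatch> matches;
                matcher.match(prevDesc, desc, matches);
                std::printf("%zu landmarks re-observed\n", matches.size());
                // A SLAM back end would triangulate these matches and
                // update the map and the copter's pose estimate here.
            }
            prevDesc = desc.clone();
        }
    }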

Special topics are available on demand. If students have ideas of their own, we will gladly try to provide appropriate topics.

Slides

Kickoff
