Pixhawk-compatible telemetry on your RC transmitter display

Cool new product from AnySense that works with all 3DR flight controllers:

AnySense Pro turns your radio control into a real-time telemetry system with no cost for additional hardware such as a GPS sensor, voltage module or vario. Additionally, all data is saved to the storage card of your remote control. With the right tools, you can easily visualise the contents of the storage card, export it to Excel or even embed it into existing videos.
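
As a taste of what “the right tools” can do: radios running OpenTX, such as the Taranis, write their telemetry logs as CSV files on the SD card. Here is a minimal sketch of loading and plotting one with pandas and matplotlib; the file name and column names are examples only, since they depend on your model and sensors:

```python
# Sketch: load a Taranis/OpenTX telemetry log from the SD card and plot it.
# File and column names are examples; yours depend on model and sensors.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("LOGS/Model01-2016-04-20.csv")  # hypothetical log file
log["timestamp"] = pd.to_datetime(log["Date"] + " " + log["Time"])

fig, (ax_alt, ax_bat) = plt.subplots(2, sharex=True)
ax_alt.plot(log["timestamp"], log["Alt(m)"])   # altitude from vario/GPS
ax_alt.set_ylabel("Altitude (m)")
ax_bat.plot(log["timestamp"], log["VFAS(V)"])  # pack voltage
ax_bat.set_ylabel("Battery (V)")
plt.show()

log.to_excel("flight.xlsx")  # the same data, exported for Excel
```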

Here’s the display on a Taranis transmitter:


Drone Workshop also works in China

This is the Shanghai Mushroom Maker Space, funded by DFRobot and Intel. The maker space has won lots of competitions.

They are certainly good at drones. At the end of January, a drone workshop was held here.

Seven high school students joined the workshop. After two days, each of them would have their own copter based on Pixhawk.

The instructor is a former developer at Ehang. I forgot to take his picture.

Students building their own copters.

Group photo

The second person is me; the third is the instructor.

It was a successful workshop: every student built their own copter very well. This era is amazing. I couldn’t have imagined this when I was young.

Written By Stone

I think maybe I should start a market research company later… OTZ…


Is Deep Learning the future of UAV vision?

A few weeks ago Chris Anderson shared a post about the efforts to bring obstacle avoidance to PX4. These efforts are based on Simultaneous Localization and Mapping (building a map of the world in real time) and path planning; for the sake of conciseness I will call this approach SLAMAP.

I want to offer some thoughts on why this might not be a very promising approach, and propose an alternative that I think is more interesting.

One of the major shortfalls of SLAMAP is its inability to handle dynamic objects (objects that move).

For example, the block in the middle of this 3D map could either be a still box or a vehicle moving at full speed towards the drone. Take a moment to think how fundamental the perception of motion is to your ability to move around or drive. Think about what it would be like to have no information about the velocity of the objects around you.

Another problem is the binary nature of the information stored in 3D maps: each cell is either empty or solid, seen or unseen.

Look at this image. Assuming that the cloud of snow powder is a solid wall puts a huge constraint on the path of the drone. If the drone were following the snowboarder from behind, it would be forced to stop suddenly.
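
To make this concrete, here is a minimal sketch of the binary occupancy map SLAMAP builds (illustrative Python, not taken from any real SLAM library). Note that nothing in the data structure can distinguish the snow cloud from a wall:

```python
# Illustrative sketch of the binary world model criticized above --
# names are invented; this is not code from any real SLAM library.
# Every cell is simply occupied or free: no velocity, no material,
# no confidence, no distinction between snow powder and concrete.

occupied = set()  # occupied (x, y, z) grid cells

def mark_hit(cell):
    """A depth-sensor return landed in this cell, so it becomes 'solid'."""
    occupied.add(cell)

def is_blocked(cell):
    """The only question a planner can ask: solid or not?"""
    return cell in occupied

# A cloud of snow powder produces depth returns just like a brick wall,
# so the map -- and therefore the planner -- treats them identically.
for cell in [(5, y, 2) for y in range(3)]:
    mark_hit(cell)

print(is_blocked((5, 1, 2)))  # True, whether it is snow or concrete
```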


The idea of seen and unseen is also very limiting. If a building you have previously seen is now out of line of sight, it’s reasonable to assume it hasn’t moved; you cannot say the same about a person or a vehicle. Similarly, you can’t assume that every place you looked and saw nothing is still empty. This comes back to the lack of understanding of dynamic objects.

What I am arguing here is that to successfully interact with the environment, a drone needs a semantic understanding of the world (materials, physics, etc.) and the ability to handle uncertainty. SLAMAP can do neither.

Another difficulty with SLAMAP is that getting the desired functionality out of the framework is not trivial. Path planning solves the problem of finding a feasible route from A to B, but following a target is not the same problem. Reframing the problem to track a target, while also filming beautiful smooth footage and avoiding obstacles, is very hard.

And finally there is a very empirical argument against SLAMAP: after decades of research it seems to have failed to find applications outside academia. Most industrial applications in which SLAMAP is used are simple and highly controlled, nothing like drone flying.

In short, the shortfalls of SLAMAP are:

  • Does not support dynamic objects
  • Does not handle uncertainty
  • Has no semantic understanding of the world
  • Makes it difficult to obtain the desired behaviours
  • Empirically, it has been around for a long while without much success

So, is there another option? Yes there is, and it’s called deep learning.

The idea behind deep learning is to drop all the hand-crafted parts of a system and replace them with a single neural net. This neural net can then be trained with examples of how to do a task, so that it learns the desired behaviour.

So, instead of having a stereo vision block, a sparse SLAM block, a dense octomap block, a path planning block and so on, there would be a single neural net. To train it to control a drone you could use two basic methods: give it footage of a person flying (imitation learning), simply tell it whether it is doing a good job (reinforcement learning), or a mixture of the two.
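
To make that concrete, here is a toy sketch of the single-net idea in PyTorch, trained by the first method, imitation of a human pilot. The architecture and names are invented for illustration; this is not the design of any real drone autopilot:

```python
# Toy sketch: camera pixels in, stick commands out, trained by
# behavioural cloning on recordings of a human pilot. Everything
# here is invented for illustration.
import torch
import torch.nn as nn

class PilotNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(48, 4),            # roll, pitch, yaw, throttle
        )

    def forward(self, img):             # img: (batch, 3, H, W)
        return self.net(img)

model = PilotNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def imitation_step(images, pilot_sticks):
    """One behavioural-cloning step: match what the human pilot did."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), pilot_sticks)
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. one step on a fake batch of eight 3x128x128 camera frames
loss = imitation_step(torch.randn(8, 3, 128, 128), torch.randn(8, 4))
```

The second method, simply telling the net whether it is doing a good job, is reinforcement learning; there the mean-squared-error loss above is replaced by a reward signal.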


Deep learning has proven incredibly successful in a wide variety of tasks. One of the earliest and most important is object recognition. This year deep nets outperformed humans on the ImageNet challenge (classifying an image into one of 1,000 classes), achieving an error rate of 3.5%. For perspective, humans are estimated to score around 5.5%, while the best non-deep-learning approaches get around 26%.

State-of-the-art speech recognition systems are also based on deep learning; see this post by Google.

Deep learning has also been used to outperform existing systems in video classification, handwriting recognition, natural language processing, human pose estimation, depth estimation from a single image and many other tasks.

And it has also been applied to broader tasks beyond classification. One of the most famous examples is the deep net that learned to play Atari games, sometimes at a superhuman level.

And of course AlphaGo, a Go player that recently beat the Go master Lee Sedol 4-1, a feat that many thought was decades away.

Very recently NVIDIA published a paper on an end-to-end steering system for a car. It showed a deep net so simple it amazed me, with only 70 hours of driving experience and running on a mobile GPU at 30 fps, performing very well on all kinds of terrain and in all kinds of weather.

But aside from the empirical success of deep learning, the reason I believe it is more promising than SLAMAP is that it has the capacity to understand all the things SLAMAP cannot. None of the inherent limitations of SLAMAP mentioned above apply to a deep net.

A deep net can learn to understand a dynamic world: to tell the difference between a truck moving at 100 mph and one at rest. It can also learn meaningful semantics, such as that snow powder is nothing to worry about but water is dangerous. And it can then learn how to act on this understanding.

It might seem too good to be true. But would it really be that surprising if the methods that succeed were based on the only machine that can currently do this task: the brain?

I am pretty sure that with the mobile hardware, deep learning frameworks and sea of freely accessible research now available, a decent team could in less than a year develop a better system than SLAMAP would ever lead to.

Do you agree?


APM:Plane double release 3.5.3 and 3.6.0beta1

As mentioned in the 3.5.2 release discussion, we have decided to do a 3.5.3 release to fix a couple of important bugs found by users in 3.5.2.

The main motivation for the release is a problem when flying without a compass enabled. If you fly 3.5.2 with MAG_ENABLE=0 or with no compass attached, there is a risk that the EKF2 attitude estimator may become unstable before takeoff. This can cause the aircraft to crash.
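
If you want to verify your configuration before flying, you can read the parameter back over MAVLink. Here is a minimal sketch using pymavlink; the connection string is only an example, so adjust it for your telemetry link:

```python
# Minimal pymavlink sketch: read MAG_ENABLE and warn if the aircraft
# is in the configuration affected by the EKF2 bug described above.
# The connection string below is an example; use your own link.
from pymavlink import mavutil

master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
master.wait_heartbeat()

master.mav.param_request_read_send(
    master.target_system, master.target_component,
    b'MAG_ENABLE', -1)

msg = master.recv_match(type='PARAM_VALUE', blocking=True, timeout=5)
if msg is not None and int(msg.param_value) == 0:
    print('MAG_ENABLE=0: do not fly 3.5.2 with this setup; upgrade to 3.5.3')
```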

The other changes in the 3.5.3 release are more minor:

  • fixed loiter radius for counter-clockwise loiter
  • fixed the loiter radius when doing an RTL at the end of a mission
  • provide reasons to the GCS when a uBlox GPS fails to properly configure
  • support a wider variety of NMEA GPS receivers
  • use EKF1 by default if no compass is enabled

For those of you feeling a bit more adventurous you might like to try the 3.6.0beta1 release. There is still a fair bit more to do before 3.6.0 is out, but it already has a lot of new features and I have been flying it for a while now.

The biggest single change in 3.6.0beta1 is the update of PX4Firmware. This brings with it a lot of changes, including much better support for the Pixracer flight board.

There has also been a lot more work on QuadPlane support, with a new QRTL flight mode, plus support for using the forward motor in VTOL flight and active weathervaning. If you are flying a QuadPlane you’ll find the names of the copter rate control parameters have changed (just as they have in copter). You should be able to find the new parameters OK, but if not then please ask. I’ll provide more detailed documentation with the final 3.6.0 release.

Detailed changes in 3.6.0beta1 include:

  • merge upstream PX4Firmware changes
  • new AC_AttitudeControl library from copter for quadplane
  • modified default gains for quadplanes
  • new velocity controller for initial quadplane landing
  • smooth out final descent for VTOL landing
  • changed default loop rate for quadplanes to 300Hz
  • support up to 16 output channels (two via SBUS output only)
  • fixed bug with landing flare for high values of LAND_FLARE_SEC
  • improved crash detection logic
  • added in-flight transmitter tuning
  • fix handling of SET_HOME_POSITION
  • added Q_VFWD_GAIN for forward motor in VTOL modes
  • added Q_WVANE_GAIN for active weathervaning
  • log the number of lost log messages
  • move position update to the 50Hz loop rather than the 10Hz loop
  • suppress throttle when parachute release is initiated, not after release
  • support Y6 frame class in quadplane
  • log L1 xtrack error integrator and remove extra yaw logging
  • limit roll before calculating load factor
  • simplify landing flare logic
  • smooth-out the end of takeoff pitch by reducing takeoff pitch min via TKOFF_PLIM_SEC
  • added support for DO_VTOL_TRANSITION as a mission item
  • fixed is_flying() for VTOL flight
  • added Q_ENABLE=2 for starting AUTO in VTOL
  • reload airspeed after VTOL landing
  • lower default VTOL ANGLE_MAX to 30 degrees
  • change mode to RTL on end of mission rather than staying in AUTO
  • implemented QRTL for quadplane RTL
  • added Q_RTL_MODE parameter for QRTL after RTL approach
  • reduced the rate of EKF and attitude logging to 25Hz
  • added CHUTE_DELAY_MS parameter
  • allow remapping of any input channel to any output channel
  • numerous waf build improvements
  • support fast timer capture for camera trigger feedback
  • numerous improvements for Pixracer support
  • added more general tiltrotor support to SITL

I flew both 3.5.3 and 3.6.0beta1 today and both are flying really nicely. I hope you all enjoy flying them as much as the dev team enjoys working on them!

Happy flying!


Solo NAB announcements: new cameras, power tether, software, more

If you got a chance to see the big 3DR area at NAB in Las Vegas last week, you’ll have seen the new cameras and accessories for Solo first-hand (along with the new software features such as sense-and-avoid and smart rewind announced earlier). But here’s a quick recap:

The new hardware:

The Sony UMC-R10C (above) for 3DR’s Site Scan enterprise solution. Available for preorder this summer.

Capture ultra-high resolution images with the gimballed Sony UMC-R10C. The large APS-C image sensor allows for exceptional low-light and low-noise performance. Automatically trigger 20MP stills that are ideal for high-resolution inspections and photorealistic georeferenced 2D and 3D models.

Kodak PixPro SP360 4K cameras: an immersive 360-degree VR camera setup from Kodak, Made for Solo with custom stitching software. It ships this summer, either as a bundle with the vibration-isolated hard mount or, if you have or buy the cameras separately, as the hard mount alone.

As if Solo wasn’t cool enough.

  • Record 4K immersive 360 VR video from the air
  • Vibration-isolated attachments for two SP360 4K action cameras
  • Wearable remote control for synced recording
  • State-of-the-art post-production stitching software customized for Solo
  • Ships this summer

The Fiilex AL250 Light for Solo

Sky light:

  • Dynamic set lighting from virtually any angle
  • Detachable, for hand-held use
  • Powerful enough for search & rescue and after-hours work
  • Mount with a gimballed camera
  • 5600K CCT, 2000 lumens
  • Shipping in the next few weeks!

The Hoverfly Tether for Solo

Endless flight:

  • 150 ft-long powered tether (plugs into standard outlets)
  • Set up a tripod in the sky: sports, events and job sites
  • Pair with Fiilex light for scene lighting
  • With Solo’s HDMI out, the perfect tool for live broadcast

A gimbal for the Flir Vue and the Flir Vue Pro IR cameras from RHP International.

Thermal imaging is here!

  • Active 3-axis stabilized gimbal system
  • Supports Flir Vue and Vue Pro thermal cameras
  • Pan & tilt control from 3DR controller
  • Video and image capture on integrated mini SD card
  • Consolidated wire system

And lastly, some safety accessories from PolarPro: Prop guards and bright front/back LED lights, all Made for Solo.

PolarPro’s prop guards for Solo add an extra element of protection to your drone. They slip onto each Solo arm, with a retaining clip that locks them in place. Prop guards won’t prevent all crashes; however, they’re a great way to reduce the risk of damaging your drone.

The 3DR Solo LED lights from PolarPro securely mount to Solo for increased visibility while flying. The headlights are white and the taillights are red to allow pilots to determine their drone’s orientation in low light or at a distance.


Drone Post Flight – from your web browser!

This is a web-based application that allows you to upload your flight .log file and review the flight in 3D directly in your web browser! Feel free to register and start uploading and exploring your flights!

www.dronalyze.com

YouTube Video

I wanted to put this out to the community and gather some feedback, so please let us know what you like, what you don’t like, and any additional features or compatibility issues you come across!

Thanks!


Some new I2C devices from AUAV Co.

Hi guys,

The AUAV Co. team is proud to announce two useful new gadgets: an I2C AIRSPEED sensor and a 3.3/5V UNIVERSAL I2C HUB.

The newly designed AIRSPEED sensor now incorporates a JST-GH connector plus additional solder pads on the bottom side, and a 5V-to-3.3V LDO, which makes it compatible with 3.3V I2C. The size is 15x21mm.
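
For the curious, here is what reading such a sensor can look like from a Linux companion computer. This is a hedged sketch: it assumes an MS4525DO-type pressure sensor (common on I2C pitot sensors) at its usual address 0x28, which this announcement does not confirm, so check your board’s datasheet before trusting the numbers:

```python
# Hedged sketch: read a differential-pressure airspeed sensor over I2C,
# assuming an MS4525DO-type part at address 0x28 on /dev/i2c-1.
# Your sensor, address and scaling may differ -- check the datasheet.
from smbus2 import SMBus, i2c_msg

ADDR = 0x28  # typical MS4525DO address (assumption)

with SMBus(1) as bus:
    msg = i2c_msg.read(ADDR, 4)  # plain 4-byte read, no register byte
    bus.i2c_rdwr(msg)
    b = list(msg)

status = b[0] >> 6                        # 0 = fresh data
raw_dp = ((b[0] & 0x3F) << 8) | b[1]      # 14-bit pressure counts
raw_t = ((b[2] << 8) | b[3]) >> 5         # 11-bit temperature counts

# Scale per the MS4525DO datasheet (-1..+1 psi variant assumed)
psi = (raw_dp - 0.1 * 16383) * 2.0 / (0.8 * 16383) - 1.0
temp_c = raw_t * 200.0 / 2047 - 50.0
print(status, psi, temp_c)
```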

Our new I2C HUB incorporates a PCA9516A, a 3.3V LDO, and SPST slide switches for all channels, making it compatible with 5V and 3.3V I2C devices at the same time. Using the slide switches on the top, one can choose a 3.3V or 5V interface for each peripheral I2C device. The I2C HUB size is 14x42mm.

Best regards

Nick


Hybrid FPV racer

In Airnamics we wanted to do something just for fun and make an FPV racer that is a cross between a multirotor and a fixed wing. It opens up completely new performance capabilities, because you can use the wing as the main lift device, an air brake, or anything in between.

On the other hand, we would like to combine classical drone racing with gaming aspects: you would have a limited amount of energy available per lap, but would unlock additional energy reserves every lap by reaching specific goals (e.g. highest top speed, highest continuous g-load, quickest lap time, proximity flying, etc.).
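
To make the gaming idea concrete, the lap-energy rule could look something like the sketch below (illustrative only; all numbers and goal names are invented):

```python
# Illustrative sketch of the lap-energy mechanic described above.
# All numbers and goal names are invented for illustration.
LAP_BUDGET_WH = 20.0  # energy allowed per lap (watt-hours)
BONUS_WH = {"top_speed": 3.0, "max_g_load": 2.0, "proximity": 2.5}

energy_available = LAP_BUDGET_WH

def finish_lap(energy_used, goals_reached):
    """Roll leftover energy plus any unlocked bonuses into the next lap."""
    global energy_available
    leftover = max(0.0, energy_available - energy_used)
    bonus = sum(BONUS_WH[g] for g in goals_reached)
    energy_available = leftover + LAP_BUDGET_WH + bonus

finish_lap(energy_used=18.5, goals_reached={"top_speed"})
print(energy_available)  # 1.5 leftover + 20.0 budget + 3.0 bonus = 24.5
```
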
The racer is built on top of our UAS development platform, but we would consider developing an open-source production version if market feedback is favorable.
We would sincerely appreciate your opinion about the system. How interesting do you find it? Any suggestions on how to make it even better? Thanks for sharing!
