3D-printed bat drone

From the video description:

A microprocessor-based onboard computer, a 6-DOF IMU sensor package, five DC motors with encoder feedback for flapping and wing articulation (asymmetric wing folding and leg/tail control), power/comm electronics, a carbon-fiber frame, 3D-printed parts, and silicone-based membrane wings — all at 92 grams.

No motion capture system was used for indoor closed-loop control flight.

A. Ramezani, X. Shi, S.-J. Chung, and S. Hutchinson, “Bat Bot (B2), A Biologically Inspired Flying Machine,” Proc. IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, May 16-21, 2016.

Dr. Alireza Ramezani, Xichen Shi
Prof. Soon-Jo Chung (http://publish.illinois.edu/aerospace…)
Prof. Seth Hutchinson (http://www-cvr.ai.uiuc.edu/~seth/)

Powered by WPeMatico

Amazon patents drone propellers that can say "watch out"

Spotted by The Register:

Amazon has filed a fascinating patent for automated aerial vehicle (AAV) technology.

The invention has two aspects, the first of which is the use of multiple propellers rotating in different directions to reduce noise. Amazon imagines one propeller rotating in one direction to provide lift. “While the second propeller may cause lift of the AAV,” the patent suggests, it “may also be operable to produce sound that cancels noise generated by the first propeller.”

“In some cases, an audio sensor (e.g., a microphone) located near the first propeller may detect the noise generated by the first propeller. A controller may be in direct or indirect communication with the audio sensor. The controller may be configured to receive a signal representing the noise detected by the audio sensor. The controller may also be in direct or indirect communication with the second propeller, and may cause the second propeller to produce anti-noise sound that cancels the noise generated by the first propeller.”
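
The passage above describes classic active noise cancellation. As a rough illustration of the core idea only (not Amazon's implementation), the cancelling signal is a phase-inverted copy of what the microphone hears, so the two waveforms sum to silence:

```python
import numpy as np

def make_anti_noise(mic_samples):
    """Illustrative active-noise-cancellation core: the cancelling
    signal is a phase-inverted copy of the measured noise."""
    return -np.asarray(mic_samples, dtype=float)

# A pure tone standing in for propeller noise picked up by the microphone.
t = np.linspace(0.0, 0.01, 441, endpoint=False)   # 10 ms at 44.1 kHz
noise = 0.5 * np.sin(2 * np.pi * 200 * t)         # 200 Hz blade tone

anti = make_anti_noise(noise)
residual = noise + anti                           # what a listener would hear
print(np.max(np.abs(residual)))                   # ~0.0: tone cancelled
```

In practice the controller would also have to compensate for the acoustic delay between the two propellers, which this toy example ignores.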

The second aspect of the patent suggests using propellers to communicate with people on the ground, in two ways.

The first is using propellers to make sound. Here’s the explanation:

Suppose, for instance, that the AAV were delivering an inventory item to a location. Upon approaching the location, the AAV determines (e.g., based on a video signal fed as an input parameter to the controller via a camera) that a person is situated at or near an intended or a suitable landing area corresponding to the delivery location. Such an input parameter may satisfy a flight condition corresponding to an audible communication, such as a “Watch out!” warning message. Accordingly, the controller may determine and cause to implement modulations of the rotational speed of a propeller, thereby causing the propeller to produce a series of sounds that are audibly perceptible as “Watch out!”
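
The physics behind the “Watch out!” trick is that a propeller's fundamental tone is its blade-pass frequency: one pressure pulse per blade passage. A back-of-the-envelope sketch (the blade counts and rpm figures here are illustrative examples, not from the patent):

```python
def blade_pass_frequency(rpm, n_blades):
    """Fundamental tone (Hz) produced by a propeller: one pressure
    pulse each time a blade passes a fixed point."""
    return n_blades * rpm / 60.0

def rpm_for_tone(freq_hz, n_blades):
    """Rotational speed needed so the blade-pass tone lands on freq_hz."""
    return 60.0 * freq_hz / n_blades

# A two-blade propeller at 6000 rpm hums at 200 Hz...
print(blade_pass_frequency(6000, 2))   # 200.0
# ...and would need 13200 rpm for its tone to reach 440 Hz (concert A).
print(rpm_for_tone(440, 2))            # 13200.0
```

Modulating rpm around such set points is what would let the propeller trace out speech-like sounds.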

A second method would see the drones figure out they need to alert humans to their presence. In such cases “light sources [e.g., light-emitting diodes (LEDs)] coupled to one or multiple propellers may be caused to intermittently emit light in a synchronized manner while the propellers are rotating to generate patterns that are visibly perceptible as ‘HELLO’.”

Noise is a major issue for the aviation industry: airport operators and aircraft manufacturers alike want planes to be as quiet as possible, so as not to make nearby residents grumpy. Amazon clearly gets this and is researching ways to make its drones as quiet as can be.


Dual-Frequency L1+L2 GNSS RTK Receiver, being integrated on drones, vehicles and land robots.

The RTKITE receiver is being integrated right now into all these futuristic solutions, including vineyard robot helpers, self-driving buses, industrial geology and facility-inspection drones, high-speed planting robots, auto-steering agricultural systems, and many more!

The professional-grade RTKITE is a complete GPS SOM (System On Module) receiver based on the globally tested SmaRTK GNSS technology, using the same components and with the same SmartOS running on its powerful processor. It has been reduced to a minimal module that is ready for immediate integration into any high- or medium-level application requiring true real-time millimetric positioning, without limitations and with full compatibility and high speed.

– Supports GPS, GLONASS, COMPASS, SBAS satellites on L1, L2, L5, etc.

– Compatible with CORS, RTK, DGPS, PPS, PPK, LBAS and every professional Navigation, Positioning or Surveying mode.

– Able to geotag pictures or events at 200 Hz.

– RTK navigation at 20 Hz included.

– Internally logs and formats Events with GPS, IMU and Compass.

– Integrated quad-band GSM/GPRS modem to broadcast RTK corrections (RTCM, CRM, etc.) or receive them from standard CORS stations or another RTK receiver.

– Compatible with any RF (UHF, VHF, etc.) radio modem, and supplied with a professional 2 W Tx/Rx module.

– Works as Base or Rover with many modes selected by software: (https://play.google.com/store/apps/details?id=com.northsurveying.smartosconfig)

– Compact and light, less than 100 grams, including GNSS antenna.

– Includes an SD card memory slot, a Bluetooth module, and ports for Serial/USB interface, IMU/Compass, 4 indicator LEDs, Trigger, Mode selection, Rx/Tx, and more.

– And it’s Simple, Complete and Low Cost!
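
As a rough illustration of how an integrator might consume a receiver like this (a sketch assuming standard NMEA output, which is typical of GNSS modules; it is not taken from RTKITE documentation), the GGA sentence's fix-quality field tells you whether you currently have an RTK-fixed solution:

```python
def gga_fix_quality(sentence):
    """Extract the fix-quality field from an NMEA GGA sentence.
    0 = no fix, 1 = autonomous GPS, 2 = DGPS, 4 = RTK fixed, 5 = RTK float."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    return int(fields[6])

# Example sentence (made-up coordinates); field 6 is the fix quality.
sample = "$GPGGA,123519,4807.038,N,01131.000,E,4,12,0.6,545.4,M,46.9,M,1.2,0042*47"
print(gga_fix_quality(sample))   # 4 -> RTK fixed solution
```

A geotagging or navigation application would typically gate its use of the position on this field being 4 (fixed) or 5 (float).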

You can see its full details here: GNSS RTKITE Receiver

And if you have any questions, we are always available for one-to-one communication; write to: emea@northgps.com

North is a fully European company, supported by the European Space Agency and located in Barcelona, Spain.



Robust GPS-Agnostic Navigation – Visual Planning, Estimation and Control for MAVs

With all the talk of integrating drones into civilian airspace, there is a need for better safety and for GPS-agnostic navigation methods. Visual navigation and obstacle avoidance are paramount to integrating drones (micro aerial vehicles, MAVs) into our cities and elsewhere, because external navigation aids cannot be relied on in every situation, and neither can pilot experience.

Visual navigation is the solution to these challenges, and we present an aerial robot designed from the ground up to meet these requirements. We demonstrate several facets of the system, including visual-inertial SLAM for state estimation, dense real-time volumetric mapping, obstacle avoidance and continuous path planning.

In search of better extensibility, and a better fit for the research goals I had set myself, I started working as part of the open-source PX4 Autopilot development community in 2013. Aware of the limitations in the field, I started Artemis, my research project on visual navigation for aerial vehicles, in 2014. Today, I’m happy to present our intermediary (and satisfying!) results.

At its very core, Project Artemis is a research project which aims to provide improved navigation solutions for UAVs.

All of the Artemis MAVs follow the same basic, distributed system architecture. There is a high-level onboard computer and a low-level embedded flight controller, typically a PX4 autopilot board or a similar derivative. The middleware of choice on the high-level companion computer is ROS (Robot Operating System), with the PX4 middleware on the deeply embedded controller.


Visual Navigation

Multiple cameras provide exteroceptive information about the environment, used for mapping and localisation. Forward-facing stereo cameras are used to compute depth images in realtime.


All cameras are synchronised in time with respect to each other and to the IMU (Inertial Measurement Unit) of the flight controller. Depth images are inserted into a volumetric mapping framework based on octree representations, and a 3D map of the environment is built incrementally.
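
As a simplified illustration of incremental volumetric mapping (the system described here uses octree representations; this sketch substitutes a flat voxel hash for brevity, and all names are hypothetical):

```python
from collections import defaultdict

class VoxelMap:
    """Simplified stand-in for an octree occupancy map: points from
    depth images are discretized into fixed-size voxels, accumulating
    a hit count per voxel as the map is built incrementally."""
    def __init__(self, resolution=0.1):
        self.resolution = resolution      # voxel edge length in metres
        self.hits = defaultdict(int)

    def _key(self, point):
        x, y, z = point
        r = self.resolution
        return (int(x // r), int(y // r), int(z // r))

    def insert_point(self, point):
        self.hits[self._key(point)] += 1

    def is_occupied(self, point, min_hits=1):
        return self.hits.get(self._key(point), 0) >= min_hits

m = VoxelMap(resolution=0.1)
# Two depth returns falling in the same 10 cm voxel:
for p in [(1.02, 0.51, 2.03), (1.04, 0.53, 2.06)]:
    m.insert_point(p)
print(m.is_occupied((1.05, 0.55, 2.04)))   # True: voxel seen twice
```

An octree stores the same occupancy information hierarchically, which is what makes large, mostly empty outdoor maps tractable in memory.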

We also use a SLAM (Simultaneous Localisation and Mapping) technique on our robot. The system continuously constructs a sparse map of the environment which is optimised in the background. Visual SLAM is globally consistent, and centimetre-level accurate unlike GPS, and works both indoors and outdoors. Tight fusion with time-synchronised inertial measurements greatly increases robustness and accuracy.


State Estimation


The system is designed to navigate using all available sensors in the environment, which includes both GPS and vision outdoors and pure vision indoors. Since sensor availability is not guaranteed, a modular sensor-fusion approach using a hybrid Kalman filter with fault detection is used to maintain a robust state estimate. The motivation for using information from all the sensors is that even if a particular subset or module were to fail, overall system performance would not be compromised.
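
A minimal sketch of the fault-detection idea, using a 1-D Kalman filter with innovation gating (illustrative only; the actual estimator is a hybrid multi-sensor filter, and the numbers below are made up):

```python
class GatedKalman1D:
    """Minimal 1-D Kalman filter with innovation gating: a measurement
    whose innovation is implausibly large (e.g., a GPS glitch) is
    rejected instead of corrupting the state estimate."""
    def __init__(self, x0=0.0, p0=1.0, process_var=0.01):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q = process_var

    def predict(self):
        self.p += self.q          # uncertainty grows between measurements

    def update(self, z, meas_var, gate=3.0):
        innovation = z - self.x
        s = self.p + meas_var     # innovation variance
        if innovation * innovation > gate * gate * s:
            return False          # fault detected: measurement rejected
        k = self.p / s            # Kalman gain
        self.x += k * innovation
        self.p *= (1.0 - k)
        return True

kf = GatedKalman1D(x0=0.0, p0=1.0)
kf.predict()
print(kf.update(0.5, meas_var=0.25))    # True: plausible vision fix, fused
kf.predict()
print(kf.update(50.0, meas_var=0.25))   # False: wild GPS outlier, gated out
```

With redundant sensors, a rejected measurement simply means the estimate coasts on the remaining modules, which is exactly the graceful-degradation behaviour described above.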

Obstacle Avoidance

The global volumetric map is used to continuously compute a collision-free trajectory for the vehicle. In operator-assist mode, the motion planner only intervenes if the operator’s high-level position commands could lead to a possible collision. In autonomous modes, the planner computes optimal trajectories based on a next-best-view analysis in order to optimise 3D reconstruction. The planner sends its commands to the minimum-snap trajectory controller running on the low-level flight controller, which computes motor outputs.

It is important to point out that this can be achieved *today* with open-source systems, albeit with some perseverance and experience. Better documentation on how to achieve a relatively close reproduction of our results is underway; it will be made available soon via the UASys website (http://www.uasys.io/research) and the PX4 Autopilot developer wiki (http://dev.px4.io).

The open-sourced portions of our software stack are available here: www.github.com/ProjectArtemis

I will also be presenting a talk on Project Artemis and our software stack at the Embedded Linux Conference in San Diego, CA. Please attend if you’d like an in-depth view into the system’s workings! The presentation will give an accelerated introduction to the current state of the aerial vehicle market and the limitations it faces due to limited technological breakthroughs in consumer systems. Newcomers and existing developers and system integrators will get a chance to understand these limitations and what embedded Linux systems can do for the field, including but not limited to visual (GPS-denied) navigation, mapping, obstacle avoidance and high-definition video streaming. The talk also aims to encourage the current open-source development communities, and to show how they can contribute to improving the state of the art, be it with cross-middleware interoperability, modular and reusable software design, or inexpensive and extensible hardware design.

Slides are available here : http://events.linuxfoundation.org/sites/events/files/slides/artemis_elc16_final.pdf

Learn more about my session at http://sched.co/6DAs and register to attend at http://bit.ly/1ZuUtiu

Stay updated! –
Website : http://www.uasys.io
GitHub : https://www.github.com/mhkabir
Instagram : https://www.instagram.com/uasys/
Twitter : https://twitter.com/UASysOfficial
Facebook : https://www.facebook.com/UASys




NASA using an IRIS+ for methane detection

Love this report from NASA, via Space Daily:

NASA’s Gas-Sniffing Drone Can Spot Methane Leaks More Accurately Than Previous Instruments

As part of a project to improve safety in the energy pipeline industry, researchers have successfully flight-tested a miniature methane gas sensor developed by NASA’s Jet Propulsion Laboratory, Pasadena, California, on a Vertical Take-off and Landing small unmanned aerial system (sUAS).

The sensor, similar to one developed by JPL for use on Mars, enables detection of methane with much higher sensitivity than previously available for the industry in hand-carried or sUAS-deployable instruments.

The tests were conducted in central California at the Merced Vernal Pools and Grassland Reserve, and were funded by Pipeline Research Council International (PRCI). The jointly conducted test of NASA’s Open Path Laser Spectrometer (OPLS) sensor is the latest effort in a methane testing and demonstration program conducted on various platforms since 2014.

The ability of the OPLS sensor to detect methane in parts per billion by volume could help the pipeline industry more accurately pinpoint small methane leaks.

Researchers from JPL and the Mechatronics, Embedded Systems and Automation (MESA) Lab at the University of California, Merced, conducted the flight tests in late February. They flew a small unmanned aerial system equipped with the OPLS sensor at various distances from methane-emitting gas sources. Tests were done in a controlled setting to test the accuracy and robustness of the system.

The advanced capabilities provided by sUASs, particularly enhanced vertical access, could extend the use of methane-inspection systems for detecting and locating methane gas sources.

Additional flight testing this year will feature a fixed-wing UAS, which can fly longer and farther. Those capabilities are necessary for monitoring natural-gas transmission pipeline systems, which are often hundreds of miles long and can be located in rural or remote areas.

This latest round of tests furthers the team’s goal to develop sUASs to improve traditional inspection methods for natural-gas pipeline networks, which may enhance safety and improve location accuracy.

“These tests mark the latest chapter in the development of what we believe will eventually be a universal methane monitoring system for detecting fugitive natural-gas emissions and contributing to studies of climate change,” said Lance Christensen, OPLS principal investigator at JPL.

NASA uses the vantage point of space to increase our understanding of our home planet, improve lives, and safeguard our future. NASA develops new ways to observe and study Earth’s interconnected natural systems with long-term data records.

The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.


Solo Smart Drone Lands Smartly: PART 2

Our precision landing Solo is flying great! 🙂 This test rig has an IR-LOCK sensor (via I2C) and an SF/10A rangefinder (via Serial), and is running a customized precision-landing firmware based on APM:Copter (solo-rebase). The video above shows two vision-guided landings; the full 10-landing test sequence is shown below. The typical landing accuracy ranges from 0-10 cm, even in moderate/gusty wind.

One of the primary challenges of designing a precision-landing UAV system is controls tuning. Every UAV has different flight characteristics, and finding the right PID (+EKF) parameters can be a painstaking process, so it is very helpful to have a popular, consistent hardware platform (e.g., Solo) to develop on. Solo flies nicely out of the box without significant parameter modifications. The precision landing performance is noticeably better than on our IRIS+ platform and our previous Solo test platform (without a laser rangefinder). In the 10-landing test sequence, the typical landing accuracy ranged from ~0-10 cm. There was only one ‘failure’ (landing #5 of 10), in which the copter landed far outside the specified bounds; this will be analyzed and corrected via the controls code.
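
For the curious, here is a sketch of the geometry behind vision-guided landing: the sensor reports the beacon's angular offset from the camera axis, and the rangefinder altitude converts that into a horizontal position error the controller can drive to zero. (Illustrative only: the function name is hypothetical and the real firmware's handling, including vehicle-attitude compensation, is more involved.)

```python
import math

def target_offset_m(angle_x_rad, angle_y_rad, altitude_m):
    """Convert a landing beacon's angular offset (as reported by a
    vision sensor such as IR-LOCK) plus rangefinder altitude into a
    horizontal position error in metres."""
    return (math.tan(angle_x_rad) * altitude_m,
            math.tan(angle_y_rad) * altitude_m)

# Beacon seen 5 degrees off-centre in x while hovering at 2 m:
ex, ey = target_offset_m(math.radians(5), 0.0, 2.0)
print(round(ex, 3), ey)   # ~0.175 m off target in x, centred in y
```

This also shows why a reliable rangefinder matters: an altitude error scales directly into a position error at the target.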

This test platform is running a customized precision-landing firmware, which uses the vision-based localization data to actively manage the landing accuracy. You can read more about the code, and see previous testing, here. USER BEWARE: this code is experimental, and it assumes that you have a reliable rangefinder connected.

Customized Firmware: ArduCopter-v2.px4

Github: http://bit.ly/1q3fIvS

Connecting IR-LOCK to Solo: Hackster.io article


Full Scale Quad – I Frame Design

Here is a concept shot of the latest in a continual evolution of designs. This is way off in the future, but it is the end goal of what I’m working towards. Now that I’m starting to acquire actual parts and pieces, I’m a bit concerned about the weight. But first things first, and one step at a time. I still have yet to build even one good rotor blade, but I now have a plan for the first rotor set and the variable-pitch rotor head. The drive system will need lots of testing, but that is phase 2. Phase 1 is just doing some performance testing on a single rotor head: determining lift, drag, power, roll rate, etc.


Great example of 5-axis multipoint cablecam with a Solo

This is a great demo of the new multipoint cablecam feature on Solo, with five-axis splines and smoothed timing on each path, all programmed with a few button presses. There’s simple autonomy and there’s Hollywood-turned-into-an-app autonomy — and this is just a glimpse of what you can do with DroneKit on Solo.


Beyond the Delivery Drone – Revolutionary robotics keeping workers out of harm’s way

Apellix utilizes large lift-capacity drones attached to a base station with an umbilical cord and tether system

In the United States alone, 95 climbers working on industrial towers died between 2004 and 2012. Worker safety is a concern across industries but perhaps no more so than in large-scale workplaces like oil & gas, shipping, and infrastructure. Add in the financial expense of maintaining massive structures like ships, water towers, bridges, and skyscrapers — it’s an industry that has needed technological innovation for decades.

Enter the drone, those charming little creatures that take our picture at concerts, capture amazing fireworks displays at night, and deliver our packages. Couldn’t we utilize them to facilitate some real change? I think we can, so I founded a company to prove it.

Let’s look at an example of what it is currently like to paint a large cruise or cargo ship.

Fun facts:

  • A mega cruise ship has an underwater hull area (the part you don’t see, keel to deep-load line) of over 120,000 square feet. That’s about four times larger than a big-box store like Wal-Mart. In other words — HUGE
  • It takes over $100,000 worth of paint for a standard size cruise or cargo ship (60,000 to 80,000 square feet).
  • It takes a work crew of about 30 people four days to set up scaffolding, paint, and remove the scaffolding.
  • The time that ship spends in dry dock while it’s painted translates into lost revenue, an amount that can exceed $1,000,000 a day.

Now imagine what it would mean if a single machine operator, with no work crew and no scaffolding, from the safety of the ground, could launch an industrial robotics system that completes the job in a single day. That is what we make possible at Apellix.

Our Worker Bee robotics system eliminates the need to place human workers at dangerous heights. Using large lift-capacity drones attached to a base station with an umbilical cord and tether, and custom software that ties it all together, the modular robotics system can paint buildings and towers, wash windows, safely apply chemicals, and much, much more.

By replacing human judgement with science, you can increase safety and productivity, reduce costs, and provide measured standardized quality with environmentally sustainable benefits. Unlike most industrial technology, which solves operational problems or increases output efficiency or quality, the Apellix Drone + Robotics platform keeps workers safe so companies can focus on their operations.

We believe this is pretty unique. Apellix uses proprietary, hardened, industrially customized software to enable its drone-based applicators to do things drones have never done before. The market potential is huge — $127B worth of paint is sold each year. And the possible applications are myriad, from painting to cleaning industrial equipment to rapidly deploying shelter in disaster areas to 3D-printing at tremendous scale.

Today marks the official launch of the Apellix Worker Bee concept. We’re currently in pre-production with a fourth-generation industrial robotics spray paint system and hope to enter into commercial testing with a partner in the coming quarter. If you share a vision of harnessing technology to create safer industrial environments, or are interested in staying informed about what we are doing, reach out to us at our website or follow me on Twitter or LinkedIn.

Best Wishes and Happy, Safe, Industrial Workplaces

Demo video of the pre-production Worker Bee spray-painting drone system in action

#DronesDoGood #DronesSaveLives #GoodIdeas #DronesAreSoftware


RTKLIB Web Console by DROTEK: you'll forget command-line tools after using it!

We recently talked about the physical characteristics of our SMARTNAV-RTK GNSS, and we now want to introduce the features of the application installed on the device, which makes SMARTNAV-RTK plug-and-play for the average person and easily customizable for advanced users.

The RTKLIB Web Console was developed to make the use of RTKLIB more user-friendly. We’ve totally rethought the way RTKLIB is used, and your SMARTNAV-RTK GNSS can be managed exclusively from the Web Console. No need to remotely log in to your device to configure additional parameters or execute shell scripts; all you need is the Web Console.

Here is a short presentation of the available features:


The application architecture can be summarized as follows:

–       The front end uses the AngularJS framework

–       And the back end uses the NodeJS framework

The design is fully responsive, meaning that the application supports many devices.


This tab shows you the quality of the GNSS signals; it can help you find a good position for the base:

–       The RTK Satellites chart shows the SNR for satellites common to the base and rover

–       The BASE Satellites chart shows the signal-to-noise ratio for the base


The Map page is useful for monitoring the quality of the solution in real time. Choose your scale and visualize the deviation map, where you can read position variations over time.


From this page you can choose the mode you want to run (rover or base) and manipulate all the parameters available for RTKLIB.

Preloaded configuration files are accessible, but you can also create your own and save them.


Consult log files from the Web Console. 


Administration tools for SMARTNAV-RTK:

–       Control the status and manipulate the state of the RTKLIB service

–       Export log files without using any third-party application

–       Sync the system time with GPS time.

–       Update the system with the latest release of the SMARTNAV-RTK firmware.


For the moment, SMARTNAV-RTK is only available in the XL version, but the light version is in the works and will be available soon.

Due to user demand, we’ve updated our XL price list so that you can buy units one by one and without any antenna. Don’t forget to add a second unit for your first order 😉

The source code will be available at the end of the week; we are progressively migrating our work to GitHub, sorry for the delay.

If you have any questions, don’t hesitate 😉
