This is How Nepalis are Using Drones for Humanitarian & Environmental Projects

In September 2015, we were invited by our partner Kathmandu University to provide them and other key stakeholders with professional hands-on training to help them scale the positive impact of their humanitarian efforts following the devastating earthquakes. More specifically, our partners were looking to get trained on how to use aerial robotics solutions (drones) safely and effectively to support their disaster risk reduction and early recovery efforts. So we co-created Kathmandu Flying Labs to ensure the long-term sustainability of our capacity building efforts. Kathmandu Flying Labs is kindly hosted by our lead partner, Kathmandu University (KU). This is already well known. What is hardly known, however, is what happened after we left the country.

Our Flying Labs are local innovation labs used to transfer both relevant skills and appropriate robotics solutions sustainably to outstanding local partners who need them most. The co-creation of each Flying Lab includes both joint training and applied projects customized to meet the specific needs and priorities of our local partners. In Nepal, we provided both KU and Kathmandu Living Labs (KLL) with the professional hands-on training they requested. What’s more, thanks to our Technology Partner DJI, we were able to transfer 10 DJI Phantoms (aerial robotics solutions) to our Nepali partners (6 to KU and 4 to KLL). In addition, thanks to another Technology Partner, Pix4D, we provided both KU and KLL with free licenses for the Pix4D software and relevant training so they could easily process and analyze the imagery they captured using their DJI platforms. Finally, we carried out joint aerial surveys of Panga, one of the towns hardest hit by the 2015 earthquake. Joint projects are an integral element of our capacity building efforts: they reinforce the training and enable our local partners to create immediate added value using aerial robotics. This important phase of Kathmandu Flying Labs is already well documented.


What is less known, however, is what KU did with the technology and software after we left Nepal. Indeed, the results of this next phase of the Flying Labs process (during which we provide remote support as needed) have not been shared widely, until now. KU’s first order of business was to finish the joint project we had started with them in Panga. It turns out that our original aerial surveys there were incomplete, as denoted by the red circle below.

[Image: aerial map of Panga before completion, with the gap circled in red]

But because we had taken the time to train our partners and transfer both our skills and the robotics technologies, the outstanding team at KU’s School of Engineering returned to Panga to get the job done without needing any further assistance from us at WeRobotics. They filled the gap:

[Image: aerial map of Panga after KU filled the gap]

The KU team didn’t stop there. They carried out a detailed aerial survey of a nearby hospital to create the 3D model below (at the hospital’s request). They also created detailed 3D models of the university and a nearby temple that had been partially damaged by the 2015 earthquakes. Furthermore, they carried out additional disaster damage assessments in Manekharka and Sindhupalchowk, again entirely on their own.

 

Yesterday, KU kindly told us about their collaboration with the World Wildlife Fund (WWF). Together, they are conducting a study to determine the ecological flow of the Kaligandaki River, one of the largest rivers in Nepal. According to KU, the river’s ecosystem is particularly “complex as it includes aquatic invertebrates, flora, vertebrates, hydrology, geo-morphology, hydraulics, sociology-cultural and livelihood aspects.” The Associate Dean at KU’s School of Engineering wrote: “We are deploying both traditional and modern technology to get the information from ground including UAVs. In this case we are using the DJI Phantoms,” which “reduced largely our field investigation time. The results are interesting and promising.” I look forward to sharing these results in a future blog post.

[Image: the Kaligandaki River]

Lastly, KU’s Engineering Department has integrated the use of the robotics platforms directly into its courses, enabling Geomatics Engineering students to use the robots as part of their end-of-semester projects. In sum, KU has done truly outstanding work following our capacity building efforts and deserves extensive praise. (Alas, it seems that KLL has made little to no use of the aerial technologies or the software since our training 10 months ago.)

Several months after the training in Nepal, we were approached by a British company that needed aerial surveys of specific areas for a project the Nepal Government had contracted it to carry out. They wanted to hire us for this project. We proposed instead that they hire our partners at Kathmandu Flying Labs, who are more than capable of carrying out the surveys themselves. In other words, we actively drive business opportunities to Flying Labs partners. Helping to create local jobs and local businesses around robotics as a service is one of our key goals and the final phase of the Flying Labs framework.

So when we heard last week that USAID’s Global Development Lab was looking to hire a foreign company to carry out aerial surveys for a food security project in Nepal, we jumped on a call with USAID to let them know about the good work carried out by Kathmandu Flying Labs. We clearly communicated to our USAID colleagues that there are perfectly qualified Nepali pilots who can carry out the same aerial surveys. USAID’s Development Lab will be meeting with Kathmandu Flying Labs during their next visit in September.


On a related note, one of the participants whom we trained in September was hired soon after by Build Change to support the organization’s shelter programs by producing Digital Surface Models (DSMs) from aerial images captured using DJI platforms. More recently, we heard from another student who emailed us with the following: “I had an opportunity to participate in the Humanitarian UAV Training mission in Nepal. It’s because of this training I was able to learn how to fly drones and now I can conduct aerial survey on my own with any hardware. I would like to thank you and your team for the knowledge transfer sessions.”

This same student (who graduated from KU) added: “The workshop that your team did last time gave us the opportunity to learn how to fly and now we are handling some professional works along with major research. My question to you is ‘How can young graduates from developing countries like ours strengthen their capacity and keep up with their passion on working with technology like UAVs […]? The immediate concern for a graduate in Nepal is a simple job where he can make some money for him and prove to his family that he has done something in return for all the investments they have been doing upon him […]’.

[Image: KU campus sign]

This is one of several reasons why our approach at WeRobotics is not limited to scaling the positive impact of local humanitarian, development, environmental and public health projects. Our demand-driven Flying Labs model goes the extra (aeronautical) mile to deliberately create local jobs and businesses. Our Flying Labs partners want to make money off the skills and technologies they gain from WeRobotics. They want to take advantage of the new career opportunities afforded by these new AI-powered robotics solutions. And they want their efforts to be sustainable.

In Nepal, we are now interviewing the KU graduate who posed the question above because we’re looking to hire an outstanding and passionate Coordinator for Kathmandu Flying Labs. Indeed, there is much work to be done as we are returning to Nepal in coming months for three reasons: 1) Our local partners have asked us to provide them with the technology and training they need to carry out large scale mapping efforts using long-distance fixed-wing platforms; 2) A new local partner needs to create very high-resolution topographical maps of large priority areas for disaster risk reduction and planning efforts, which requires the use of a fixed-wing platform; 3) We need to meet with KU’s Business Incubation Center to explore partnership opportunities since we are keen to help incubate local businesses that offer robotics as a service in Nepal.


Tersus Rinex Converter V1.0 Released for Post-Processing

Tersus Rinex Converter V1.0 is now ready for release. RINEX is short for Receiver Independent Exchange Format, which is widely used for post-processing. This tool helps users convert their binary observation data into RINEX 3.2 format.

All Precis RTK boards (Precis-BX303, Precis-BX305 and Precis-BX316) support raw observation data logging. The Precis-BX303 has an onboard SD card slot, while the Precis-BX305 and Precis-BX316 support an external data logger. The logged binary observation data can be converted into RINEX 3.2 with Tersus Rinex Converter.
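
One quick way to verify a conversion is to read the format version from the output file’s first header line, which carries the version number in columns 1–9 and the label “RINEX VERSION / TYPE” in columns 61–80. A minimal Python sketch (the filename is just an example):

```python
# Read the RINEX format version from the first header line of a file.
def rinex_version(path):
    with open(path) as f:
        first = f.readline()
    # Columns 61-80 (0-indexed 60:80) must hold the header label.
    if first[60:80].strip() != "RINEX VERSION / TYPE":
        raise ValueError("missing RINEX version header")
    return float(first[:9])

print(rinex_version("converted_obs.obs"))  # expect 3.2 from this converter
```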

You can download Tersus Rinex Converter from our technical support site: www.tersus-gnss.com/pages/technical-support

If you have any technical inquiries, drop us a line at sales@tersus-gnss.com.


New Report: The Truth About Drones in Precision Agriculture

Over the past few years, the press has emphasized how much commercial drones will improve farming. The assumption is that drones provide more accurate data for use in variable rate technology (VRT), so farmers who use drones will see increased yields. But truth be told, very little has been written about the measurable benefits, and it has yet to be proven just how effective UAS will be in helping farmers increase yields. With that in mind, we just released The Truth About Drones in Precision Agriculture, a free research study that reviews those benefits and challenges.

The report is the first in a series of studies sponsored by BZ Media that looks objectively at each major market for drones and drone technology. In this paper, we look at how drones have been used as remote sensing devices in agriculture thus far, review competing and traditional approaches using incumbent technology, discuss the opportunities and challenges posed by the technology itself, outline the lessons learned, and discuss what’s next for drones in agriculture. Here is an excerpt:

“For the most part, recent technology advancements in small UAS equipped with good sensors support the farmer’s and/or researcher’s ability to locate a precise position in a field, observe it, and create maps of as many variables as can be measured — but only on a small scale. That’s because under current FAA rules, all observation and measurement would have to be done by a drone that is within visual line of sight (VLOS) of the operator. The problem is that fields and farms are big, bigger than VLOS. According to this report, there are approximately 2.1 million farms in America. The average size is 434 acres. Small family farms, averaging 231 acres, make up 88 percent of farms. That’s 1.85 million farms that could benefit immediately from VLOS operations. But large family farms (averaging 1,421 acres) and very large family farms (averaging 2,086 acres) make up 36 percent of the total farm acres in the U.S., so most of that would require beyond-VLOS operations.

Sure, operators could conduct many operations in a day by moving from section to section and stitching together larger maps for large or very large acre farms, but this is costly, both in terms of manpower and time. Even if it were cheaper, the market potential for drones in precision agriculture still needs more vetting. Despite ROI studies like this one by the American Farm Bureau Federation and Measure, it’s not yet clear how a sUAS can deliver more usable data to a farmer or provide a cost benefit over the existing manned aircraft or satellite image solutions available today.”
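
The quoted farm numbers are easy to sanity-check with a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the farm numbers quoted in the excerpt.
total_farms = 2_100_000  # approximate number of US farms
small_share = 0.88       # small family farms as a share of all farms

print(f"{total_farms * small_share:,.0f}")  # 1,848,000 -- the report's "1.85 million"
```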

The paper goes on to analyze six use cases for drones in agriculture in great detail, including using drones for crop vigor assessment and the use of prescription maps. You can find out more about the report and download it for free here.

If you have questions about what’s in the report, or would like to comment on it after reading it, please post below or write me at colin@droneanalyst.com.

Image credit: Colby AgTech


SparkFun Autonomous Vehicle Competition course preview

All you ArduRover fans out there, this is your chance to once again show your skills at AVC, which takes place September 17 this year. Here’s the new SparkFun announcement post:

This year, SparkFun’s Autonomous Vehicle Competition (AVC) will incorporate a few new twists. Along with our classic autonomous race course and Combat Bots, this year will feature the Power Racing Series (PRS) and a new autonomous PRS category. I’m here today to talk about the classic AVC race track.

To accommodate all of the attractions, we’ve split up our parking lot into smaller sections. So while the classic AVC course will be smaller than in previous years, it will almost certainly be more challenging. 

The track will be 10 feet wide, with hay bales along the sides. These are just hay; they’re not covered with anything. To start the race, each entrant gets 300 points, and one point will be deducted for every second that you’re navigating the course. Those deductions stop as soon as your vehicle crosses the finish line, and you can earn more points by tackling some of the obstacles along the way.

From the starting line, your vehicle will navigate a very nice and easy 120-foot straightaway to the first right turn, followed by another 35-foot straightaway to the second right turn. Following that turn, you’ll encounter a 58-foot section with four red barrel obstacles. You can dodge them or hit them (they may or may not be easily movable); it’s up to you, but you don’t get any extra points for navigating the barrels.


But that’s a long way around there, isn’t it? That’s gonna eat some time. So maybe you want to take the optional dirt section, huh? About 30 feet from the start line, there’s a right turn onto a 7-foot-wide section of track that’s going to be covered with dirt, maybe some rocks, skulls, etc. Definitely off-road in nature. Taking this section will shave off some time if your ‘bot can hang, as it will lead you to the end of the barrel section, avoiding them entirely. It will also land you 50 extra points.

Regardless of which of those two paths you choose, your ‘bot now sits at a four-way intersection. From the barrel-straight, the easy path is to your left (or straight from the dirt section) to another 58-foot straightaway. There will be a green hoop placed in this section, and going through the hoop will net you another 10 points. At the end of that section is a right turn onto a 67-foot straightaway with no other obstacles, followed by another right turn and another 58-foot straightaway. On this section, there will be a ramp (more of a jump) that will net you 10 points if you get over it.


But again, that’s a long way around and it’s going to eat your time. So if you want to save some time, instead of taking the left turn from the barrel-straight, you can go straight (or a right turn from the dirt section). This will lead down a straight that ends with the Discombobulator.

If you don’t remember this from last year, it’s a giant gas-powered turntable that’s specifically designed to lay waste to your navigation algorithms. Taking this path will relieve you from taking the three other sections, but it can send your ‘bot flying. And if you choose to jump the Discombobulator, beware: If you jump too far, you can end up in the “Ball Pit of Despair.” This is essentially a low-edge kiddie pool filled with those big plastic balls you see at fast food chain play areas – the ones that always smell sorta funny (hey, we were going to use acetone to begin with). Landing in the Ball Pit of Despair will end your run. If you make it past the Discombobulator, you’ll get 50 more points. But just to show you that we’re nice guys, we’ll give you 10 points just for getting up the Discombobulator ramp. Who loves ya? We do.


Assuming you successfully navigate the Discombobulator, hang a right turn (or just straight from the easy path) into the last “hard” section of track. You’ll first take a right turn, then a hairpin to the left, followed by another hairpin to the right. That leads you to the final, 25-foot path to the finish. Yay! You did it!
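
To keep the scoring straight, here is a toy Python tally of a run under the rules above. The point values come from the course description; the obstacle names are just our shorthand:

```python
# Toy tally of an AVC run's score: start with 300 points, lose one per
# second on course, and add bonuses for the obstacles described above.
BONUSES = {
    "dirt_section": 50,          # taking the optional off-road shortcut
    "hoop": 10,                  # driving through the green hoop
    "ramp": 10,                  # clearing the jump on the back straight
    "discombobulator_ramp": 10,  # getting up the Discombobulator ramp
    "discombobulator": 50,       # making it past the Discombobulator
}

def run_score(elapsed_seconds, completed):
    return 300 - elapsed_seconds + sum(BONUSES[o] for o in completed)

# Example: a 95-second run that took the dirt shortcut and hit the hoop.
print(run_score(95, ["dirt_section", "hoop"]))  # 300 - 95 + 50 + 10 = 265
```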

I also need to mention that there are three weight classes this year: lightweight (<10lbs), welterweight (10-25lbs) and heavyweight (>25lbs). The high-end weight restriction is 40lbs, so don’t come with anything heavier than that. Students and veterans will run in the same heats, and registration closes August 1. Teams will be required to submit verification of progress on August 15th and September 1st, so plan for that.

So that’s it! It all sounds so easy, doesn’t it? We’ll see about that, and we’ll see you on September 17th!


SMARTNAV L1 RTK field test scenario – Results

SMARTNAV L1 RTK field test results, as promised. Please have a look at this previous post before reading.

The reference trajectory will be displayed in dark blue, whereas SMARTNAV’s output will be displayed as:
– red for “single” solutions (when RTK is not available)
– yellow for “float” solutions
– green for “fix” solutions (maximum accuracy)

So first we will review all the points listed in the previous post.

  • Height differences

It is common knowledge that GNSS receivers tend to deliver poorer vertical than horizontal accuracy, because all visible satellites sit above the horizon and the vertical geometry is weak. We wanted to check whether RTK could improve this.

  • Bridges and tunnels

The idea behind this test was to monitor how the receiver would deal with successive losses and re-acquisitions of satellites.

  • Large roundabouts

  • Housing estate

A difficult environment, with the sky view masked by high buildings.

  • Urban canyon

The most difficult environment: narrow streets lined with high buildings.

  • Leafy streets

We wanted to know whether trees affect position stability or not.

 

  • Repeatability

Repeatability was difficult to measure with the car, because it is hard to follow exactly the same path several times in an urban area. So we ran another test: SMARTNAV was placed on a square table, computing real-time RTK solutions, with a baseline of about 10 meters. SMARTNAV was then moved around the table’s edge several times, always following the same path.

  • Accuracy/precision

Accuracy: position error below 20 cm.

  • Availability

NRTK (SMARTNAV) availability:
– Fix: 27%
– Float: 62.5%
– Single: 10.5%

L1/L2 availability:
– Fix: 50.6%
– Float: 46.1%
– Single: 3.2%

No difference was observed between post-processed SMARTNAV and real-time NRTK.
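
For readers reproducing this kind of analysis, these percentages are simply the share of logged epochs in each solution state. Here is a minimal Python sketch, assuming a per-epoch log of status strings (the strings themselves are an assumption, not SMARTNAV’s actual output format):

```python
# Compute fix/float/single availability from a per-epoch status log.
from collections import Counter

def availability(statuses):
    counts = Counter(statuses)
    total = len(statuses)
    return {s: round(100.0 * counts[s] / total, 1) for s in ("fix", "float", "single")}

epochs = ["fix", "float", "float", "single", "fix", "float", "float", "fix"]
print(availability(epochs))  # {'fix': 37.5, 'float': 50.0, 'single': 12.5}
```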


Rescue Robotics Indiegogo Launch

X-Craft’s Indiegogo campaign for the next step in its development has launched.

www.indiegogo.com/projects/rescue-robotics-join-the-revolution-save-lives#/

Rescue Robotics has some big goals, but it has the right team of people behind it, who are looking forward to getting the system working out in the field.

X-Craft has been developing and operating UAVs for a few years now and is committed to using robotic systems for the betterment of the human condition. The company holds a mandate that allows it to operate on a not-for-profit basis for humanitarian and environmental projects; in line with this policy, it specializes in the emergency services sector.

www.x-craft.co.nz


SMARTNAV L1 RTK field test scenario


Drotek has taken L1 RTK devices into the field under difficult conditions to see how they really perform.

 

Drotek, based in Toulouse (France), has had the opportunity to test SMARTNAV L1 RTK against laboratory-grade devices. The test was run in close collaboration with GUIDE (GNSS Usage Innovation and Development of Excellence), a testing laboratory for satellite geolocation (http://www.guide-gnss.net/).

 


 

It is now quite clear that L1 RTK performs really well in open-sky, “easy” environments. But we wanted to test its real performance in more difficult ones.

 

GUIDE is equipped with a GBOX, a “black box” containing an aeronautical-grade Inertial Measurement Unit and a multi-constellation L1/L2 AsterX receiver. The unit constantly logs GNSS L1/L2 raw data plus inertial measurements; the two datasets are then merged to output a precise reference trajectory.

 


 


 

The antenna is placed on the roof of a car.

 


We set up two devices on the car (in addition to the GBOX):

 

  • one standalone SMARTNAV (with its own antenna), logging raw data for post-processing

  • one SMARTNAV connected to the car’s antenna splitter, processing real-time NRTK (VRS) with Teria network corrections

 


The environments we wanted to test were the following:

 

  • height differences (>10%)


 

  • bridges and tunnels

 


 

  • large roundabouts


 

  • housing estate

 


 

  • urban canyon


 

  • leafy streets


With this test, we will try to answer the following three questions:

 

  • Repeatability: is the solution precise in an absolute or a relative sense? Can we reproduce it over time?

  • Precision/accuracy: how precise and accurate is the solution? (A sketch of how this can be scored against the reference trajectory appears below.)

  • Availability: is L1 as reliable and available as L1/L2?
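
On the precision/accuracy question, a common approach is to convert each test fix and its time-matched reference fix into a local metric offset and measure the horizontal distance between them. A minimal Python sketch, assuming the two trajectories are already time-aligned (a small-area equirectangular approximation is fine over a car-scale test zone, though it breaks down over long distances):

```python
# Horizontal error of a test fix against the reference trajectory.
import math

R = 6_371_000.0  # mean Earth radius in metres

def horizontal_error(ref, test):
    """ref and test are (latitude, longitude) pairs in degrees."""
    lat0 = math.radians(ref[0])
    dlat = math.radians(test[0] - ref[0])
    dlon = math.radians(test[1] - ref[1]) * math.cos(lat0)
    return R * math.hypot(dlat, dlon)

# Example epoch: a fix roughly 15 cm north of the reference position.
print(horizontal_error((43.6045, 1.4440), (43.6045 + 1.35e-6, 1.4440)))
```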

Results will be posted this week!

DROTEK



Biomechanical simulation of bats and Divine formula (Definitive edition)

(These analyses still have room for improvement, which will be resolved as computers evolve. I also heard a joke from Tesla Motors that a Model S Easter egg turns the car into a submarine. So, in the next step, I’ll show biomechanical fish fins that can replace conventional screw propellers, which would be useful for unmanned underwater vehicles.

I’ll also study structural design from the American railway locomotive industry before any physical implementation.)

Details :

A lot of people have already been surprised by this wing mechanism. I observed their reactions and came to understand its importance. In the academic field, it was seen as a crazy discovery.

Now I will tell a little of the truth behind these ideas, though it may be hard to imagine.

Vortices inherently create drag. With the cardioid-curve movement, our formula (mentioned later) generates a symmetric vortex street pattern, unlike the Kármán vortex street, and an animal-like machine can recapture some of this energy and use it to improve speed and maneuverability.

This wing mechanism draws on several biomechanical techniques from Wing Chun Kung Fu. (This martial art is said to have been invented by observing the movements of a white crane fighting a snake.)

1) Bong Sao (A.K.A. Wing Arm)
2) Tan Sao (Dispersing Arm)
3) Gan Sao (Cultivating Arm)
4) Luk Dim Boon Gwun (Long Pole)

I am learning the above arts through a European-style exercise method called “Wing Tjun” and Donnie Yen’s Ip Man movie series.


Likewise, the fundamental geometric formula of voluntary movement draws on the Eastern philosophy found in Tai Chi Chuan.

1) Yin and Yang (Dark—Bright)
2) Bagua (Eight trigrams)

I had the opportunity to learn the above internal arts in Switzerland from Tai Chi master Cornelia Gruber, through a German-style board game that she made. She studied Tai Chi Chuan under grandmaster Bow Sim Mark (Donnie Yen’s mother) in Boston.

I research the voluntary movement of animals using motion pictures, a method begun by the physiologist Étienne-Jules Marey in 19th-century France. He captured the world’s first motion studies of birds while inventing the chronophotographic gun, and his observational work fed into the development of aerial vehicles through Étienne Œhmichen’s quadcopters and Louis Charles Breguet’s gyroplanes. (Muybridge took motion pictures of horse locomotion; his work is better known, despite being made for dullsville gambling. lol)

 

I also study European horology; in this way, I can discuss the movement of machine animals from the perspective of astronomy and space physics. This approach leads to the following studies: self-organization, dissipative structure theory, fluid dynamics (such as turbulence and the tourbillon), discrete optimization, and collective intelligence, all of which are closely related to the development of artificial intelligence.

Jeff Bezos, the Amazon CEO who hopes to put drones to practical business use, has a project to create an astronomical clock called the Clock of the Long Now, also known as the 10,000-year clock. Someone from his project noticed immediately after I published a mathematical formula of voluntary movement on the web, and they presented it surreptitiously on the Long Now blog. I contacted them, but they did not reply. Years earlier, after Amazon opened in France, I used their delivery service to obtain valuable books needed to start my project. It became a big opportunity, and I later received a small gift and a letter from Jeff Bezos in gratitude for my understanding of his new business.

From 2012 to 2015, all activities related to my R&D project were permitted in France under the “Competence and Talent” long-term residence permit. Having made the first invention, I moved naturally toward the National Library of France, where, in the underground facility for researchers, I found a microfilm of an essay written by Abraham-Louis Breguet that proved to be the key to clarifying the mystery of my inventions.

http://goo.gl/Bsu5L3

The following year, in the Latin Quarter, on the day after the first formula was completed, I found a monograph by E-J Marey that demonstrates the structural adequacy of the formula.

Because my main job is as a producer, I work diplomatically to encourage legal reforms that advance the drone and robotics business.

I currently have a hub in the USA. In addition, I have received an invitation from a French diplomatic organization to talk about the future of the industry.

It is noteworthy that France has been involved in two international movies it hopes will become stepping stones to the next robotics and drone business: the “Despicable Me” series (an American-French co-production) and “Hugo” (a British-American-French co-production).

I appealed directly about these ideas to the chief of the Immigration Department ten years ago, so I know the executive producers and directors of these movies.

However, France tried to start a war in the Middle East in the aftermath of the conflict between the Paris media and terrorism. The USA is now in the middle of a presidential election, so I should not interfere. EU industry has been thrown into chaos by the diesel exhaust fraud. And finally, the United Kingdom has exited the EU!


iOSMavlink – an iOS-based ArduCopter ground control app

Hi!

I have created an iOS app for use with ArduCopter flight controllers!

By bridging a 3DR radio with a Bluetooth Low Energy (BLE) module and a battery, it is possible to receive telemetry data from MAVLink devices.

Using this Bluetooth method, I have created a ground station app similar to Tower/DroidPlanner.

Currently you need to DIY your own Bluetooth module, but if enough people show interest, I would be able to sell complete kits. You can find instructions on how to build it at https://github.com/tommy-b-10/iOSMavlink
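
For anyone who wants to sanity-check the bridge from a laptop before pointing the app at it, here is a minimal pymavlink sketch (desktop Python, not the app’s own code). It assumes the BLE bridge shows up as a serial port; the port name and baud rate are only examples:

```python
# Read MAVLink telemetry from a serial bridge with pymavlink.
from pymavlink import mavutil

conn = mavutil.mavlink_connection("/dev/tty.usbserial", baud=57600)
conn.wait_heartbeat()  # block until the first HEARTBEAT from the vehicle
print("heartbeat from system", conn.target_system)

for _ in range(10):
    # ATTITUDE and VFR_HUD carry the pitch/roll/yaw and speed data
    # shown in the app's VFR view.
    msg = conn.recv_match(type=["ATTITUDE", "VFR_HUD"], blocking=True)
    print(msg.get_type(), msg.to_dict())
```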

Also, the app is not yet on the App Store, but I do plan to publish it once this sparks some interest, as the developer license costs $100.

Anyway, here is a screenshot:

Current feature list is:

  • Waypoint adding/removing
  • Live map view of drone + heading
  • Distance from home
  • VFR (pitch, roll, yaw, vertical speed, roll speed)
  • Arm/Disarm
  • Mode display
  • Battery level
  • Get and edit all parameters
  • Arrow pointing to where the drone is
  • Logging of flights
  • Light/Dark themes

Feel free to suggest features/changes as it is still a work in progress.

Hope you get some use from this! If you need any help at all, please contact me either here, or at tbrereton9@gmail.com


Google creates its own laws of robotics

From Fast Company:

In his famous Robot series of stories and novels, Isaac Asimov created the fictional Laws of Robotics, which read:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although the laws are fictional, they have become extremely influential among roboticists trying to program robots to act ethically in the human world.

Now, Google has come along with its own set of, if not laws, then guidelines on how robots should act. In a new paper called “Concrete Problems in AI Safety,” Google Brain—Google’s deep learning AI division—lays out five problems that need to be solved if robots are going to be a day-to-day help to mankind, and gives suggestions on how to solve them. And it does so all through the lens of an imaginary cleaning robot.

ROBOTS SHOULD NOT MAKE THINGS WORSE

Let’s say, in the course of his robotic duties, your cleaning robot is tasked with moving a box from one side of the room to another. He picks up the box with his claw, then scoots in a straight line across the room, smashing over a priceless vase in the process. Sure, the robot moved the box, so it’s technically accomplished its task . . . but you’d be hard-pressed to say this was the desired outcome.

A more deadly example might be a self-driving car that opted to take a shortcut through the food court of a shopping mall instead of going around. In both cases, the robot performed its task, but with extremely negative side effects. The point? Robots need to be programmed to care about more than just succeeding in their main tasks.

In the paper, Google Brain suggests that robots be programmed to understand broad categories of side effects, which will be similar across many families of robots. “For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very different, like a factory control robot, will likely want to avoid knocking over very similar objects,” the researchers write.

In addition, Google Brain says that robots shouldn’t be programmed to obsess single-mindedly over one thing, like moving a box. Instead, their AIs should be designed with a dynamic reward system, so that cleaning a room (for example) is worth just as many “points” as not messing it up further by, say, smashing a vase.
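
As a toy illustration of that idea (our own sketch, not the paper’s formulation), a reward that balances task progress against side effects might look like this:

```python
# Toy reward balancing task progress against side effects: tidying earns
# points, while disturbing the rest of the room costs them.
def reward(objects_tidied, objects_broken, impact_weight=1.0):
    return objects_tidied - impact_weight * objects_broken

print(reward(objects_tidied=5, objects_broken=0))  # 5.0: slower but careful
print(reward(objects_tidied=6, objects_broken=1))  # 5.0: faster, but it cost a vase
```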

ROBOTS SHOULDN’T CHEAT

The problem with “rewarding” an AI for work is that, like a human, it might be tempted to cheat. Take our cleaning robot again, which is tasked with straightening up the living room. It might earn a certain number of points for every object it puts in its place, which, in turn, might incentivize the robot to start creating messes to clean, say, by putting items away in as destructive a manner as possible.

This is extremely common in robots, Google warns, so much so that it says this so-called reward hacking may be a “deep and general problem” for AIs. One possible solution is to base rewards on anticipated future states instead of just what is happening now. For example, if you have a robot that constantly destroys the living room to rack up cleaning points, you might instead reward it on the likelihood of the room being clean a few hours from now if it continues what it is doing.
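
Here is a toy contrast between the two reward designs (all names are illustrative, not from the paper):

```python
# A gameable per-action reward versus one scored on the room's
# anticipated future state.
def per_action_reward(actions):
    # Pays for every tidy action, so making messes to clean pays even more.
    return sum(1 for a in actions if a == "tidy")

def future_state_reward(objects_out_of_place, actions):
    # Pays only for how clean the room is expected to end up.
    for a in actions:
        if a == "tidy":
            objects_out_of_place = max(0, objects_out_of_place - 1)
        elif a == "smash":
            objects_out_of_place += 1
    return -objects_out_of_place

honest = ["tidy", "tidy"]
hacker = ["smash", "tidy"] * 3
print(per_action_reward(honest), per_action_reward(hacker))            # 2 3: hacking pays
print(future_state_reward(2, honest), future_state_reward(2, hacker))  # 0 -2: it no longer does
```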

ROBOTS SHOULD LOOK TO HUMANS AS MENTORS

Our robot is now cleaning the living room without destroying anything. But even so, the way the robot cleans might not be up to its owner’s standards. Some people are Marie Kondos, while others are Oscar the Grouches. How do you program a robot to learn the right way to clean the room to its owner’s specifications, without a human holding its hand each time?

Google Brain thinks the answer to this problem is something called “semi-supervised reinforcement learning.” It would work something like this: after a human enters the room, the robot would ask whether the room was clean. Its reward would only trigger when the human seemed happy that the room was to their satisfaction. If not, the robot might ask the human to tidy up the room while it watched what the human did.

Over time, the robot will not only be able to learn what its specific master means by “clean,” it will figure out relatively simple ways of ensuring the job gets done—for example, learning that dirt on the floor means a room is messy, even if every object is neatly arranged, or that a forgotten candy wrapper stacked on a shelf is still pretty slobby.

ROBOTS SHOULD ONLY PLAY WHERE IT’S SAFE

All robots need to be able to explore outside of their preprogrammed parameters in order to learn. But exploring is dangerous. For example, a cleaning robot that has realized a muddy floor means a messy room should probably try mopping it up. But that doesn’t mean that, if it notices dirt around an electrical socket, it should start spraying the socket with Windex.

There are a number of possible approaches to this problem, Google Brain says. One is a variation of supervised reinforcement learning, in which a robot only explores new behaviors in the presence of a human, who can stop the robot if it tries anything stupid. Setting up a play area for robots where they can safely learn is another option. For example, a cleaning robot might be told it can safely try anything when tidying the living room, but not the kitchen.

ROBOTS SHOULD KNOW THEY’RE STUPID

As Socrates once said, a wise man knows that he knows nothing. That holds doubly true for robots, who need to be programmed to recognize both their own limitations and their own ignorance. The penalty is disaster.

For example, “in the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office,” the researchers write. “Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results.” All that said, a robot can’t be totally paralyzed every time it doesn’t understand what’s happening. A robot can always ask a human when it encounters something unexpected, but that presumes it even knows what questions to ask, and that the decision it needs to make can be delayed.

Which is why this seems to be the trickiest problem to teach robots to solve. Programming artificial intelligence is one thing. But programming robots to be intelligent about their idiocy is another thing entirely.
