

May 31, 2013

In this episode, we speak with Ramon Pericet and Michal Dobrzynski from EPFL about their Curved Artificial Compound Eye (CurvACE), published in the Proceedings of the National Academy of Sciences. Inspired by the fly’s vision system, their sensor enables a wide range of applications that require motion detection in a small plug-and-play device. As shown in the video below, these sensors could be used to control small robots navigating an environment, even in the dark, or to equip a small autonomous flying robot with limited payload. Other applications include home automation, surveillance, medical instruments, prosthetic devices, and smart clothing.

November 22, 2008

This is a subject for research and development, of course, but it’s my ‘job’ to make this vision as accessible as I can, both to anticipate what that R&D might produce and to describe it in plain language.

 

First, these machines will necessarily have sensory components. Digital cameras and microphones are practically a given, but they may also have infrared imaging, radar and/or laser scanning, chemical sensors to provide something akin to a sense of smell, pressure/stress sensors for a sense of touch, probes for soil moisture, temperature, pH, O2 content, and nutrient availability, weather instruments, and some means of locating themselves very precisely relative to the boundaries of a field or other stationary reference. They will have a rich collection of information about their environments available to them, rich compared not only with what most machines get but even with what human senses provide.
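To make that inventory concrete, here is a hypothetical configuration sketch in Python; every field name and default below is illustrative, not a reference to any real hardware.

    from dataclasses import dataclass, field

    @dataclass
    class SensorSuite:
        # All values here are assumptions, for illustration only.
        cameras: int = 2
        microphones: int = 2
        infrared_imaging: bool = True
        radar: bool = False
        laser_scanner: bool = True
        chemical_sensors: bool = True   # a rough analogue of smell
        pressure_sensors: bool = True   # a rough analogue of touch
        soil_probes: list = field(default_factory=lambda: [
            "moisture", "temperature", "pH", "O2", "nutrients"])
        weather_instruments: bool = True
        positioning: str = "precise, relative to field boundaries"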

 

Next, they will have significant computer processing power: enough to take the data streams from all of these sensory devices, find patterns in them, compare them with each other and with historical data (including the exact position of every seed and when it was planted), create and update a real-time 3-dimensional model of their immediate surroundings, locate items of interest within that model, choose a course of action, and send detailed instructions to the machine’s moving parts, closely monitoring their progress.
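That processing pipeline amounts to a sense-model-act loop. Here is a minimal sketch of it in Python, assuming hypothetical sensor and actuator interfaces; a real system would be built on robotics middleware rather than a bare loop like this.

    from dataclasses import dataclass, field

    @dataclass
    class WorldModel:
        """Real-time 3-D model of the machine's immediate surroundings."""
        items_of_interest: list = field(default_factory=list)

        def update(self, sensor_frames, history):
            """Fuse current sensor frames with historical data, e.g. the
            recorded position and planting date of every seed."""

    def choose_action(model):
        # Pick the next item of interest to act on, if any.
        return model.items_of_interest[0] if model.items_of_interest else None

    def control_cycle(sensors, history, actuators):
        model = WorldModel()
        while True:
            frames = [s.read() for s in sensors]  # gather all data streams
            model.update(frames, history)         # refresh the 3-D model
            action = choose_action(model)         # choose a course of action
            if action is not None:
                actuators.execute(action)         # send detailed instructions
                actuators.monitor(action)         # closely monitor progress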

 

Finally, they will have various moving parts, likely including high-resolution or specialized sensory components that can be sent in for a closer look. Those moving parts might include a range of grips, from fine tweezers to something strong enough to uproot small trees, mechanical snips, lasers with enough power to fry a meristem, high-pressure water jets capable of slicing through the stem of a plant, fingers to move other plant material out of the way, a vacuum for sampling air at ground level or removing insects, sprinklers and sprayers, trowels of various sizes, and, of course, the soil probes mentioned earlier. Such tools might be combined into interchangeable units that plug onto the ends of articulated arms and can be switched out quickly.

 

That’s a basic outline, but to fill out the picture we need to return to the data-processing hardware and the code it runs, since these can make the difference between an expensive toy and a productive machine that more than pays for itself. A major task the processor must perform is resource scheduling. To do that effectively, it must sort actions into four classes: those that can be performed without moving anything massive (slow) and without switching out tool units; those that require either movement or a tool switch but must nevertheless be accomplished before moving on; those that can be left until a future pass over the same area, but not indefinitely; and those that can be left undone unless it becomes convenient to do them. Efficient scheduling also means mapping the movement of even the smallest parts so they proceed smoothly from one task to the next, without retracing their paths more than necessary.
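A sketch of that four-way sort in Python might look like the following; the class names and action attributes are invented for illustration, and the path-smoothing half of the problem is left out.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Priority(Enum):
        DO_IN_PLACE = auto()       # no heavy movement, no tool switch
        BEFORE_MOVING_ON = auto()  # needs movement or a tool switch; can't wait
        FUTURE_PASS = auto()       # defer to a later pass, but not indefinitely
        IF_CONVENIENT = auto()     # leave undone unless convenient

    @dataclass
    class Action:
        needs_movement: bool
        needs_tool_switch: bool
        must_finish_here: bool = False
        max_deferrals: int | None = None  # passes it may wait; None = any number

    def classify(action: Action) -> Priority:
        if not (action.needs_movement or action.needs_tool_switch):
            return Priority.DO_IN_PLACE
        if action.must_finish_here:
            return Priority.BEFORE_MOVING_ON
        if action.max_deferrals is not None:
            return Priority.FUTURE_PASS
        return Priority.IF_CONVENIENT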

 

An important point to be taken away from the previous paragraph is that scrimping on computing hardware and software is likely to prove counterproductive, by reducing the overall capacity of the machine disproportionately. We should expect the computing components to represent a substantial fraction of the overall cost of the machine, and we shouldn’t be surprised if they also consume a substantial fraction of its energy budget. Better to invest an extra 10-20% to make a given physical machine capable of performing the work of two, and to invest 1 or 2 kilowatt-hours to save ten.
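As a back-of-envelope illustration of that trade-off (all figures below are assumed, not taken from any real costing):

    base_cost = 100_000               # machine with minimal computing ($)
    upgraded_cost = base_cost * 1.15  # +15% spent on processors and software

    # If the upgrade lets one machine do the work of two, the cost per
    # machine-equivalent of work drops by well over a third:
    print(base_cost / 1)      # 100000.0 before
    print(upgraded_cost / 2)  #  57500.0 after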

 

Something that should be apparent from this mental exercise as a whole is that what’s being proposed is largely a simple extrapolation of technologies that already exist. There are already mechanical arms and mechanical grips; there are already sensors and various means of controlling machine operation. What’s mainly missing is the software that would turn data streams into a 3D model in a horticultural context, choose what to do, schedule resources, and map out the details. That’s a lot left to do, requiring a significant investment for a long-term payoff, but it’s a fairly straightforward problem, and one divisible into more manageable chunks. Let’s get to it!

 

Reposted from Cultibotics.

September 17, 2007

Machines can work continuously, 24/7. Doing so would require enough power to last through the night and either artificial lighting or night vision, and some operations are probably best left for daylight, but they needn’t stop working when the sun goes down. This means a single machine can manage a greater area than it could if it only operated during the day. It’s also useful in limiting damage by deer, which usually come around at night.

 

Machines can make use of senses we don’t possess, or senses more acute than those we do. Their vision can extend into the infrared and ultraviolet, divide the visible spectrum more finely, and be more detailed and quicker (tracking faster motion), or track changes more accurately over a period of days or weeks. Their hearing can be far sharper than our own. They can be equipped with chemical sensitivity capable of distinguishing between substances we would lump together under broad categories like sweet or acrid. They can also be equipped with radar and sonar, laser ranging and scanning, and accurate measures of temperature, humidity, and insolation, and their manipulators can be made to gauge and control pressure more accurately than our own fingertips do. In short, machines can have far better data available to them than would an unassisted human gardener in the same position.

 

Machines can also correlate information very quickly, drawing on recorded data and expert systems to make decisions, and applying heuristics to experience to refine those expert systems. A machine might reasonably be expected to identify every plant within the area it tends down to the species level, to know whether each is considered a crop, benign, a weed, or threatened or endangered, and to treat it accordingly. It might be expected to predict, to an accuracy of a few days, when it could harvest a particular crop, and to estimate the quantity to within a few percent, barring a calamity such as hail or a tornado. It might also be expected to adapt a cropping plan to market conditions, for example putting in more of some crop that hadn’t done well elsewhere and would therefore be in demand.
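The identify-and-treat step could be as simple as a lookup once identification is done. Here is a toy sketch in Python; the species names, categories, and treatments are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Plant:
        species: str
        position: tuple  # (x, y) within the tended area

    SPECIES_DB = {  # hypothetical entries
        "Zea mays": "crop",
        "Cirsium arvense": "weed",
    }

    TREATMENT = {
        "crop": "tend",           # water, feed, track harvest readiness
        "benign": "ignore",
        "weed": "remove",
        "endangered": "protect",  # flag the position; avoid disturbing
    }

    def treat(plant: Plant) -> str:
        category = SPECIES_DB.get(plant.species, "benign")  # unknown: leave be
        return TREATMENT[category]

    print(treat(Plant("Cirsium arvense", (4.2, 7.9))))  # -> "remove"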

 

Machines can whisper to each other, via radio links, over distances far greater than a human shout will carry. They can coordinate their activities precisely, cooperating toward a common goal without so much as a hiccup.

 

Machines can, as has recently been demonstrated by DARPA’s autonomous vehicle competitions, operate in an uncontrolled environment.

 

The foregoing is intended as a glimpse of how it might work once development is far along. It presumes a mature technology, some pieces of which aren’t yet available or are only just beginning to be.

 

Reposted from Cultibotics.