
Sensing

August 3, 2017

A new machine-learning system can automatically retouch images in the style of a professional photographer. The system is so energy-efficient that it can run on a cellphone, and so fast that it can display retouched images in real time, letting the photographer see the final version of the image while still framing the shot.

New simulation methods enable easier, faster design of elastic materials for robots and other dynamic objects.

Folding robots based on origami have emerged as an exciting new frontier of robotic design, but they generally require onboard batteries or a wired connection to a power source, limiting their functionality. Scientists have now created battery-free folding robots capable of complex, repeatable movements, powered and controlled through a wireless magnetic field.

Update: The response to Tertill’s crowdfunding campaign has amazed and delighted us! Pledges totalling over $250,000 have come from 1000+ backers. We’re shipping to all countries, with over a fifth of Tertill’s supporters coming from outside the United States. But the end is near; Tuesday (11 July) is the last full day of the campaign. After that Tertill’s discounted campaign price will no longer be available and delivery in time for next year’s (northern hemisphere) growing season cannot be assured.

Franklin Robotics has launched a Kickstarter campaign for Tertill, their solar-powered, garden-weeding robot.

Crops are key to sustainable food production, and we face several challenges in crop production. First, we need to feed a growing world population. Second, our society demands high-quality food. Third, we have to reduce the amount of agrochemicals that we apply to our fields, as they directly affect our ecosystem. Precision farming techniques offer great potential to address these challenges, but we have to acquire the relevant information about the field status and provide it to farmers so that specific actions can be taken.

This paper won the IEEE Robotics & Automation Best Automation Paper Award at ICRA 2017.

Why are spiders’ webs so complex? Might they have other functionalities besides being a simple trap? One of the most interesting answers to this question is that spiders might use their webs as computational devices.

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington

 

In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

June 9, 2017
Credit: sk.ru


In this episode, Audrow Nash and Christina Brester conduct interviews at the 2016 International Association of Science Parks and Areas of Innovation conference in Moscow, Russia. They speak with Vadim Kotenev of Rehabot and Motorica about prosthetic hands and rehabilitative devices, and with Vagan Martirosyan, CEO of TryFit, a company that uses robotic sensors to help people find shoes that fit them well.

Would you like to make a robot grasp something, but think it's impossible because you can't buy a robot arm? I'm here to tell you that you can definitely achieve this without buying a real robot. Let's see how:

I was recently asked about the differences between RADAR and LIDAR. I gave the generic answer: LIDAR has higher resolution and accuracy than RADAR, while RADAR has a longer range and performs better in dust and smoky conditions. When asked why RADAR is less accurate and lower resolution, I sort of mumbled through a response about the wavelength. Since I did not have a good answer then, this post is my better response.
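The wavelength point can be made concrete with the Rayleigh criterion, which ties a sensor's diffraction-limited angular resolution to its wavelength and aperture size. A quick sketch, using assumed typical values (a 77 GHz automotive RADAR with a 10 cm antenna, a 905 nm LIDAR with a 2.5 cm aperture) rather than figures from any particular sensor:

```python
import math

C = 3.0e8  # speed of light, m/s

def angular_resolution(wavelength_m, aperture_m):
    """Rayleigh criterion: smallest resolvable angle, in radians."""
    return 1.22 * wavelength_m / aperture_m

# Assumed values, for illustration only.
radar_wavelength = C / 77e9  # 77 GHz -> ~3.9 mm
radar_theta = angular_resolution(radar_wavelength, 0.10)   # 10 cm antenna
lidar_theta = angular_resolution(905e-9, 0.025)            # 2.5 cm aperture

print(f"RADAR: {math.degrees(radar_theta):.3f} deg")   # roughly a few degrees
print(f"LIDAR: {math.degrees(lidar_theta):.6f} deg")   # thousandths of a degree
```

Because the RADAR wavelength is thousands of times longer than the LIDAR wavelength, its beam cannot be focused nearly as tightly for a comparably sized aperture, which is the root of the resolution gap.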

As the last in our series of blog posts on machine learning in research, we spoke to Dr Nathan Griffiths to find out more about machine learning in transport. Nathan is a Reader in the Department of Computer Science at the University of Warwick, whose research into the application of machine learning for autonomous vehicles (or “driverless cars”) has been supported by a Royal Society University Research Fellowship.

If you have always wanted to know how a SICK LIDAR worked inside, now is your chance. If you did not always want to know, this is still your chance to see. Keep reading for some cool pictures.

March 15, 2017

If you take humans out of the driving seat, could traffic jams, accidents and high fuel bills become a thing of the past? As cars become more automated and connected, attention is turning to how to best choreograph the interaction between the tens or hundreds of automated vehicles that will one day share the same segment of Europe’s road network.

March 13, 2017

Automated cars are hurtling towards us at breakneck speed, with all-electric Teslas already running limited autopilot systems on roads worldwide and Google trialling its own autonomous pod cars. However, before we can reply to emails while being driven to work, we need a foolproof way to determine when drivers can safely take control and when it should be left to the car.

March 9, 2017

This comprehensive tutorial offers a step-by-step guide to using UgCS software to plan and fly UAV drone-survey missions.





