Dear Robohub readers: We need your help. Robohub is growing. Robohub is now a community with over 70 contributors and more than 30,000 unique visitors each month. In order for us to continue covering the latest in robotics research, bring in-depth coverage of conferences worldwide, and showcase interviews with leading roboticists, we need your support.
Your donation will go towards expanding our coverage, paying the salaries of our dedicated staff, and maintaining the website.
Keep Robohub alive for another year by donating to our campaign today. Thank you!
If we try to describe any kind of robot with a mathematical model, the result is a set of very complicated equations. Even when such robotic systems are categorized as conventional dynamic systems, consisting of a positive-definite inertia matrix, a Coriolis force term, a friction term, a gravity term and so on, we cannot derive any authentic control scheme from such complicated nonlinear systems.
Of course, we can build a servo system for each joint, but we do not have any systematic control scheme for the robot as a whole.
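The "very complicated equations" referred to above usually take the standard rigid-body form. As a sketch of the structure the paragraph lists (using conventional symbols, not those of any specific paper):

```latex
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + F(\dot{q}) + g(q) = \tau
```

Here $M(q)$ is the positive-definite inertia matrix, $C(q,\dot{q})\dot{q}$ the Coriolis and centrifugal term, $F(\dot{q})$ the friction term, $g(q)$ the gravity term, and $\tau$ the vector of joint torques. Every term except $\tau$ depends nonlinearly on the joint configuration $q$, which is why no general-purpose control scheme falls out of the model.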
“It has increasingly been realized that some of the key characteristics underlying real-world complex dynamical systems (such as economic, financial and ecological systems) can only be modelled, and thus understood and predicted, directly at a qualitative level.”
Animals and humans exploit compliance: their muscular systems and soft skin. Compliance plays an important role in producing intelligent behavior, so soft materials are crucial for building an intelligent artificial agent. In this lecture, I will talk about several robots made of soft materials for adaptive behavior, viewed through three design principles: cheap design, material properties and redundancy.
Two images remain in my mind from IROS 2013 last week in Tokyo: the respect for Professor Emeritus Mori and his charting of the uncanny valley in relation to robotics, and the need for a Watson-type synthesis of all the robotics-related scientific papers produced every year.
Let me explain.
Almost all of the presentations at IROS were abstract and technical, except for the discussion of Prof. Mori's Uncanny Valley theory. First of all, he was there in person and described how he came to observe the uncanny valley under different situations and circumstances. Secondly, all of the presenters and the audience were respectful of Prof. Mori's work, his theory, and him as a person. Third, and most interesting to me, each of the other speakers in this special lecture session described how the uncanny valley theory is relevant in different settings and disciplines: in art, philosophy and psychology; in the works of David Hanson and Hiroshi Ishiguro (both of whom were there); as well as in medicine, prosthetics and robotics in general. To me it was a reminder that robotics crosses the sciences and connects with humans in many different forms. This tribute presentation brought those personal relationships, and the breadth of the theory's reach, to the forefront, away from the abstract, theoretical and mechanical side of IROS.
In this video by IEEE/Spectrum, filmed outside the door of the room where the session was held, one can clearly see the multi-science and psychological/philosophical aspects of the theory:
Ever since I learned of the IBM Watson Jeopardy project, I have been fascinated by the possibilities for practical applications. IBM is on that trail as well, using Watson to help with medical diagnoses and with legal research and briefing. My idea is to get the NSF and the IEEE (and other organizations) to commission a Watson project to synthesize robotics- and AI-related science papers into a meaningful resource for all to use. At present, so many papers are published that a researcher cannot possibly read them all; consequently, we don't even know what we already know. But with Watson we could know, and we could redirect research truly into the unknown without reinventing things over and over.
How can science fiction help prototype emerging scientific theory and experimentation? Expanding on the framework of consumer experience architecture, this talk explores how a fictional story, based specifically on current works of scientific research, can lead to the expansion of, and further experimentation with, a dramatically new approach to artificial intelligence and domestic robots.
Animal locomotion control is in large part based on central pattern generators (CPGs), which are neural networks capable of producing complex rhythmic patterns while being activated and modulated by relatively simple control signals. In vertebrates, these networks are located in the spinal cord. In this talk, I will present how mathematical models and robots can be used as tools to get a better understanding of the functioning of these circuits. In particular, I will present how we model the CPGs of lower vertebrates (lamprey and salamander) using systems of coupled oscillators, and how we test the CPG models on board amphibious robots, such as a new salamander-like robot capable of swimming and walking. I will also show how the concept of CPGs implemented as coupled oscillators can be a useful control paradigm for various types of articulated robots, from snake robots to humanoid robots.
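To illustrate the coupled-oscillator idea, here is a minimal sketch; it is not the actual lamprey/salamander model, and the chain topology, coupling law and all parameter values are assumptions chosen for illustration. Each oscillator runs at a shared intrinsic frequency and is pulled toward a fixed phase lag relative to its neighbours, so the chain settles into a travelling wave like a swimming gait:

```python
import math

def simulate_cpg(n_osc=8, freq=1.0, coupling=4.0, phase_lag=2 * math.pi / 8,
                 dt=0.01, steps=2000):
    """Chain of coupled phase oscillators (a common CPG abstraction).

    Each oscillator i obeys
        d(theta_i)/dt = 2*pi*freq + sum over neighbours of
                        coupling * sin(theta_j - theta_i -/+ phase_lag),
    which drives neighbouring oscillators toward a constant phase difference.
    """
    theta = [0.0] * n_osc
    history = []
    for _ in range(steps):
        dtheta = []
        for i in range(n_osc):
            d = 2 * math.pi * freq  # shared intrinsic frequency
            if i > 0:               # pull toward lagging the previous oscillator
                d += coupling * math.sin(theta[i - 1] - theta[i] - phase_lag)
            if i < n_osc - 1:       # pull toward leading the next oscillator
                d += coupling * math.sin(theta[i + 1] - theta[i] + phase_lag)
            dtheta.append(d)
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
        history.append([math.sin(t) for t in theta])  # joint setpoints
    return theta, history

final_theta, history = simulate_cpg()
# After convergence, successive oscillators hold a constant phase lag,
# i.e. a travelling wave propagates down the chain.
lags = [(final_theta[i] - final_theta[i + 1]) % (2 * math.pi)
        for i in range(len(final_theta) - 1)]
```

The `sin(theta)` outputs in `history` could drive joint setpoints directly; changing only `freq` or `phase_lag` retunes the whole gait, which is the appeal of the CPG paradigm.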
Bernard Horan presents an overview of a trial of a mixed-reality teaching environment undertaken at the University of Essex. The lecture includes a summary of the motivations for mixed-reality teaching, its implementation using Project Wonderland, and a summary of the results of the trial. From the 2009 ShanghAI Lecture series.
The ShanghAI lectures have brought us a treasure trove of guest lectures by experts in robotics. You can find the whole series from 2012 here. Now, we’re bringing you the guest lectures you haven’t yet seen from previous years, starting with the first lectures from 2009 and releasing a new guest lecture every Thursday until all the series are complete. Enjoy!
A new video released today by researchers from the Flying Machine Arena shows how a quadrocopter is able to learn from prior experience to improve future performance.
This new research is an extension of results published last year by the same group, which show how quadrocopters can learn to fly high-performance slalom courses (video).
Much like humans learn through repetition and practice, the quadrocopter repeatedly flies the slalom course, records any errors made and then tries to compensate for these errors during the next attempt.
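The fly/record/compensate loop described above can be sketched as a simple iterative learning update on a toy plant. The real quadrocopter work uses model-based corrections; the plant, reference trajectory and gain below are illustrative assumptions, not the group's actual method:

```python
import math

def reference(t):
    """Desired lateral position at time step t (an illustrative slalom-like path)."""
    return math.sin(0.1 * t)

def fly(inputs):
    """Toy stand-in for the quadrocopter: it responds to each input with an
    unknown gain and bias that the learning scheme must compensate for."""
    return [0.8 * u + 0.05 for u in inputs]

def iterative_learning(trials=20, horizon=100, gain=0.9):
    """After each flight, correct the next trial's inputs by the observed error:
        u_{k+1}(t) = u_k(t) + gain * e_k(t)."""
    u = [0.0] * horizon
    peak_errors = []
    for _ in range(trials):
        y = fly(u)                                          # fly the course
        e = [reference(t) - y[t] for t in range(horizon)]   # record the errors
        peak_errors.append(max(abs(v) for v in e))
        u = [u[t] + gain * e[t] for t in range(horizon)]    # compensate next time
    return peak_errors

peak_errors = iterative_learning()
```

Because the correction is applied to the stored input trajectory rather than computed in flight, even a repeating, unmodelled error (here the plant's gain and bias) is driven down geometrically across trials.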
Mapping is essential for mobile robots and a cornerstone of many robotics applications that require a robot to interact with its physical environment. It is widely considered the most difficult perceptual problem in robotics, from both an algorithmic and a computational perspective. Mapping essentially requires solving a huge optimization problem over a large number of images and their extracted features. This requires beefy computers and high-end graphics cards, resulting in power-hungry and expensive robots.
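The kind of optimization at the heart of mapping can be illustrated, in heavily simplified form, with a one-dimensional pose graph. Real systems optimize 6-DoF poses and thousands of image features with Gauss-Newton-style solvers; the constraints and numbers here are made up for illustration:

```python
def optimize_poses(odometry, loops, iters=500, lr=0.1):
    """Minimal 1-D pose-graph optimization.

    Poses are scalars; each constraint (i, j, d) says pose j should be d
    ahead of pose i. Odometry and loop-closure constraints have the same
    form. We minimize the sum of squared residuals by gradient descent,
    anchoring pose 0 at the origin to fix the gauge freedom.
    """
    n = max(max(i, j) for i, j, _ in odometry + loops) + 1
    x = [0.0] * n
    for i, j, d in odometry:          # initialize by chaining odometry
        x[j] = x[i] + d
    constraints = odometry + loops
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, d in constraints:
            r = (x[j] - x[i]) - d     # residual of this constraint
            grad[j] += 2 * r
            grad[i] -= 2 * r
        for k in range(1, n):         # pose 0 stays fixed
            x[k] -= lr * grad[k]
    return x

# Odometry drifts: the robot thinks it moved 1.1 units each step, but a
# loop closure says pose 3 is exactly 3.0 units from pose 0.
odometry = [(0, 1, 1.1), (1, 2, 1.1), (2, 3, 1.1)]
loops = [(0, 3, 3.0)]
poses = optimize_poses(odometry, loops)
```

The optimizer spreads the accumulated drift evenly across the odometry edges instead of dumping it all at the loop closure, which is exactly what full-scale mapping back-ends do, just in far higher dimensions.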
In the video below, Master's student Vlad Usenko shows how robots can sidestep this problem.
One of the oft-quoted paradoxes of consciousness is that we are unable to observe or experience our own conscious minds at work; that we cannot be conscious of the workings of consciousness. I've always been puzzled about why this is a puzzle. After all, we don't think it odd that word processors have no insight into their inner workings (although that's a bad example, because we might conceivably code a future self-aware WP and arrange for it to access its inner machinery).