Robohub.org
Podcast ep. 189

Robots and Communication with Eleanor Sandry

by Ron Vanderkley
21 August 2015





Transcript included.

In this episode, Ron Vanderkley speaks with Dr. Eleanor Sandry of Curtin University about her new book Robots and Communication. In the interview, we explore human–animal communication and what we can learn from it; human interaction with humanoid robots; and human interaction with non-humanoid robots. We also discuss Western and Eastern perceptions of robotics.


Dr. Eleanor Sandry

Eleanor Sandry is a Lecturer in the Department of Internet Studies at Curtin University in Perth, Western Australia. Her research is focused on developing an ethical and pragmatic recognition of, and respect for, otherness and difference in communication. Much of her work explores communication theory and practice by analysing human-robot interactions in science fact, science fiction and creative art.

 

Transcript

Ron Vanderkley: Good morning, Eleanor. If I can first ask you to introduce yourself to The Podcast listeners.

Eleanor Sandry: Hi there, well I’m Eleanor Sandry. I’m a lecturer in Internet Studies at Curtin University in Western Australia. My research is about communication in general, not just on the internet, and I’m particularly interested in human-robot interactions as a way to understand communication more broadly.

Ron Vanderkley: What got my attention was your upcoming book, which appeared in a conference listing. Can you briefly tell us a little bit about the work on this book?

Eleanor Sandry: Yeah, so the book is called Robots and Communication and it’s been published by Palgrave in the Pivot series, which means it’s available online and print on demand. It’s a short-format book, 45,000 words. It’s shorter than your average monograph, and that’s because I wanted it to be accessible to a range of different people. It’s pitched towards people who are interested in robots and people who are interested in communication – human communication theories – so it’s trying to cater for quite a large audience. What it does is look at human-robot interactions, and it tries to unpick them in some detail from a number of different theoretical perspectives.

I’m particularly interested in broadening the scope of communication beyond just the transmission of information and the ideas of success that surround accurately transmitting it. I’m interested in non-verbal communication; the idea that it isn’t just about what we say, but about us using facial expressions and also movements more generally. I found that robots were just a great way of looking at a huge range of different types of communication, because they are sometimes built to look as human-like as possible, but on other occasions are completely machine-like, and yet what you find is that the people who interact with them often develop some kind of relation to them – even to a Roomba and other iRobot floor cleaners. Just looking at that gives me a whole range of different forms of communication to consider and things I can say about communication in general.

Ron Vanderkley: You speak about this idea of humanoid robots and non-humanoid robots as points of contact. What actually steers you towards the humanoid? Is it the facial features, the gestures, the movements?

Eleanor Sandry: I think that most people are studying humanoid robots because of particular facial features and expressions, in some cases. But, in others, it’s to do with the physical ability to move around in human-like environments. One of the reasons the Atlas robot – which is about to compete in the DARPA Robotics Challenge – is human-like is because it’s expected to go and work in environments that are tailored for humans, but where humans can no longer enter, for example because of some catastrophic disaster.

In fact, I am much more drawn to non-humanoid robots, and I’m particularly interested in the way that people will still read pretty much anything that moves as somehow trying to communicate with them. I’m also conscious of the fact that people who are trying to work with robots do better if they do develop this kind of relationship with them. Part of what I have been writing in the book is actually about looking at how those relationships develop and seeing those, say, between humans in explosive ordnance disposal and their robots. Relationships between people and robots are not necessarily a bad thing or something that we should be worried about; they actually allow those teams to operate better in real-world environments.

Although a lot of people say that a sociable robot needs to be able to make gestures, and maybe also to have facial expressions, that depends on the structure of the robot. I think there is a lot to be learnt from human-animal interactions. The things that humans and animals are capable of doing together open up possibilities for lots of other forms – for robots to be seen as effective teammates, basically.

Ron Vanderkley: For instance, canine interactions, this kind of trust and understanding. That is a whole different facet, and we are still coming to terms with how we can do that in robotics. Are you picking that apart and trying to figure out how we can do that?

Eleanor Sandry: Yeah, I’m particularly interested in the idea of the need to have, almost, a trust and respect relationship in order for the team to operate effectively together. In the human-canine relationship, there would always be an overarching power relation. The human is almost always in control of the overall situation and is working with the dog to achieve a particular purpose, for instance tracking a scent trail. The same is true with humans and robots and, at the moment, say with ordnance disposal robots, broadly, the human is in control almost 100 per cent of the time: they’re still mainly radio controlled or controlled via wire connection. There are some things that they can perform autonomously, like righting themselves if they fall over and maybe also returning to base.

What I’ve been trying to unpick, I suppose, is the idea that if a robot is going to become more autonomous, it will be out of contact with its human controller. There needs to be a more complex relation between the human and the machine in order for that team to operate well. The human effectively has to relinquish control to the machine at the point at which it moves out of radio contact; they have to trust in the robot to do its job. All of these ideas – even the terminology of me saying “to do its job” – are problematic, historically, in understanding human-animal relations. Many people think that a dog doesn’t know what its job is, or isn’t going to take responsibility, and yet people who train dogs will tell you that this is precisely the way that you need to think about the dog in order to understand what is happening in the team, because you don’t have the nasal senses that the dog does.

It’s actually important that you trust a dog to do its job, so that when it appears to be moving off a trail, in fact it’s following a path that you have no idea exists, and it will bring you to the final point and complete the task effectively. I think there are possibilities of the same thing in human-robot teams, where the robot has senses that a human doesn’t. You have to learn to trust in those senses, and I think it’s an interesting way to think about these relations. I think it has practical possibilities in the robotics community, but I also think it tells me all sorts of interesting things about how humans and other things interact.

Ron Vanderkley: The idea that we could try to use speech recognition between robot and object – that being the only means of communication in the animal or non-animal world – is not entirely true. It’s looking at gestures, our voice. Is this also something that we may not have looked at in the robotic world?

Eleanor Sandry: Well, I think there is a tendency. It’s not so much that it hasn’t been looked at, but that the final goal is seen as being able to create a robot that it’s possible to activate entirely with just your voice and that will understand human expressions and gestures perfectly. I think that what’s being missed is that the robot could understand things differently from us. It’s not actually necessary to embed the robot in the world in a human-like way. It can be embedded in its own way and understand us in particular ways, even if that’s using sensors on hands that help direct the robot. Actually looking at those as real possibilities, not a problem that needs to be overcome, has potential for increasing the flexibility of the way robots work.

The other reason I’m particularly interested in that is because I hope to start writing more about critical disability studies. That has really opened my eyes to the fact that technologies currently being directed towards very human-like interfaces may actually make it harder for people with disabilities to interact with a robot that might otherwise be very helpful to them in living their everyday lives independently. Really looking at a whole range of different interfaces for machines like this is important; not just thinking of trying to perfect this notion of the human-like interface. I just don’t want them to cut down possibilities, I suppose.

When I talk to people about robots, I sometimes get two reactions: normally, I get the people who are really firmly interested in human-like forms and want to make sociable robots as human-like as possible, and then I sometimes get other people who say “we never make robots human-like.” I get this divergent reaction and, really, I’m somewhere in the middle, trying to say “well, can’t we think about this differently? Can’t we think about this from a communication perspective, where you start seeing all the different ways that people and things and animals can communicate?” Because you’re quite right, of course: dogs don’t just respond to voice, they respond to tone of voice. They also understand things in ways that we can’t really fully appreciate, because we just don’t have the senses that a dog does.

Ron Vanderkley: The other aspect I was considering is that we are talking about robots actually understanding us. I spoke to someone in the UK about a robot called Hector, I think, which is for entertainment. It’s a robot telling the general public what is going on, with hand gestures, eye movements, et cetera. It’s a teaching tool, almost. I found that very interesting. Is that also something that you’re looking at?

Eleanor Sandry: I suppose it’s not my primary area of interest, but that doesn’t mean I don’t think it’s important. I think that sometimes when I talk it’s easy for me to get obsessed with all the non-humanoid robots, and it makes it sound a bit like I’m actually saying that we shouldn’t have humanoid robots, and I don’t think that’s correct at all. I’m sure there are going to be situations where a humanoid robot works much better, and that is probably a very good example. If you have a robot that you want to actually present to people, and you want, effectively, to present it in a human-like way, then that is what you’re aiming for. You’re aiming for something that inspires the audience to pay attention to it because of its abilities – that may sometimes be very important.

I think that thinking that that’s the only way to build a robot – one that can be captivating, or attract attention, or draw people into communication – is probably incorrect, and there are other examples. Certainly robots in art, for example, where a totally non-humanoid machine completely captivates its audience through its movements, the things it does and the sounds it makes. It’s a very different idea; it’s not trying to transmit information directly, but it’s still building a kind of relation between the human and the machine that may lead us to question the ways that things exist in the world, which I think is usually quite a good thing. Also, when things are framed by particular tasks – so anything where people and the robot are trying to complete a task together – that kind of frames the communication that takes place.

I think it offers many more possibilities for non-humanoid communication in that particular situation, because you know what you’re aiming to do, and your capability of reading non-verbal communication is probably much greater in that situation. With the example of the robot up on stage, often a humanoid robot is going to win the game there.

Ron Vanderkley: When you look at the history of, say, robot interaction targeted at children, or things like the first robot dog – where children actually relate to that almost as a real dog, and treat it as a real animal – do you think that’s the precursor to a closer relationship with non-biological entities?

Eleanor Sandry: Well, I think it’s one direction in which to go. I’m slightly concerned by the idea of trying to create robots that effectively replace a relationship that someone would have with something that already exists in the world. It’s one of the reasons that I’m not so interested in humanoid developments, and also in the very animal-like robot developments, because it may not be a good thing to replace the relationships that people already have with living things with a robot equivalent. Children are actually a fascinating example of humans who are freed from the need to justify all the decisions they make about how they’re going to interact with something, and so, yes, they react to things like a robotic dog strongly. Their understanding, if something goes wrong with it, that it needs to rest or sleep, shows just how strong their understanding of the robot is.

I think that’s one of the reasons why I’m interested in robots that appear more overtly machine-like, because they remind you that you’re in quite a complex new relationship with something that’s different. It’s different from an animal, it’s different from a human; it is its own thing. For example, I looked at interactions between humans and Guy Hoffman’s robot, which is a robotic desk lamp, a lighting assistant, and they did experiments together. People interacted with it very much as if it were somewhat alive – not necessarily like an animal or anything else; it was an alive desk lamp. But you could still switch it off; it was still a machine as well.

Humans are capable of holding both of those ideas. I think children may be less worried about that, so, as a specific example, they do show just how far those relations can go. I suppose I’m interested in how you can signify that those relationships are actually different, that it is still a machine. However, I don’t really think that robots should replace human-animal relations. I just read something recently about a different researcher in Australia who has written about how he thinks, in the future, humans will have robotic animals, and that immediately made me think of Do Androids Dream of Electric Sheep? The original book has a very strong section at the front about the ownership of robotic animals being all that was possible for most people; the richest people had a real animal. It seemed to be that future idea, and I hope and pray that we are not going to go that way.

Ron Vanderkley: No. There’s a concern throughout the entire internet about the future of autonomous robots and AI being this Skynet kind of world, raised by those who I think are quite smart people. It kind of sends the wrong signals. I just wanted to see your take on things.

Eleanor Sandry: Well, my take on it is that artificial intelligence has a long way to go before it ever gets to that kind of level. I also think that, the way that it’s being developed at the moment, it’s not necessarily ever going to be the same as a human-like intelligence. An artificial intelligence is going to be different, almost certainly. Yes, I get frustrated at the hype around the technology, and you’re right, it’s very respected voices that are raising these kinds of issues, and in the near future as well. I simply don’t think it’s going to happen in the near future, and I don’t actually think it’s going to happen the way that some people are suggesting it might either.

I would definitely want to draw back from that and point out just how far away artificial intelligence is from being at that level. Take Google’s self-driving cars: these cars don’t drive everywhere; they drive in one particular place, where it’s been mapped sufficiently for them to be able to operate. I don’t think that people are necessarily aware of that when it’s reported in the mainstream media.

Ron Vanderkley: Not at all, and humanoid robots actually cost an awful lot of money to build …

Eleanor Sandry: Huge amount.

Ron Vanderkley: I don’t even know if Sony is actually able to clear a real amount of money.

Eleanor Sandry: I doubt it very much. I mean, that robot has been in a series of developments for way over a decade now, and it still has issues being placed into a normal physical environment and operating successfully. It has to be in a controlled environment. I think that the general public probably isn’t aware of that. The difference between seeing a robot acting in a laboratory, in a controlled setting, and actually being able to interact in the normal physical world that we live in is something completely different. They are streets apart.

I saw something about the DARPA Challenge basically explaining that they would be very impressed if one of the Atlas robots, or any of the other bipedal robots, actually manages to make it through the course without falling down. I think that that’s a realistic understanding of what’s possible at the moment. The idea that robots with four legs might actually operate more easily in those types of environments is fair and should be pursued as well. It’s a very interesting question.

Ron Vanderkley: Absolutely, and just in a previous podcast I was talking to some guys about soft robotics. I mean that’s completely going …

Eleanor Sandry: Totally, a very fascinating area.

Ron Vanderkley: Yeah, completely, a different direction, and it’s early days.

Eleanor Sandry: Yeah, very early days, but there is so much potential for soft robotics, and also mixtures of soft and hard. I saw ages ago that, rather than using a hand-like gripper to pick things up, someone had developed something that could actually… it was almost like something filled with coffee.

Ron Vanderkley: Like a bean bag.

Eleanor Sandry: Like a bean bag yes, that could grip. Those sorts of innovative ideas that are thinking of totally different ways of solving the task, rather than trying to make something that has a human-like hand, seem to me to offer tremendous potential and so I’m fascinated by the idea of what a robot might end up looking like in the future. I’m kind of encouraged that maybe robots in the future won’t look like us and actually that would be a really good thing.

Ron Vanderkley: That’s another human concern. Someone once put it that if we try and design something that looks so close to a human and almost mimics the face, there’s total distrust, whereas if it looks like a robot there is more trust.

Eleanor Sandry: I think there definitely is potential. I mean, whenever I write about robots I always end up having to use the uncanny valley example. I’m quite frustrated with it now: that idea that, as a robot becomes more and more human-like, eventually it’s actually read as zombie-like, and there is a strong drop-off in the level of trust in the robot. Some people suggest that you could climb the valley on the other side and eventually end up with a perfect human-like machine. Then, as has been written about in many different science fiction books, you have the question of what is the difference between the robot and the human and …

Ron Vanderkley: Silicon versus carbon.

Eleanor Sandry: I think, really, we’re nowhere near achieving that at the moment. Even with the very, very human-like faces, you can tell pretty easily from their expressions and from other things about them. I don’t know why we want to aim for that. I think that we have …

Ron Vanderkley: Perfection.

Eleanor Sandry: Yes, but it seems to be a perfection that’s driven by an understanding of humans as being the pinnacle of evolution and, therefore, that that should be the pinnacle of robotic development. That just seems to me an unfortunate decision to make. There are many more options for robotics than trying to achieve human-like status. I think there are many other ways that machines could be far more interesting, far more useful, and maybe even develop new relations with people that don’t have to be like something that already exists.

Ron Vanderkley: One of the ideas in modern-day sci-fi, the cyborg…

Eleanor Sandry: Yes, the idea of augmenting human ability, of fixing people, which is a very hot topic in disability studies because, in general, there are many people with disabilities who do not wish to be fixed. Again, it’s driven by this idea of the human being the current pinnacle of evolution: let’s see how far we can take it. I hesitate to mention this but, as Donna Haraway said many, many years ago – in 1985, actually – we are already cyborg. I’m effectively a cyborg: I wear contact lenses; that’s the only way I can exist in this world. I would not be alive without those developments of glasses or contact lenses. But when people begin to decide to amputate limbs in order to have blades, because that means they can run faster, that’s another thing science fiction has written about. There are all sorts of questions around that – not just ethical ones, but bigger questions.

Ron Vanderkley: You brought up sci-fi. Do you use it as a sounding board for some of your ideas or your thoughts?

Eleanor Sandry: Often I do, yes – in fact, in Robots and Communication. Science fiction to me is a really great place to look, because it’s sometimes overlooked as being a kind of subgenre. For some people, for many years, it was not real literature and not something that was studied much in scholarly circles. But it provides a real sounding board, and some of the best science fiction writers write about embedding technology in societies in ways that allow you to really think about some of these questions in a more developed way, because someone has written a story about it. It seems to provide a really great way of thinking through some of the questions that are very difficult to picture in your head; science fiction provides you with those kinds of thought experiments that can be so important.

In my thesis – what I was doing for my PhD – I wrote more about science fiction: the ideas of culture and human-machine interactions in culture were very interesting to me, because they push way into the far future, I guess.

Ron Vanderkley: Coming back to the idea that technology is going to take over: is that something you think is going to be more and more of a problem – as sci-fi seems to dwell on the bad things and not the positives – or is that just good for entertainment?

Eleanor Sandry: Well, it’s definitely good for entertainment. It’s also a very Western perspective on robots and technology, whereas in Japanese manga the whole idea of technology, and robots in particular, is far more positive. Also, it’s a great pity that we lost [Iain] Banks, because that was someone writing about a utopian future – problematic in lots of ways, but at least he offered some kind of positive vision of the way things could work. You’re right, in general, there will still often… they’re not exactly disaster movies, are they? But they’re kind of …

Ron Vanderkley: The good guy always seems to win in the end.

Eleanor Sandry: In the end but there is usually a potential for total disaster in the middle, isn’t there?

Ron Vanderkley: Okay, so in closing, what do you think is going to happen in the field of robotics, well, in Australia?

Eleanor Sandry: Australia is interesting, and I’m still trying to get to grips with what’s happening in different parts of Australia. I think that the level of funding here may be very different from other countries. There are interesting things going on, and I really need to spend more time getting to grips with them, because I keep on reading about what people are doing in Melbourne, starting with robotic arms for people with disabilities, and I’m interested that there is stuff going on in Sydney. I’m seeing people from the Centre for Social Robotics there, and so I’m particularly interested in what they’re doing, and the cost as well.

I think robotics will continue to develop here. There is probably quite a strong drive towards thinking of robots in terms of self-driving machines and mining technology; that’s always going to be a big thing in this country. There are definitely innovative thinkers in Australia who produce interesting solutions to different problems, and I would like to carry on finding out what they’re doing and writing about it.

Ron Vanderkley: Okay, and to cap that off, can I firstly thank you for giving the time to chat. On behalf of The Podcast, thank you very much.

Eleanor Sandry: Well, thank you for having me. It’s been great to talk.

All audio interviews are transcribed and edited for clarity with great care, however, we cannot assume responsibility for their accuracy.





Ron Vanderkley




