Robohub.org
 

Visions of transformative research: Risk-reward, variation-selection, hot trends, exploration of the wild


by Juan Rogers
08 July 2013




Over the last 20 years or so, a sense has developed that science has become conservative or incrementalist, and calls for change in the approach to public funding of research have been heard from various quarters. Several notions have been suggested of what should be supported instead of “normal science” or “incremental innovation”: more “high risk-high reward” research, more “highly creative” science, more “cutting edge” or “frontier” research and, more recently in the language adopted by funding agencies, more “transformational research.”

The main idea is that there isn’t enough risk taking, and that little scientific or technological creativity is reflected in proposals that might “revolutionize” our understanding of nature or society. “Business as usual” in the support of science and technology seems unsatisfactory for producing the sort of radical change and rapid advance in science needed to solve the important problems we face in economic development, health care and general social wellbeing.

This discussion does not fully address the situation in the private sector. However, the fate over the same period of many top-performing corporate R&D centers, at IBM, Texas Instruments and AT&T among others, indicates that corporate willingness to take risks in research is hardly at its most adventurous either. That said, our focus here is on public funding, because risk taking by private industry in the market does not suffer from the severe principal-agent problem that public funding faces. Companies invest their own money and either lose it or make a profit for themselves. Researchers funded by government do not lose their own money when they take risks; the government takes virtually all the risk, at least in the financial sense.

When it comes to radical innovation in robotics, this distinction about risk taking must be kept in mind. In any case, the participation of the public sector in the development of robotics technology has the purpose of reducing the risk for private companies that, without at least partial subsidies from government, would not have a safe enough business case to get involved. This is the well-known market failure justification for public support of R&D.

In the context of public funding, the capacity of the government to take on R&D risk is not infinite. And moving from the intuition that more scientific and technological breakthroughs are desirable to a clear understanding of how they would be recognized, what exactly distinguishes incremental research from proposals for future scientific revolutions, and what the proper incentive and support mechanisms are has proven difficult. It is not clear which element of the research process carries the attribute being sought. Is it the projects that are special? Are certain scientists capable in a different way? Is it the organization that can provide an unusual environment for research? Or is it groups of projects, that is, portfolios that must be assessed for their combined effects? So far, none of these questions has a clear answer.

Models, Metaphors, Analogies to Produce Meaning
One way to begin to clarify the problem is to examine the models that are being used, implicitly or explicitly, to suggest courses of action. We have identified at least four that appear repeatedly in documents and speeches about the “special” research that might have higher impact. They work as analogies, highlighting one or more important attributes or their consequences. These are the models of stock investment portfolios; biological evolution; “hot” trends; and exploration of the wild.

Stock Investment Portfolios
Many calls to support more “high risk” research that has the potential to offer “high reward” appeal to the intuition of investment risk. In this analogy, each project that is up for support is similar to a stock considered for inclusion in an investment portfolio. As in stock portfolios, a balance of risk is sought across the portfolio so that a number of safer bets compensate for riskier ones that might pay off disproportionately.

The important implication of this perspective on research support is that the unit is the project and that each one is assessed in comparison with the others in the portfolio. The overall balance of risk in a good portfolio means that projects are not all selected on the same criteria: some will be “good” low-risk projects to compensate for the potential losses of the high-risk ones. This is clearly not current practice in public funding of research, where all projects are assessed on the same criteria and only the best are funded, so all funded projects carry approximately equal risk.

There are a few consequences that must be given further thought. First, on what attribute of a project should risk be assessed? Are risks compared across disciplines, for example? Or across organizations or types of projects? No clear answer to this question has been offered. Second, how much failure will be tolerated? If the risk is real, then high risk means a high probability of failure, and a high failure rate should be observed; otherwise, the risk was not as high as assumed. No clear answer to this question is available either. Much talk about “high risk-high reward” is superficial and really suggests that most high-reward projects could be identified with some special assessment mechanism without a high risk of failure. Actual failure would probably be pinned on the proposal assessment mechanism rather than on the risky nature of the projects.
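To make the arithmetic behind this point concrete, here is a minimal, purely illustrative sketch. The project counts, success probabilities and payoffs are hypothetical numbers chosen for illustration, not drawn from any funding data; the sketch only shows how a portfolio can tolerate a high failure rate among its risky projects and still come out ahead in expected value.

```python
import random

# Hypothetical, illustrative numbers only; not drawn from any real funding data.
N_LOW, N_HIGH = 80, 20                  # low-risk vs high-risk projects in the portfolio
P_LOW, P_HIGH = 0.9, 0.1                # assumed probability that a project succeeds
PAYOFF_LOW, PAYOFF_HIGH = 2.0, 50.0     # assumed return per success, per unit of funding

def portfolio_return(seed):
    """Simulate one funding cycle; return total payoff per unit of funding."""
    rng = random.Random(seed)
    low = sum(PAYOFF_LOW for _ in range(N_LOW) if rng.random() < P_LOW)
    high = sum(PAYOFF_HIGH for _ in range(N_HIGH) if rng.random() < P_HIGH)
    return (low + high) / (N_LOW + N_HIGH)

returns = [portfolio_return(seed) for seed in range(1000)]
print(f"mean return per unit funded: {sum(returns) / len(returns):.2f}")
# With these assumed numbers about 90% of the high-risk projects fail, yet the
# portfolio's expected return stays positive because the rare successes pay off
# disproportionately.
```

The point is only that, under the portfolio logic, a high observed failure rate among the risky awards is what a genuinely high-risk program should look like; a low failure rate would suggest the risk was never real.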

Evolution in Science
This analogy implies that new ideas in science appear much as genetic mutations do in living organisms. If the selection mechanism allows the more radical mutations to survive, more radical change in science would probably follow. Most attention has been drawn to the selection mechanisms, implicitly assuming that the pool of variation is large enough to offer viable large jumps to select from.

The consequences of this line of thinking are also intriguing. First, the need for random generation of variation in scientific ideas is not a familiar notion. The image of a rational scientific method has a strong cultural effect of keeping random idea generation hidden from view, if not largely muted. In other words, attention would have to be paid to the environment that allows a large pool of very diverse scientific ideas to emerge continuously, so that large jumps are available to be selected. Second, a large measure of randomness in the consideration of scientific ideas would have to be acceptable to the establishment. Otherwise, just as the reduction of biological diversity is a serious problem for environmental health, there will be very little to select from and few “viable new scientific organisms.”

Hot Trends in Science
Science is no different from other cultural phenomena in that novel trends generate “icons” and “hits” that the community rallies around. Indeed, many new ideas in science and technology are followed during the first period of their public diffusion to assess whether they are substantive or only a fad. Researchers gaining prominence in their fields are often described as new “stars” or as “hot.” These trends and their icons attract a following, and researchers who join the movement fashion their professional identity around the key features of the emerging field.

The main lesson to drive home from the analogy is that this cultural phenomenon is an integral part of science even if it is not formalized into the assessment of proposals and projects. Individual projects are not the main focus. The “movement” is the main concern. But, how does the perception of a hot trend and its icons affect the assessment of project proposals that promise to continue in its wake? How does the system deal with the possibility of a fad that wastes energy and resources on a trend that does not pan out? A rapid recognition mechanism to tease out indicators of faddish elements seems to be necessary to address this issue. But this would be a “conservative” reaction. Should the system be more liberal in embracing hot trends to avoid killing off potential revolutionary developments? This would be another type of risk that might have to be accepted. With its acceptance comes another departure from a common view of rational science that looks askance at enthusiasm and bandwagon effects as illegitimate passions. Maybe science should have a few “moments of madness”.

Exploration of the Wild (or the Endless Frontier)
The idea that science is a sort of exploratory venture into the unexplored regions of nature is an old one. It was turned into influential policy discourse in the aftermath of World War II by a prominent scientist, Vannevar Bush. It assumes that what we don’t know is contiguous with what we know, and that we just have to “venture out” into this unexplored territory to get to know it. The only difference between conservative, incremental science and highly creative or transformational research is that the exploratory journey should go deeper and farther into the unknown in a single expedition.

This image suggests that the main way to achieve sizable leaps to advance science is by organizing well-equipped and supported expeditions into unexplored territory. The skeptics in policy making recognize immediately that this analogy suggests that research always requires more funding: “If you give us more money, we can go deeper into the unknown.” This was already recognized as a subtext of the original idea of the “endless frontier.” Risk is downplayed in this analogy. Only underprepared expeditions are risky. There is always unknown territory to discover. The question is rarely raised that there may be little of value in vast regions of the unknown.

A Path Forward?

It seems that before we can define a path forward it is necessary to come to terms with the full consequences of what must be achieved. First, the simple formula of “high risk-high reward” hides the complexity of the goal. There are many interrelated issues that must be addressed simultaneously if this objective is to be pursued with committed effort; a simple set of criteria for evaluating proposals will not do the job. Second, part of the public image of science as a highly rational activity is tied to the status quo of low-risk, incremental change in science. The higher risk and randomness that seem inherent in increasing the magnitude of change may open the enterprise to criticism and a loss of legitimacy, because it can look like irresponsible gambling with public resources. The idea is that the payoff of a few initiatives that succeed will compensate for the many that fail. Is there any publicly supported system in today’s political environment that can operate legitimately under those conditions? Not likely.

The DARPA Robotics Challenge takes an interesting approach to encouraging the pursuit of “leaps forward” in the development of these technologies while limiting how much risk it takes on at any point in time. It does so with two crucial measures. First, it specifies a well-defined target of technological capabilities. In terms of the metaphors used above, it is not looking at the entire evolutionary environment of technology or at the entire frontier; it limits how much randomness must be accepted and how far into the unknown it may be necessary to travel before something of value is shown. Second, by making several teams compete in stages, with a limited amount of funding granted to compete in progressively more difficult challenges, it reduces potential losses and requires the return of something of value before more risk is taken. In the worst case no team advances to the next stage and what has already been granted is lost, but the program cuts its future losses as soon as nobody can show progress. However, the risk of pursuing these particular technological objectives rather than others has not been mitigated, and the stages might cut losses too early.
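As a rough illustration of this loss-capping logic, consider the following sketch. The stage budgets, team counts and pass/fail pattern are invented for illustration only; they are not the actual structure or figures of the DARPA Robotics Challenge.

```python
# Invented stage budgets and team counts, used only to illustrate staged loss-capping;
# these are not the real figures of the DARPA Robotics Challenge.
stage_budgets = [2.0, 5.0, 10.0]   # funding granted per team at each stage (arbitrary units)
teams_entering = [16, 8, 4]        # teams admitted to each successive stage

def total_exposure(progress_shown):
    """Total spend when the program stops at the first stage where no team shows progress."""
    spent = 0.0
    for budget, teams, progressed in zip(stage_budgets, teams_entering, progress_shown):
        spent += budget * teams
        if not progressed:
            break  # cut losses: later, more expensive stages are never funded
    return spent

print(total_exposure([True, True, True]))    # 112.0 -- full program runs to the end
print(total_exposure([True, False, False]))  # 72.0  -- losses capped after stage 2
print(total_exposure([False, False, False])) # 32.0  -- losses capped after stage 1
```

The design choice the sketch highlights is that each additional unit of exposure is conditioned on demonstrated progress, which bounds the funder’s downside in any one cycle without pre-judging which team will succeed.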

For this reason, in fields where objectives cannot be specified with such clarity, program design is more difficult. But the lesson remains. As the DARPA program shows, the flexibility may have to be transferred to the very support mechanisms themselves. Attempting to derive a fixed set of procedures and criteria to identify and support something that is highly variable, with an irreducible measure of randomness, culturally unstable and insatiably adventurous may be the wrong path to go down. The criteria and support mechanisms may have to become “experimental” themselves. The future development of scientific and technological research may be pressing for some institutional change.





Juan Rogers is Associate Professor of Public Policy at the Georgia Institute of Technology.




