BesMan: A Robot for Menial Labor in Space

If the project is successful, BesMan will be able to learn by imitating humans. (Photo credit: David Schikora, DFKI GmbH)

There’s a lot of work to do on the International Space Station, and only a handful of humans to do it! That’s why researchers from the German Research Centre for Artificial Intelligence and the University of Bremen are working on a bot to perform menial labor in space, and assist with tasks that normally require two people.

The goal is to have the two-armed robot, called BesMan, learn by watching and imitating humans. That way, the robot can pick up new skills as the need arises using a reinforcement learning technique. The bot will learn “how to deal with hazardous or unforeseen situations which the robot could not solve autonomously by simply using reflexive actions,” according to the BesMan webpage.

The project is still in progress. But the team has already released videos of BesMan autonomously performing a mission that consisted of some of the basic operations that would be necessary on the International Space Station, such as rotating a handwheel and activating specific switches.
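The imitation idea described above can be illustrated with a toy sketch. This is not BesMan's actual learning method, and the handwheel states and actions below are invented for illustration: the robot records (state, action) pairs from a human demonstration and then copies the action of the nearest recorded state.

```python
# A toy "learning from demonstration" sketch (hypothetical, not BesMan's
# real method): record what a human did in each state, then act by
# copying the action of the closest demonstrated state.

demonstrations = [
    # (handwheel angle in degrees, action the human took) -- invented data
    (0.0, "grip"),
    (10.0, "rotate"),
    (90.0, "rotate"),
    (180.0, "release"),
]

def imitate(state):
    """Return the action recorded at the demonstrated state nearest to `state`."""
    nearest_state, action = min(demonstrations, key=lambda d: abs(d[0] - state))
    return action

print(imitate(95.0))   # -> rotate
print(imitate(170.0))  # -> release
```

Real systems refine such cloned behaviour with reinforcement signals, rewarding variations that succeed; the nearest-neighbour lookup here only shows the starting point.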

Credit: we pay tribute to Professor R. Kremen for sharing the editorial and copyright.

With Regards

Rohan Chataut

#latest, #robotics-2, #science, #technology, #updates

Designing Electronics for the Harsh Environment of Venus

Researchers from KTH Royal Institute of Technology are working on electronics able to handle the extremely high temperatures of Venus. The electronics are based on silicon carbide, a semiconductor that can withstand the harsh climate of the second planet from the Sun.

The Venus Landsailing Rover, depicted above, would need to be able to work in very hot, high pressure, corrosive conditions. (Photo credit: NASA.)

“There are some places in space where the temperature is very high, such as the surface of Venus, where the temperature is 460 degrees Celsius,” says Carl Mikael Zetterling, professor of solid state electronics at KTH, and one of the researchers on the project. “But there are also places around the moons of Jupiter and Mercury, where the environments are really tough and where a powerful semiconductor such as silicon carbide can also be used.”

The space researchers, who may use the new Venus Landsailing Rover, want to find out how the climate works on Venus, which in turn may provide explanations for the Earth’s climate. But Zetterling says there is one more point to take into consideration with silicon carbide in space research: major space missions have proven fruitful for microelectronics.

“You could say that microelectronics has much to thank NASA and the Apollo projects for. These were some of the first customers to use silicon in integrated circuits. It is something that helped to get this industry going,” he says.

Zetterling believes that this focus on electronics for a new Venus lander robot will give silicon carbide a boost. He notes that it is difficult to obtain funds for research in microelectronics, but demonstrating the technique on Venus offers a chance to showcase silicon carbide’s potential.

“The Knut and Alice Wallenberg Foundation, which funds the project, wish to see something that really makes an impression, something that makes an impact. And to show something that works on Venus certainly meets that objective. But I see applications on Earth as being much more important,” he says.

The Venus project would be the first time that silicon carbide is used in a high-temperature application. However, the material has been used in high-voltage applications for at least a decade, such as for the conversion of solar energy into electrical energy.


#robotics-2, #sci, #science, #tech

Inside, Robot: Improving Communication Between Robots and Astronauts

Recently, astronaut Andreas Mogensen placed a metal connector in a receptacle that had a mechanical tolerance to the connector of only 150 micrometres. The success was notable because Andreas wasn’t trained to perform this task and performed it while in space, some 160,000 km away from the robot. But the procedure took a whopping 45 minutes. Researchers with the European Space Agency later realized that the problem was the communication protocol. “Instead of having had a round-trip communication time delay in the task of approximately 850 milliseconds, the delay had jumped up, halfway through the experiment, to a total of 12 seconds! He still managed to do the haptic connector mating by using force-feedback, it was clear that the communications protocol needed to be improved,” reads the ESA website.
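Those delay figures make the slowdown easy to illustrate with a deliberately crude "move-and-wait" model of teleoperation, which is not ESA's actual control scheme; the assumption that the connector task needs about 100 small corrective moves is purely hypothetical.

```python
# Move-and-wait model (illustrative only): the operator makes a small
# corrective motion, then waits one full round trip to see and feel its
# effect before moving again, so every correction costs one round trip.

def task_time(round_trip_s, corrections, move_s=1.0):
    """Rough time to finish a task needing `corrections` corrective moves."""
    return corrections * (move_s + round_trip_s)

# Hypothetical figure: suppose mating a 150-micrometre-tolerance
# connector takes ~100 small corrective moves.
nominal = task_time(0.85, 100)   # the planned ~850 ms round-trip delay
degraded = task_time(12.0, 100)  # the delay observed mid-experiment

print(f"~{nominal / 60:.0f} min at 0.85 s RTT, ~{degraded / 60:.0f} min at 12 s RTT")
```

Under these made-up assumptions the same task balloons from a few minutes to over twenty, which is why round-trip delay, not operator skill, dominates fine positioning work.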

The international UNISONO project, which is coordinated by VTT Technical Research Centre of Finland, has developed a new system so that astronauts can better control robots. (Photo credit: VTT)

Now a new communication solution helps alleviate that problem, allowing astronauts in an orbiting space station to maintain uninterrupted contact with robots working on the surface of a planet. The technology also has potential industrial applications, such as reducing lag and jitter in mobile gaming.

Developed by the UNISONO team, the technology is an important step forward for initiatives such as the human mission to Mars. Before humans can land on Mars, the planet needs infrastructure, such as housing and laboratories, which need to be built by robots. These robots need to be controlled by astronauts from a space station orbiting the planet.

Astronauts can currently practice controlling robots on Earth from the International Space Station (ISS). The ISS is in constant orbit around Earth, which means that the astronauts frequently lose direct contact with the robot. This results in discontinuity in the data and video transmission, which makes it challenging to maintain the control of a robot.

“Losing control of the robot during a critical task can cause damage to the task or the robot itself. The UNISONO project has developed a solution which can keep the astronaut in constant contact with the robot during the entire orbit,” said Ali Muhammad, principal investigator of robotics systems at VTT Technical Research Centre of Finland, which coordinates the project.

The time window for the ISS to be in direct contact with a robot on Earth is much shorter than what is planned for Mars. The UNISONO project has shown how the time window available to the astronaut can be widened by seamlessly switching between relaying stations on the ground. This would allow astronauts to realistically simulate future robotic missions on Mars, the Moon, or other heavenly bodies.
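The seamless switching between relay stations can be sketched as a simple handover rule. This is hypothetical logic, not UNISONO's actual algorithm: stay on the current relay until another station's signal beats it by a hysteresis margin, so the control link is never dropped and never ping-pongs between stations.

```python
# Minimal relay-handover sketch (hypothetical, not UNISONO's algorithm).

def choose_relay(signals, current, margin=3.0):
    """signals: dict of station name -> signal strength (e.g. in dB).
    Returns the station to use for the next control interval."""
    best = max(signals, key=signals.get)
    if current in signals and signals[best] - signals[current] < margin:
        return current          # stay put: hysteresis avoids rapid switching
    return best                 # hand over to the clearly stronger station

link = "station_a"
for reading in [{"station_a": 20, "station_b": 15},
                {"station_a": 12, "station_b": 14},   # b better, but within margin
                {"station_a": 5,  "station_b": 16}]:  # b clearly better: switch
    link = choose_relay(reading, link)
print(link)  # -> station_b
```

A real system would also pre-establish the next link before tearing down the old one ("make before break") so the data and video streams stay continuous across the handover.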

The technology also has many potential industrial applications. The same idea can be used to design seamless wireless data transmission systems to stop smart phones from losing signal when people use them in a moving vehicle.

The gaming industry could use the technology to eliminate lags and jitters in mobile games.

“Mobile gamers frequently experience lags and other connection issues during a game. The technology developed in the course of the UNISONO project could improve their experience,” said Janne Seppänen, research scientist at VTT.


#robotics-2

Singapore introduces Robocoach to keep older citizens in shape

The city state will introduce a robotic personal trainer to five senior activity centres to help the country’s ageing population stay healthy

Yaacob Ibrahim (centre front) works out with Robocoach in Singapore. Photograph: Infocomm Development Authority of Singapore (IDA)/handout

It can’t fight crime or act as a butler but Robocoach is working with Singapore’s older citizens to help them stay healthy with regular exercise.

The android with metal arms and a screen for a face is already leading sessions and will roll out its services to five “senior activity centres” across the city-state this year, according to the Infocomm Development Authority of Singapore (IDA), a government body that supports the country’s tech industry.

“It has been effective in engaging seniors to do their exercise routines correctly with its motion-sensor technology,” the IDA told the Guardian.

“Feedback has been positive as seniors enjoy a novel way in physical exercises,” it said, although interaction with human volunteers is “just as important”.

Unlike the fictional superhuman cyborg Robocop from the 1987 film, whose primary directive is to fight crime with lethal force, Robocoach is part of a project that “aims to bridge the digital divide among seniors aged 50 and above”.

The IDA hopes to review the project over time, implementing more Robocoaches if it proves a successful trainer. The robot is already working at Singapore’s Lions Befrienders senior activity centre.

Singapore’s Ngee Ann Polytechnic developed Robocoach and says “she” has a rosy red face, blue eyes and two teeth, and mimics human movements.

The robot “takes her responsibilities very seriously and coaches the elderly in performing 15 types of arm exercises each week. She can recognise the human voice instructing her to start off the exercise routine,” a report on Ngee Ann’s website says.

“She slows down the pace during group workouts to make sure everyone can catch up.”

Singapore has the world’s third most rapidly ageing population, according to the IMF.

Japan, which has the fastest ageing population, is leading in the field of robotics, and the country’s Riken institute this year announced Robobear, a cartoon-faced android that can help lift people from a bed into a wheelchair.

Japan’s Riken institute tests out Robobear, which helps lift people from a bed to a wheelchair.

“With its rapidly increasing elderly population, Japan faces an urgent need for new approaches to assist care-giving personnel. One of the most strenuous tasks for such personnel, carried out an average of 40 times every day, is that of lifting a patient from a bed into a wheelchair, and this is a major cause of lower back pain,” Riken said.

“Robots are well-suited to this task, yet none have yet been deployed in care-giving facilities.”

French company Aldebaran is developing Romeo, a 140cm humanoid robot intended to assist elderly people and those who have mobility issues. He can open doors, climb stairs and grab objects on a table.

Extracted from The Guardian

Collected by Rohan Chataut

#robotics-2

Rehabilitation robot

Robot suits [Credit: Contunico © ZDF Enterprises GmbH, Mainz]

A rehabilitation robot is any automatically operated machine designed to improve movement in persons with impaired physical functioning.

There are two main types of rehabilitation robots. The first type is an assistive robot that substitutes for lost limb movements. An example is the Manus ARM (assistive robotic manipulator), which is a wheelchair-mounted robotic arm that is controlled using a chin switch or other input device. That process is called telemanipulation and is similar to an astronaut’s controlling a spacecraft’s robot arm from inside the spacecraft’s cockpit. Powered wheelchairs are another example of teleoperated, assistive robots.

The second type of rehabilitation robot is a therapy robot, which is sometimes called a rehabilitator. Research in neuroscience has shown that the brain and spinal cord retain a remarkable ability to adapt, even after injury, through the use of practiced movements. Therapy robots are machines or tools for rehabilitation therapists that allow patients to perform practice movements aided by the robot. The first robot used in that way, MIT-Manus, helped stroke patients to reach across a tabletop if they were unable to perform the task by themselves. Patients who received extra therapy from the robot improved the rate of their arm movement recovery. Another therapy robot, the Lokomat, supports the weight of a person and moves the legs in a walking pattern over a moving treadmill, with the goal of retraining the person to walk after spinal cord injury or stroke.

Limitations in functionality and high costs have restricted the availability of rehabilitation robots. Furthermore, teleoperating a robot arm to pick up a bottle of water and bring it to the mouth is time-consuming and requires an expensive robot. To overcome that problem, engineers have worked to build more intelligence into robot arms on wheelchairs. Making robots understand voice commands, recognize objects, and agilely manipulate objects is an important area of advance in robotics generally. Progress in neuroscience stands to significantly advance the development of rehabilitation robots by enabling the implantation of computer chips directly into the brain so that all a user has to do is “think” a command and the robot will do it. Researchers have shown that monkeys can be trained to move a robotic arm in just that fashion—through thought alone.

The major limiting factor in the development of rehabilitation robots is that researchers do not know what exactly needs to happen in order for the nervous system to adapt to overcome a physical impairment. Hard work by the patient is important, but what should the robot do? Researchers are developing rehabilitation robots that assist in movement, resist movement when it is uncoordinated, or even make movements more uncoordinated in an attempt to trick the nervous system into adapting. Advances have been made in the development of robotic exoskeletons, which are lightweight wearable devices that assist in limb movement. Other types of rehabilitation robots could play a role in assisting the nervous system to regenerate appropriate neural connections following stem cell and other medical treatments.

#robotics-2, #science

Robot Future

Numerous companies are working on consumer robots that can navigate their surroundings, recognize common objects, and perform simple chores without expert custom installation. Perhaps about the year 2020 the process will have produced the first broadly competent “universal robots” with lizardlike minds that can be programmed for almost any routine chore. With anticipated increases in computing power, by 2030 second-generation robots with trainable mouselike minds may become possible. Besides application programs, these robots may host a suite of software “conditioning modules” that generate positive- and negative-reinforcement signals in predefined circumstances.

By 2040 computing power should make third-generation robots with monkeylike minds possible. Such robots would learn from mental rehearsals in simulations that would model physical, cultural, and psychological factors. Physical properties would include shape, weight, strength, texture, and appearance of things and knowledge of how to handle them. Cultural aspects would include a thing’s name, value, proper location, and purpose. Psychological factors, applied to humans and other robots, would include goals, beliefs, feelings, and preferences. The simulation would track external events and would tune its models to keep them faithful to reality. This should let a robot learn by imitation and afford it a kind of consciousness. By the middle of the 21st century, fourth-generation robots may exist with humanlike mental power able to abstract and generalize. Researchers hope that such machines will result from melding powerful reasoning programs to third-generation machines. Properly educated, fourth-generation robots are likely to become intellectually formidable.

#robotics-2, #science

Robotics research

Dexterous industrial manipulators and industrial vision have roots in advanced robotics work conducted in artificial intelligence (AI) laboratories since the late 1960s. Yet, even more than with AI itself, these accomplishments fall far short of the motivating vision of machines with broad human abilities. Techniques for recognizing and manipulating objects, reliably navigating spaces, and planning actions have worked in some narrow, constrained contexts, but they have failed in more general circumstances.

The first robotics vision programs, pursued into the early 1970s, used statistical formulas to detect linear boundaries in robot camera images and clever geometric reasoning to link these lines into boundaries of probable objects, providing an internal model of their world. Further geometric formulas related object positions to the joint angles needed for a robot arm to grasp them, or to the steering and drive motions needed to get a mobile robot around (or to) the object. This approach was tedious to program and frequently failed when unplanned image complexities misled the first steps. An attempt in the late 1970s to overcome these limitations by adding an expert system component for visual analysis mainly made the programs more unwieldy—substituting complex new confusions for simpler failures.
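The geometric formulas relating an object's position to joint angles can be made concrete with a textbook special case: closed-form inverse kinematics for a planar two-link arm. The link lengths and target below are illustrative assumptions, not any historical system's parameters.

```python
import math

# Closed-form inverse kinematics for a planar two-link arm with link
# lengths l1 and l2: given a target (x, y) for the arm tip, compute the
# shoulder and elbow angles (one of the two mirror solutions).

def two_link_ik(x, y, l1, l2):
    d2 = x * x + y * y                                  # squared distance to target
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)                        # elbow joint angle
    shoulder = math.atan2(y, x) - math.atan2(           # shoulder joint angle
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Verify by forward kinematics: the arm tip should land on the target.
s, e = two_link_ik(1.0, 1.0, 1.0, 1.0)
tip_x = math.cos(s) + math.cos(s + e)
tip_y = math.sin(s) + math.sin(s + e)
print(round(tip_x, 6), round(tip_y, 6))  # -> 1.0 1.0
```

Early arm controllers chained exactly this kind of trigonometry, joint by joint, which is part of why unplanned complexities in the vision stage so easily derailed the whole pipeline.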

Mars Rover Research Project [Credit: © MIT, Artificial Intelligence Laboratory]

In the mid-1980s Rodny Brooks of the MIT AI lab used this impasse to launch a highly visible new movement that rejected the effort to have machines create internal models of their surroundings. Instead, Brooks and his followers wrote computer programs with simple subprograms that connected sensor inputs to motor outputs, each subprogram encoding a behaviour such as avoiding a sensed obstacle or heading toward a detected goal. There is evidence that many insects function largely this way, as do parts of larger nervous systems. The approach resulted in some very engaging insectlike robots, but—as with real insects—their behaviour was erratic, as their sensors were momentarily misled, and the approach proved unsuitable for larger robots. Also, this approach provided no direct mechanism for specifying long, complex sequences of actions—the raison d’être of industrial robot manipulators and surely of future home robots (note, however, that in 2004 iRobot Corporation sold more than one million robot vacuum cleaners capable of simple insectlike behaviours, a first for a service robot).

Pebbles [Credit: © MIT, Artificial Intelligence Laboratory]

Meanwhile, other researchers continue to pursue various techniques to enable robots to perceive their surroundings and track their own movements. One prominent example involves semiautonomous mobile robots for exploration of the Martian surface. Because of the long transmission times for signals, these “rovers” must be able to negotiate short distances between interventions from Earth.

A particularly interesting testing ground for fully autonomous mobile robot research is football (soccer). In 1993 an international community of researchers organized a long-term program to develop robots capable of playing this sport, with progress tested in annual machine tournaments. The first RoboCup games were held in 1997 in Nagoya, Japan, with teams entered in three competition categories: computer simulation, small robots, and midsize robots. Merely finding and pushing the ball was a major accomplishment, but the event encouraged participants to share research, and play improved dramatically in subsequent years. In 1998 Sony began providing researchers with programmable AIBOs for a new competition category; this gave teams a standard reliable prebuilt hardware platform for software experimentation.

While robot football has helped to coordinate and focus research in some specialized skills, research involving broader abilities is fragmented. Sensors—sonar and laser rangefinders, cameras, and special light sources—are used with algorithms that model images or spaces by using various geometric shapes and that attempt to deduce what a robot’s position is, where and what other things are nearby, and how different tasks can be accomplished. Faster microprocessors developed in the 1990s have enabled new, broadly effective techniques. For example, by statistically weighing large quantities of sensor measurements, computers can mitigate individually confusing readings caused by reflections, blockages, bad illumination, or other complications. Another technique employs “automatic” learning to classify sensor inputs—for instance, into objects or situations—or to translate sensor states directly into desired behaviour. Connectionist neural networks containing thousands of adjustable-strength connections are the most famous learners, but smaller, more-specialized frameworks usually learn faster and better. In some, a program that does the right thing as nearly as can be prearranged also has “adjustment knobs” to fine-tune the behaviour. Another kind of learning remembers a large number of input instances and their correct responses and interpolates between them to deal with new inputs. Such techniques are already in broad use for computer software that converts speech into text.
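One concrete form of "statistically weighing large quantities of sensor measurements" is inverse-variance weighting, where noisier readings contribute less to the fused estimate. The sketch below is a simplified illustration with made-up sonar/laser-style numbers, not any particular robot's code.

```python
# Inverse-variance sensor fusion: combine several noisy measurements of
# the same quantity, weighting each by 1/variance so that an individually
# confusing reading (e.g. a sonar ping corrupted by a reflection) is
# largely outvoted by the cleaner ones.

def fuse(measurements):
    """measurements: list of (value, variance) pairs for one quantity,
    e.g. distance-to-wall readings from sonar, laser, and vision."""
    weights = [1.0 / var for _, var in measurements]
    estimate = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)   # fused estimate is less uncertain
    return estimate, fused_variance

# Two good readings near 2.0 m and one wild reflection at 5.0 m with
# high variance: the outlier barely moves the estimate.
est, var = fuse([(2.0, 0.01), (2.1, 0.01), (5.0, 1.0)])
print(round(est, 3))  # -> 2.065
```

The same principle, applied recursively over time rather than over sensors, underlies the probabilistic localization filters that the faster microprocessors of the 1990s made practical.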

#robotics-2, #science

Robot toys

AIBO [Credit: Courtesy of Sony Electronics Inc.]

Lack of reliable functionality has limited the market for industrial and service robots (built to work in office and home environments). Toy robots, on the other hand, can entertain without performing tasks very reliably, and mechanical varieties have existed for thousands of years.  In the 1980s microprocessor-controlled toys appeared that could speak or move in response to sounds or light. More advanced ones in the 1990s recognized voices and words. In 1999 the Sony Corporation introduced a doglike robot named AIBO, with two dozen motors to activate its legs, head, and tail, two microphones, and a colour camera all coordinated by a powerful microprocessor. More lifelike than anything before, AIBOs chased coloured balls and learned to recognize their owners and to explore and adapt. Although the first AIBOs cost $2,500, the initial run of 5,000 sold out immediately over the Internet.

#robotics-2, #science

Industrial robots

Robotics: industrial robot at a factory [Credit: © Index Open]

Though not humanoid in form, machines with flexible behaviour and a few humanlike physical attributes have been developed for industry. The first stationary industrial robot was the programmable Unimate, an electronically controlled hydraulic heavy-lifting arm that could repeat arbitrary sequences of motions. It was invented in 1954 by the American engineer George Devol and was developed by Unimation Inc., a company founded in 1956 by American engineer Joseph Engelberger. In 1959 a prototype of the Unimate was introduced in a General Motors Corporation die-casting factory in Trenton, New Jersey. In 1961 Condec Corp. (after purchasing Unimation the preceding year) delivered the world’s first production-line robot to the GM factory; it had the unsavoury task (for humans) of removing and stacking hot metal parts from a die-casting machine. Unimate arms continue to be developed and sold by licensees around the world, with the automobile industry remaining the largest buyer.

bacterial genetics: use of robots [Credit: University College Cork, Ireland (A Britannica Publishing Partner)]

More advanced computer-controlled electric arms guided by sensors were developed in the late 1960s and 1970s at the Massachusetts Institute of Technology (MIT) and at Stanford University, where they were used with cameras in robotic hand-eye research. Stanford’s Victor Scheinman, working with Unimation for GM, designed the first such arm used in industry. Called PUMA (Programmable Universal Machine for Assembly), these arms have been used since 1978 to assemble automobile subcomponents such as dash panels and lights. PUMA was widely imitated, and its descendants, large and small, are still used for light assembly in electronics and other industries. Since the 1990s small electric arms have become important in molecular biology laboratories, precisely handling test-tube arrays and pipetting intricate sequences of reagents.

Mobile industrial robots also first appeared in 1954. In that year a driverless electric cart, made by Barrett Electronics Corporation, began pulling loads around a South Carolina grocery warehouse. Such machines, dubbed AGVs (Automatic Guided Vehicles), commonly navigate by following signal-emitting wires entrenched in concrete floors. In the 1980s AGVs acquired microprocessor controllers that allowed more complex behaviours than those afforded by simple electronic controls. In the 1990s a new navigation method became popular for use in warehouses: AGVs equipped with a scanning laser triangulate their position by measuring reflections from fixed retro-reflectors (at least three of which must be visible from any location).
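A scanning laser actually measures bearing angles to the retro-reflectors; as a simplified stand-in, the sketch below fixes an AGV's position from *distances* to three reflectors at known positions (trilateration), which reduces to a small linear system. The reflector layout and ranges are invented for illustration.

```python
# Trilateration: locate the AGV from measured distances r1..r3 to three
# reflectors at known positions p1..p3 (a simplified proxy for the
# angle-based laser triangulation described in the text).

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise cancels the x^2 + y^2
    # terms, leaving two linear equations in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1        # zero if the reflectors are collinear
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Reflectors at three warehouse corners; ranges measured from (3, 4).
x, y = trilaterate((0, 0), 5.0, (10, 0), 8.062, (0, 10), 6.708)
print(round(x, 1), round(y, 1))  # -> 3.0 4.0
```

The `det` check explains the "at least three visible, non-collinear" requirement: with fewer reflectors, or all of them in a line, the position is not uniquely determined.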

Although industrial robots first appeared in the United States, the business did not thrive there. Unimation was acquired by Westinghouse Electric Corporation in 1983 and shut down a few years later. Cincinnati Milacron, Inc., the other major American hydraulic-arm manufacturer, sold its robotics division in 1990 to the Swedish firm of Asea Brown Boveri Ltd. Adept Technology, Inc., spun off from Stanford and Unimation to make electric arms, is the only remaining American firm. Foreign licensees of Unimation, notably in Japan and Sweden, continue to operate, and in the 1980s other companies in Japan and Europe began to vigorously enter the field. The prospect of an aging population and consequent worker shortage induced Japanese manufacturers to experiment with advanced automation even before it gave a clear return, opening a market for robot makers. By the late 1980s Japan—led by the robotics divisions of Fanuc Ltd., Matsushita Electric Industrial Company, Ltd., Mitsubishi Group, and Honda Motor Company, Ltd.—was the world leader in the manufacture and use of industrial robots. High labour costs in Europe similarly encouraged the adoption of robot substitutes, with industrial robot installations in the European Union exceeding Japanese installations for the first time in 2001.

#robotics-2, #science

Robot

Robot

Humanoid robot [Credit: American Honda Motor Co., Inc.]

A robot is any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner. By extension, robotics is the engineering discipline dealing with the design, construction, and operation of robots.

“Metropolis”: still with Abel, Helm, and Klein-Rogge from “Metropolis” [Credit: From a private collection]

The concept of artificial humans predates recorded history, but the modern term robot derives from the Czech word robota (“forced labour” or “serf”), used in Karel Čapek’s play R.U.R. (produced in 1921). The play’s robots were manufactured humans, heartlessly exploited by factory owners until they revolted and ultimately destroyed humanity. Whether they were biological, like the monster in Mary Shelley’s Frankenstein (1818), or mechanical was not specified, but the mechanical alternative inspired generations of inventors to build electrical humanoids.

The word robotics first appeared in Isaac Asimov’s science-fiction story Runaround (1942). Along with Asimov’s later robot stories, it set a new standard of plausibility about the likely difficulty of developing intelligent robots and the technical and social problems that might result. Runaround also contained Asimov’s famous Three Laws of Robotics:

  • 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

#robotics-2