Issue 58, January 2015
• Artificial Intelligence
• Innovation: Charlie the Model Chimpanzee: The First Robot with a Flexible Spine and Sensitive Feet
• Interview with Artificial Intelligence Expert Prof. Dr. Wolfram Burgard
• A New Era of Safe and Intelligent Mobility with Car-to-X Communication
• ASSAM - Assistants for Safe Mobility: ICT for Aging Well
• MiroSurge: Minimally Invasive Robotic Telesurgery
Artificial Intelligence 

It's been four years since IBM's supercomputer Watson defeated two of the world's leading Jeopardy! champions in a man-versus-machine challenge. This success marked a major breakthrough in Artificial Intelligence (AI), the field of study concerned with building machines that can perform tasks at a level on par with human intelligence and perception.

 

Knowledge representation and inference have long played an integral role in the history of AI research, which dates back to the 1950s, the infancy of computing. The goal of AI is to design intelligent, autonomous machines that can not only reason, plan, and learn, but also sense and interact with their surrounding physical environments and use language to communicate with others at an advanced level. Future society, according to a recent New York Times article, will be a "world in which cars drive themselves, machines recognize people and 'understand' their emotions, and humanoid robots travel unattended, performing everything from mundane factory tasks to emergency rescues."

 

Three important breakthroughs, namely Big Data, better algorithms, and cheap parallel computation, have recently enabled a wave of exciting new AI-based services. From GPS navigation systems and Apple's Siri to Google's self-driving car, AI's multifaceted applications are revolutionizing many sectors of life as we know it today. Fields of application include machine learning, data mining, drones, vision, natural language processing and translation, medical diagnosis, and space exploration.

 

The German Research Center for Artificial Intelligence (DFKI), the largest research center worldwide for AI R&D, is pioneering ambitious projects in a variety of related fields. These projects range from the development of technologies to enable the Internet of Things and augmented reality to the advancement of robotics in underwater, search-and-rescue, logistics, e-mobility, and rehabilitation realms.
 
Climbing down steep crater walls in search of frozen water on the moon, for example, requires a robot to be both autonomous and flexible. In the future, these skills will become increasingly important for mobile robots. To meet these demands, scientists at the German Research Center for Artificial Intelligence (DFKI) Robotics Innovation Center and the University of Bremen have developed an ape-like robot named "Charlie" - with an actuated spinal column and sensor feet - for better traction and stability in uneven environments like lunar terrain.

Charlie is a hominid robotic system, equipped with multi-point contact feet and an active, artificial spine. The robot's front and rear are connected via a flexible spinal structure that offers movement in six spatial directions. Likewise, the robot's foot and ankle structures support the system's movement in terms of traction and stability. Altogether, the robot has more than 330 sensors, which are as self-contained as possible, allowing Charlie to respond to external disturbances with only minor delay.

For the time being, the robot's quadrupedal posture provides a more stable stance, better suited to explorations of rough, uneven environments like moon craters. So far, the robot has walked in many different test environments, at a range of walking speeds, on various surfaces, and at inclinations ranging from -20 to 20 degrees. The robot can shift its center of mass in real time based on the slope it is walking on. Since the robot can also stand up on two legs like a human, advanced applications in a bipedal posture may also be possible, such as using the front extremities for manipulation tasks.
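The idea behind slope-dependent shifting of the center of mass can be illustrated with a bit of trigonometry. The sketch below is a hypothetical simplification for illustration only, not DFKI's actual controller: it computes how far a robot must lean uphill so that its center of mass stays projected over the middle of its support polygon.

```python
import math

def com_shift(com_height_m: float, slope_deg: float) -> float:
    """Horizontal (uphill) shift required so that the center of
    mass projects onto the middle of the support polygon when
    standing on an incline of the given angle (simplified model)."""
    return com_height_m * math.tan(math.radians(slope_deg))

# On a 20-degree incline (the upper end of the tested range),
# an assumed 0.3 m CoM height calls for a lean of about 0.11 m.
lean = com_shift(0.3, 20.0)
```

The CoM height of 0.3 m is an invented value for the example; the point is only that the required lean grows with the tangent of the slope angle.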

Charlie was designed over the course of the DFKI project "iStruct - Intelligent Structures for Mobile Robots." iStruct received 3.3 million euros in funding from the Space Agency of the German Aerospace Center (DLR) and the German Federal Ministry for Economic Affairs and Energy (BMWi).

To watch Charlie stand, walk, and balance, click here.

Source & Image: © Daniel Kühn, Robotics Innovation Center, DFKI GmbH


Prof. Dr. Wolfram Burgard, Professor of Computer Science and Head of the Research Lab for Autonomous Intelligent Systems at the University of Freiburg, is well-respected for his research on artificial intelligence and mobile robots. Over the past years, his group has developed a series of innovative probabilistic techniques for robot navigation and control, covering a variety of areas, such as localization, map-building, simultaneous localization and mapping (SLAM), path-planning, and exploration.
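Monte Carlo localization, one of the particle-filter techniques Burgard and colleagues helped popularize, can be sketched in a few lines. The following is an illustrative 1-D toy version, not code from his group; the corridor setup, landmark position, and noise parameters are all invented for the example.

```python
import math
import random

def mcl_step(particles, control, measured_range,
             landmark=5.0, motion_noise=0.1, sensor_sigma=0.5):
    """One predict/weight/resample cycle of Monte Carlo
    localization for a robot in a 1-D corridor with a single
    landmark whose distance the robot can measure."""
    # Predict: apply the motion command with noise to each particle.
    moved = [p + control + random.gauss(0.0, motion_noise)
             for p in particles]
    # Weight: particles whose predicted landmark distance matches
    # the measurement receive higher (Gaussian) likelihood.
    weights = [math.exp(-(abs(landmark - p) - measured_range) ** 2
                        / (2.0 * sensor_sigma ** 2)) for p in moved]
    # Resample: draw a new particle set in proportion to weight.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeating this cycle concentrates the particle cloud around positions consistent with both the motion commands and the sensor readings, which is the core of probabilistic localization.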


In his interview with GCRI, Prof. Dr. Burgard describes how Artificial Intelligence is currently transforming society and will continue to influence human behavior in the future. He also explains when and how he became involved with AI and a few interesting projects that his group is currently working on. Lastly, he discusses the areas of AI in which Germany is a global leader. To read the full interview, click here.

In 2008, his group developed an approach that allowed a car to autonomously navigate through a complex parking garage and park itself. In 2012, his group created the robot Obelix that autonomously navigated like a pedestrian from the campus of the Faculty of Engineering to the city center of Freiburg. Since 2012, Prof. Dr. Burgard has served as Coordinator of the Cluster of Excellence BrainLinks-BrainTools, which aims to develop adaptive, robust applications of brain-machine interface systems.

Prof. Dr. Burgard is a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI), the Association for the Advancement of Artificial Intelligence (AAAI), and the Institute of Electrical and Electronics Engineers (IEEE). He is also a recipient of the 2009 Gottfried Wilhelm Leibniz Prize, the most prestigious German research award. In 2010, Prof. Dr. Burgard received a European Research Council Advanced Grant, which enables exceptional, established research leaders to pursue ground-breaking, high-risk projects that open new directions in their respective research fields or other domains.

A prolific author, Prof. Dr. Burgard has published over 250 papers and articles for conferences and journals on robotics and artificial intelligence. He has also co-authored two books, including one on robot perception and control in the face of uncertainty. For a complete list of Prof. Dr. Burgard's projects, click here.
 

Image: © Emil Bezold


Imagine a future where cars can talk and network with each other, warning drivers of potential hazards beyond their field of vision and alerting motorists to traffic jams and broken-down vehicles on the roads ahead.

This vision is now becoming a reality thanks to Mercedes-Benz's "Intelligent Drive" strategy. Since December 2013, Car-to-X technology within Mercedes-Benz passenger cars has enabled vehicle-to-vehicle information exchange as well as vehicle-to-traffic infrastructure communication. Through the use of Car-to-X communication, information on imminent dangers can be relayed to drivers in real-time so that they can take appropriate action, such as reducing speed, changing lanes, or following alternative routes. This information may even help prevent hazardous situations from arising in the first place.
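The kind of hazard message exchanged via Car-to-X communication can be sketched as follows. This is a deliberately simplified, hypothetical format — real deployments use standardized messages such as ETSI's Decentralized Environmental Notification Messages — meant only to illustrate the broadcast-and-filter idea.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HazardWarning:
    """Simplified stand-in for a Car-to-X hazard message."""
    event: str        # e.g. "breakdown", "traffic_jam", "black_ice"
    latitude: float
    longitude: float
    timestamp: float
    sender_id: str

def encode(msg: HazardWarning) -> bytes:
    """Serialize a warning for broadcast to nearby vehicles."""
    return json.dumps(asdict(msg)).encode("utf-8")

def is_relevant(msg: HazardWarning, my_lat: float, my_lon: float,
                radius_deg: float = 0.05) -> bool:
    """A receiving car keeps only warnings in its rough vicinity."""
    return (abs(msg.latitude - my_lat) < radius_deg
            and abs(msg.longitude - my_lon) < radius_deg)

warning = HazardWarning("breakdown", 48.77, 9.18, time.time(), "car-42")
packet = encode(warning)
```

A receiving vehicle would decode such packets and surface only the relevant ones to the driver, for example by suggesting a lane change or an alternative route.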

Car-to-X technology significantly expands the scope of existing vehicle sensors like radar or camera systems. It enables motorists to see around corners and beyond obstacles, thereby helping to reduce blind spots with which current sensor systems struggle. The technology's greatest potential thus lies in its expansion of the telematic horizon. With Car-to-X communication, Mercedes-Benz has created a market-ready base technology that will enable the development of a new generation of driver assistance systems.

In addition to enhancing safety and convenience, Car-to-X technology also facilitates more efficient mobility. By using the highly precise information on traffic conditions available via Car-to-X communication, the technology can help improve the flow of traffic, such as by controlling traffic signals. Smoother traffic in turn provides environmental benefits and financial savings like decreased fuel consumption and reduced congestion and accident-related costs.

Daimler was the first auto manufacturer to bring Car-to-X technology to the road and has been a driving force behind the research and development of this technology for decades. As a founding member of the Car 2 Car Communication Consortium, Daimler is actively working on a Europe-wide standardized system for Car-to-X communication. By serving as the project manager of a German and European field trial for the practical testing of Car-to-X communication, Daimler is also driving the preparations for the market launch of Car-to-X systems.

To watch a video about Car-to-X technology, click here.

Source & Image: © Daimler AG 

 

Source: Prof. Dr. Bernd Krieg-Brückner, Bremen Ambient Assisted Living Lab, Cyber-Physical Systems, DFKI 

 

In response to current demographic shifts, researchers are designing new devices to address the needs of an aging population. The Assistants for Safe Mobility (ASSAM) project aims to develop new technologies that help individuals maintain their mobility by compensating for age-related declines in physical or cognitive abilities, such as diminished strength, stability, vision, or orientation. As individual user requirements grow, a commercially available walker or wheelchair, as well as a new tricycle, can be upgraded with affordable, self-contained hardware and software modules to support people's mobility as they age.


The sensors of the OdoWheel, for example, provide speed and directional data to an Android tablet, so that the vehicle's Navigation Aid can combine them with GPS and OpenStreetMap (OSM) map data for accurate localization, ensuring safe outdoor navigation for the user. As an alternative to a directional arrow on the display screen or voice commands in an earpiece, two iHandleBars provide tactile or visual turn instructions. Electric iWheels give the walker the extra power needed to climb ramps or brake on dangerous downhill slopes, a feature also offered by the electric tricycle. Furthermore, the Navigation Assistant uses laser scanners to detect barriers, steps, or holes in the ground, to circumvent obstacles by proactive steering, to keep the wheelchair safely on the sidewalk, to maneuver through narrow door frames, and to enable fully autonomous driving.
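Odometry data of the kind supplied by the OdoWheel can be turned into a position estimate by simple dead reckoning, which a navigation aid could then fuse with GPS and map data. The sketch below is an illustrative simplification with invented function and parameter names, not part of the ASSAM software.

```python
import math

def odometry_update(x, y, heading, speed, turn_rate, dt):
    """Dead-reckoning pose update from wheel speed (m/s) and
    turn rate (rad/s) over a time step dt (s)."""
    heading += turn_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Roll straight ahead at walking pace (1 m/s) for two seconds.
pose = odometry_update(0.0, 0.0, 0.0, speed=1.0, turn_rate=0.0, dt=2.0)
```

Because such estimates drift over time, a real system would periodically correct them with absolute references such as GPS fixes or recognized map features.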

Additionally, the Emergency Connection feature gives users an extra sense of security in unfamiliar environments, even when traveling abroad, by wirelessly connecting the mobility device to an off-site caregiver. The caregiver can interact with the user to assess dangerous situations, access his or her personal medical profile, inspect the scene via an on-board camera (when permitted), and provide navigation assistance online.

From 2012 to 2015, ASSAM is funded by the Ambient Assisted Living Joint Programme of the EU and the national ministries of Germany, Spain, and the Netherlands. Coordinated by the German Research Center for Artificial Intelligence (DFKI), the consortium includes nine research institutions, industrial partners, and end-user organizations.

To learn more, watch these videos. For further information, visit www.assam-project.eu or email Bernd.Krieg-Brueckner@dfki.de.

Image: © DFKI / Annemarie Hirth

 

Source: German Aerospace Center (DLR), Institute of Robotics and Mechatronics

 

Conventional minimally invasive surgery (MIS), or laparoscopic surgery, is performed through small incisions in a patient's skin to help preserve healthy tissue. The surgeon works with long, slender instruments, away from the area of operation. This arrangement is challenging, however, because it impairs hand-eye coordination: to move the tip of an instrument inside the body to the left, the surgeon has to move the handle to the right, so the handling is not very intuitive. Furthermore, the lack of direct manual contact makes tasks such as distinguishing tissue stiffness or localizing a tumor more difficult, which is why force sensors are very important.
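The inverted handling stems from the fulcrum effect: the incision acts as a pivot, so lateral handle motion is mirrored at the instrument tip. A minimal sketch of that mapping follows; it is a simplification with an invented lever ratio, not DLR code.

```python
def tip_motion(handle_dx: float, handle_dy: float,
               lever_ratio: float = 1.0) -> tuple:
    """Lateral handle motion is mirrored at the instrument tip,
    scaled by the ratio of instrument length inside vs. outside
    the body (the incision acts as the pivot point)."""
    return (-handle_dx * lever_ratio, -handle_dy * lever_ratio)

# Moving the handle to the right (+x) drives the tip to the left.
dx, dy = tip_motion(1.0, 0.0)
```

A telesurgery system like the one described below removes this burden in software: the console can re-map the surgeon's hand motion so that the tip follows it directly.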

 

For these reasons, many sophisticated procedures still cannot be performed in a minimally invasive fashion. To overcome the drawbacks of conventional MIS, articulated instruments can be held by specialized robotic arms and remotely commanded by the surgeon who comfortably sits at an input console. The surgeon virtually regains direct access to the operating area by having 3-D endoscopic sight, force feedback, and restored hand-eye-coordination. 

 

The second generation of the DLR telesurgery configuration (the MiroSurge system) includes an input console as well as a teleoperator consisting of three surgical robots (MIRO). Usually, two or three MIROs carry surgical instruments (MICA), while one more MIRO can automatically guide a stereo video laparoscope, whose video stream is displayed to the surgeon at the master console. The DLR MIRO robot can assist a surgeon directly at the operating table, where space is limited. The planned scope of applications for this robotic arm extends beyond the multi-robot concept for minimally invasive surgery described here, ranging from guiding a laser unit for the precise separation of bone tissue in orthopedics to setting holes for bone screws. Additional applications include robot-assisted endoscope guidance and robot-guided water jet surgery.

 

Seven torque-controlled joints enable a flexible operating room setup and can be used to avoid collisions with other robots or operating room equipment. The system can either be teleoperated or directly manipulated by the surgeon via a hands-on mode. The design vision of the DLR MIRO is a compact, slim, and lightweight (<10 kg) robotic arm as a versatile core component for various existing and future medical robotic procedures. By adding specialized instruments, the MIRO robot can be adapted for many different surgical procedures.  


To watch a video about the DLR MiroSurge, click here.
 

Image: © Holger Urbanek, DLR
 

MOSCOW        NEW DELHI       NEW YORK       SÃO PAULO       TOKYO