In developing robots, engineers often draw inspiration from biological models. Usually they look to animals such as insects, snakes, or fish, but plants can also provide valuable ideas. Researchers at Stanford University, for example, have developed the "vine robot", whose locomotion resembles the growth of a vine, only much faster.
Even on slippery surfaces
As Margaret M. Coad explained in her presentation at the robotics conference ICRA, this robot consists of a base station in which a folded tube of soft plastic is wound onto a spool. As the tube is unwound, its interior is turned inside out at the tip, so that it grows there piece by piece. Because the tube itself, much like a plant root, does not move relative to its surroundings, it can easily traverse even slippery or sticky surfaces, says Coad. Its direction can be controlled with a special joystick whose shape is modeled on the tube.
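This growth principle can be sketched in a few lines of code (a toy model of ours with made-up numbers, not code from Coad's group): new material is added only at the tip, so every point of the body already in contact with the ground stays put.

```python
# Toy 1-D model of tip growth by eversion: tube material fed from the spool
# is added only at the tip, so points already laid down never move.
# Names and numbers are illustrative, not from Coad's presentation.

def grow(tip, body, n_segments, step=0.1):
    """Advance the tip by n_segments * step metres, laying down fixed points."""
    for _ in range(n_segments):
        tip = round(tip + step, 6)
        body.append(tip)  # this contact point will never move again
    return tip

tip, body = 0.0, [0.0]
tip = grow(tip, body, 5)      # unwind 0.5 m of tube
snapshot = list(body)         # positions of material laid down so far
tip = grow(tip, body, 3)      # unwind another 0.3 m

# Earlier points are unchanged: the robot extends without dragging its body,
# which is why slippery or sticky ground does not hinder it.
assert body[:len(snapshot)] == snapshot
print(f"tip at {tip} m with {len(body)} stationary contact points")
```

Because no part of the body slides along the ground, friction never has to be overcome; only the tip does work against the environment.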
So far, there have been several feasibility studies for navigation and exploration tasks with this technological approach, says Coad. In addition, the vine robot was the only one of seven participants in the 2018 soft robot navigation competition to complete all four tasks on the first attempt: it found its way through loosely placed cones without knocking any over, climbed wooden stairs, squeezed through narrow passages, and crossed a sandbox.
Insights into cavities
Now, says Coad, the robot has for the first time been tested in a real-world deployment: at an archaeological excavation site in Peru, it provided views into winding cavities that are inaccessible to humans. For this, the tube, previously made of soft polyethylene, was instead made of more robust TPU-coated ripstop nylon and equipped with a camera at the tip. This camera sits in a rigid, transparent shell whose inner diameter is slightly larger than that of the tube, so that it is pushed forward by the growing tube.
Stanford researchers develop vine-like, growing robot
Motion capture system
The vine robot is not the only plant-like robot the Stanford team is experimenting with. In another conference contribution, Fabio Stroppa presented a different method for controlling such robots: the body interface. Here, the movements of a human arm are recorded by a motion capture system and transferred to the robot. Its base is mounted on the ceiling, so that the soft polyurethane arm grows in the direction of gravity. Its maximum length is 1.5 m, its diameter 10 cm.
The body interface uses seven markers. Three on the upper body serve as reference points that allow the operator full freedom of movement. Two markers on the elbow and two on the wrist capture the movements of the arm. In this way, the robot arm can be steered both vertically and laterally. Rotations of the wrist are converted into rotations of the gripper.
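How the torso reference markers give the operator that freedom of movement can be sketched roughly as follows (our own illustration with invented names, not Stroppa's implementation): the forearm direction is expressed relative to the torso's forward axis, so the same gesture produces the same command no matter where the operator stands or which way they face.

```python
import math

# Rough 2-D sketch of marker-based steering: project the elbow-to-wrist
# direction into the torso's reference frame. Names are illustrative.

def normalize(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def steer_command(torso_forward, elbow, wrist):
    """Map a 2-D forearm pose into (along, lateral) steering components."""
    arm = normalize((wrist[0] - elbow[0], wrist[1] - elbow[1]))
    fwd = normalize(torso_forward)
    along = arm[0] * fwd[0] + arm[1] * fwd[1]      # dot product
    lateral = arm[0] * fwd[1] - arm[1] * fwd[0]    # signed 2-D cross product
    return along, lateral

# Pointing straight along the torso axis gives no lateral steering ...
cmd_a = steer_command((1.0, 0.0), elbow=(0.0, 0.0), wrist=(0.5, 0.0))
# ... and the same gesture after the operator turns 90 degrees is identical.
cmd_b = steer_command((0.0, 1.0), elbow=(0.0, 0.0), wrist=(0.0, 0.5))
assert cmd_a == cmd_b == (1.0, 0.0)
```

The wrist-to-gripper rotation mapping mentioned above could be handled analogously; a real 3-D implementation would use the full rotation between the torso and forearm marker frames.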
Grasp and let go
In an experiment with 12 participants, objects were to be picked up from a platform and placed on pedestals of different heights. The accuracy of placement and the time required were measured, as was the participants' workload, using the NASA-TLX questionnaire. 97 percent of the placements were successful, with placing requiring more time than grasping, depending on the distance to the respective target. But, as Stroppa stresses, that was only because the robot arm needed longer to grow there, not because of increased difficulty. In fact, the participants perceived grasping as slightly more difficult.
An improved sense of touch could help with more than just grasping; the interaction of people with soft robots would also benefit from it, says Isabella Huang, who works on this topic at the University of California, Berkeley. Soft robots are inherently safe and therefore particularly well suited for physical contact with people. Common electronic sensors, however, are rigid. Huang instead works with an inflated silicone shell whose deformation is measured from the inside by a depth camera.
The geometric shape of objects that come into contact with the sensor can easily be determined this way, says Huang. Measuring force, however, is a bigger problem. Huang tested the sensor's force perception in two different application scenarios: in one, it was to recognize the pressure of a human finger trying to correct the robot's position. In the other, the robot was to support a human forearm while following its movements. For both scenarios, neural networks were trained on experimentally collected data. They recognized the tactile signals very well, but only in the scenarios whose data they had been trained on. The combination of both networks, however, proved inadequate in both scenarios.
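Why the shape part is the easy part can be seen in a minimal sketch (our own illustration, not Huang's pipeline): with a depth camera looking at the inside of the inflated shell, contact geometry shows up directly as the difference between a reference depth map and the current one.

```python
# Toy shape sensing with an internal depth camera: pressing on the shell
# moves it toward the camera, so indented pixels have smaller depth values.
# Grid, depths, and threshold are invented for illustration.

REFERENCE = [[50] * 4 for _ in range(4)]  # depth (mm) to the shell at rest
DEFORMED = [
    [50, 50, 50, 50],
    [50, 44, 45, 50],   # a fingertip pressing in brings the shell
    [50, 45, 44, 50],   # closer to the camera in a compact patch
    [50, 50, 50, 50],
]

def contact_patch(ref, cur, noise_mm=2):
    """Return pixels whose inward deformation exceeds the noise threshold."""
    return [(r, c)
            for r, row in enumerate(ref)
            for c, depth in enumerate(row)
            if depth - cur[r][c] > noise_mm]

patch = contact_patch(REFERENCE, DEFORMED)
max_indent = max(REFERENCE[r][c] - DEFORMED[r][c] for r, c in patch)
assert patch == [(1, 1), (1, 2), (2, 1), (2, 2)]
print(f"contact over {len(patch)} pixels, max indentation {max_indent} mm")
```

Going from this recovered geometry to the applied force is the harder problem described above; it requires either trained networks or, as Huang suggests, a physical model of the elastic shell.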
A significantly broader and more varied database of interactions would be required if the sensor's function were to rely exclusively on learning methods, says Huang. She considers the development of a model-based approach more promising, one "that uses the knowledge about the physics of elastic shells". That would make "more complicated interactions" with the sensor possible: it could then recognize, for example, when it is being twisted.