
Designing a Sociable Humanoid Robot for Interdisciplinary Research


Matthias Hackel(1), Stefan Schwope(1), Jannik Fritsch(2,*), Britta Wrede(2) and Gerhard Sagerer(2)

(1) MABOTIC - Robotics & Automation, Hürth, Germany, {hackel,schwope}@mabotic.de
(2) Faculty of Technology, Bielefeld University, Germany, {jannik,bwrede,sagerer}@TechFak.Uni-Bielefeld.de

Abstract

This paper presents the humanoid robot BARTHOC and its smaller but technically identical twin, BARTHOC Junior.

Both robots have been developed to study human-robot interaction (HRI). The main focus of BARTHOC's design was to make the expression and behavior of the robot as human-like as possible. This allows the platform to be applied to manifold research and demonstration areas. With its human-like look and mimic possibilities it differs from other platforms like ASIMO or QRIO, and enables experiments even close to Mori's 'uncanny valley'. The paper describes details of the mechanical and electrical design of BARTHOC together with its PC control interface and gives an overview of the interaction architecture.

Its humanoid appearance allows limited imitation of human behavior. The basic interaction software running on BARTHOC has been completely ported from a mobile robot, except for some functionalities that could not be used due to hardware differences such as the lack of mobility. Based on these components, the robot's human-like appearance will enable us to study embodied interaction and to explore theories of human intelligence.

Keywords: Humanoid Robot, Human-Robot Interaction, Embodied Communication.

1 Introduction

Intelligent interaction between humans and machines is one of the main challenges of robotics research, where human-human interaction is often seen as the gold standard to be reached. This entails that the robot needs to be equipped with perceptual as well as interactional capabilities. One important capability with respect to social interaction is the detection of, and reaction to, the user's affective evaluation of a situation or, more generally, his or her mood [1].

While much effort has already been invested in developing robots that can communicate with humans, the development of humanoid robots bears many new research potentials, especially in the field of human-robot communication.

* J. Fritsch is now with the Honda Research Institute Europe GmbH in Offenbach, Germany.


For human-like communication, especially speech, gestures, and facial expressions have to be considered [2-6], in both perception and production(1). In this paper we present a robot whose perceptual components include the detection of human persons and, based on this, basic cues to detect their intent to interact, as well as speech and gesture recognition. Given our goal to investigate human-robot interaction, the action capabilities of our robot are mainly focused on communication, such as the production of facial and gestural expressions, head turns to indicate gaze shifts and thus signal shifts of attention, and speech production by a grounding-based dialog module. Actual service tasks such as vacuum cleaning or washing the dishes are thus not in the focus of our research on BARTHOC.

One aspect of the attraction of humanoid robots lies in their complexity, which reflects our own complexity. Even if the interaction modules developed so far are relatively poor when compared to human capabilities, their integration on a humanoid robot leads to surprisingly realistic human-like robot behavior. Moreover, a robot with a humanoid appearance, having approximately the same actuators as a human body, will allow us to study models of embodied interaction derived from human-human communication. Building a humanoid robot, therefore, is a unique chance to study human-like embodied interaction in a constructivist sense, with the goal of developing algorithms suitable for implementing human-like capabilities on a robot. In this paper we present the humanoid robot BARTHOC (Bielefeld Anthropomorphic RoboT for Human-Oriented Communication), which has been developed in a tight cooperation between the manufacturer and the researchers who will use BARTHOC for studying embodied interaction.

In the following we will describe BARTHOC's anatomy and its overall appearance. Section 3 describes details of the electronics and the low-level control software that operates the different degrees of freedom. A sketch of the different interaction components and the overall architecture that is used to control BARTHOC's behavior for interacting with multiple persons simultaneously in real time is outlined in Section 4.

2 BARTHOC's Anatomy

The hardware platform of BARTHOC is based on a former prototype, H10 [7], and has benefited from many improvements. Its mechanics consist of a mechatronic head and two arms including hands. These components are mounted on a steel-frame backbone. Though BARTHOC is a non-mobile humanoid robot, it could achieve mobility, e.g., by using a wheelchair [8] as a basis. However, for the suggested fields of application the immobile design is sufficient. To give an overview of BARTHOC's performance and possibilities, some mechanical details of its anatomy will be explained in the following. The system control, the number of degrees of freedom, and the mechanical solutions of BARTHOC Junior are nearly the same as for BARTHOC. The main difference between the twin robots is the size and weight of the used parts (e.g., smaller DC motors or lighter synthetic material instead of aluminium), as BARTHOC Junior has approximately the size of a four-year-old child. In the following, all specifications about BARTHOC's hardware also apply to BARTHOC Junior, unless stated otherwise.

(1) Other human communication channels, like the magic of direct eye contact or pheromones, are interesting as well, but are not discussed here.


Figure 1: Draft: Joints and angular paths of the torso and right arm. Not drafted: each finger of the hand can be bent independently by servos.

Figure 2: Photo: BARTHOC. The self-supporting DC motors in the upper arms and a toothed belt driving the shoulder turning joint are visible. All the power and control electronics to drive the motors and servos fit inside the torso; see Section 3.1 for more details.

2.1 Torso, Arms, and Hands

The realized joints and their corresponding angular paths are drafted in Fig. 1. These degrees of freedom (DOFs) supply BARTHOC with mechanical possibilities that comply with human ones. However, BARTHOC's speed of motion and payload are dramatically reduced in that comparison. Additionally, it is not able to bend its back, which would be important for tracking a moving object with its whole body [9]. Nevertheless, the given DOFs allow BARTHOC to perform human gestures and simple grasping actions in a satisfying way.

The joints of the hip, shoulders, and upper and lower arms are driven by planetary-geared DC motors with position feedback via precision potentiometers. Since no joint rotation is greater than 180 degrees, this method is reliable. To ensure a light construction of the arms, the DC motors are used in a self-supporting way. The main axis of motor and gear always runs parallel to the corresponding "bone". To redirect the turning moment, spiral bevel gears are used where needed. Some of these details are shown in Fig. 2. The shoulder lifting is simulated by lifting only the right and left parts of the neck instead of the complete arms. As the mechanics of the torso, arms, and hands will be covered by a usual shirt and gloves, this form of neck lifting is a sufficient approximation for expressing shoulder lifting.
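The potentiometer-based position feedback can be illustrated with a short sketch. This is a hypothetical illustration, not BARTHOC's actual firmware; the ADC resolution and the angular offset are assumed values:

```python
def adc_to_joint_angle(adc_value: int, adc_max: int = 1023,
                       range_deg: float = 180.0,
                       offset_deg: float = -90.0) -> float:
    """Map a raw ADC reading from a joint potentiometer to an angle in degrees.

    A single-turn precision potentiometer is unambiguous here because no
    joint rotation is greater than 180 degrees, so the sweep never wraps.
    ADC resolution (10 bit) and the -90..+90 degree convention are assumed.
    """
    if not 0 <= adc_value <= adc_max:
        raise ValueError("ADC reading out of range")
    return offset_deg + (adc_value / adc_max) * range_deg
```

A reading of 0 then corresponds to one mechanical end stop and the maximum reading to the other, with a linear sweep in between.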

The wrist has two joints to lift and turn the palm, driven by two servos via push-pull rods. The hand is constructed as an external actuator type [10]. Each finger is built from polyoxymethylene (POM) with three spring pre-stressed joints driven by a flexible, high strain resistant nylon cable. If the cable is pulled, the corresponding finger bends. To minimize friction and wear, the cable runs either above or inside a Teflon or POM coating along its whole way. Five servos are mounted in the lower arm to pull the cables of each finger independently. To ensure that the fingers do not change their position once the palm is tilted, the nylon cables are laid exactly through the two rotation centers of the palm. The main joint angles and the corresponding rotation speeds of both BARTHOC and BARTHOC Junior are given in Table 1. Due to its more lightweight construction and the lower torques needed, especially the shoulder joints of BARTHOC Junior can achieve higher rotation speeds than BARTHOC's.

Table 1: Joint Angles and Rotation Speeds of BARTHOC and BARTHOC Junior

Joint Type          Joint Angle [deg]   Angle Speed BARTHOC [deg/sec]   Angle Speed BARTHOC Junior [deg/sec]
head pitch          60                  90                              90
head roll           60                  90                              90
head turn           120                 90                              90
hip rotate          180                 45                              45
upper arm turn      180                 60                              90
upper arm lift      180                 35                              90
upper arm rotate    180                 45                              80
elbow               105                 45                              80
lower arm rotate    225                 120                             120
wrist side          90                  90                              90
wrist up/down       60                  90                              90
finger bend         120                 90                              90

2.2 Head

While we used a TFT display for face representation in the past [7], a complete mechatronic head has now been developed to give BARTHOC a more human-like appearance and more human-like features. To enable the implementation of cognitive processes, the hardware is equipped with sensors suitable for HRI. Therefore, a camera(2) is built into each eyeball; ear microphones are to follow. Eye pan and tilt as well as opening and closing eyelids are provided, too. To permit changes in the appearance of BARTHOC's gender and character at affordable cost and effort, the head is constructed so that the removable mask can be exchanged. The most suitable material for a human-looking android mask seems to be foam latex or silicone(3). Currently, we are experimenting with both materials to achieve good results. A picture of a second functioning prototype mask is shown in Fig. 3. The mask is driven by Bowden cables and is designed to express basic facial expressions with movements of the eyebrows and forehead, the jowls, and the lower jaw. The lower jaw is rotated around its natural rotation point by a fast high-torque servo. This enables the simulation of lip movements by fast jaw action. A detailed photo of the naked head animatronics is shown in Fig. 4. The head is pivoted on a mechatronic neck. The neck allows the head to turn, nod, and tilt, and provides a linear front-and-back movement to express interest or aversion.

(2) Synchronizable Dragonfly FireWire cameras from Point Grey Research with 30 fps, suitable for stereo vision.

Figure 3: Silicone mask over BARTHOC Junior's mechatronic head. Four artificial muscles allow the silicone skin to be stretched. Emotions like happiness or fear can be imitated.

Altogether, BARTHOC is equipped with 41 actuators that can be controlled through one USB interface (see next chapter). The highest voltage used is 12 V, so even experiments involving direct physical contact with BARTHOC are not dangerous from this point of view.

3 Control Architecture

The control architecture of BARTHOC is divided into three parts: mechanics, electronics, and control software. Building on the above description of the mechanics, this chapter presents some details of the developed electronics and low-level control software.

3.1 Control Hardware

The development of prototypes, especially in robotics, requires a large amount of time and money. To tackle this problem we developed so-called iModules [7]. These components are reusable hardware modules, each with specialized functionality (analogous to object-oriented programming), so the user can concentrate on the core of the development by using the iModules. To build a new machine, only the desired functionality has to be chosen, e.g., power control, motor control, or servo control. Each iModule has embedded control of the connected actuators or sensors, so no code for low-level control, like interpreting sensor data and driving actuators, has to be written in any control program on the PC. Instructions for the iModule system are sent via an ASCII protocol over a USB interface. In Fig. 5 the interface between software, electronics, and mechanics is drafted.

(3) An extensive overview is given at the Android Head Projects Site at https://www.doczj.com/doc/f7317124.html

Figure 4: ... the eyes) and eyeball cameras. The friendly expression shows its will to communicate.

Figure 5: The software/electronic/mechanic interface structure of BARTHOC.
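The flavor of such an ASCII-over-USB instruction protocol can be sketched as follows. The real iModule wire format is not documented in the paper, so the colon-separated, CRLF-terminated framing, the module identifier, and the value ranges below are purely assumed for illustration:

```python
def imodule_command(module_id: str, channel: int, value: int) -> bytes:
    """Frame one instruction as an ASCII command line.

    Hypothetical framing: "<module>:<channel>:<value>\r\n". The channel
    bound of 44 mirrors the paper's statement that four iServo modules
    serve 44 servos; the 8-bit value range is our assumption.
    """
    if not 0 <= channel < 44:
        raise ValueError("channel out of range")
    if not 0 <= value <= 255:
        raise ValueError("value out of range")
    return f"{module_id}:{channel}:{value}\r\n".encode("ascii")
```

Because each iModule interprets such commands itself, the PC-side program only ever composes strings like this instead of generating low-level actuator signals.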

BARTHOC uses three different kinds of iModules: four iServo modules (44 servos can be connected); three iMotorControl modules with six 1x5 A power stage modules and three 2x2 A power stage modules connected (12 motors can be connected); and two independent iConnect modules for PC communication. One iConnect module is used for controlling the head, and the other iConnect module controls the torso with the arms and hands. In this way, interaction components for controlling the facial expression and the gaze direction are separated from the components controlling gestural expressions with the arms. This separation is especially important, as the arms have strong motors and are potentially dangerous for the surrounding humans.

The iServo modules generate a pulse-width modulation (PWM) signal with pulses between 0.8 ms and 2.2 ms at 50 Hz. The duration of one pulse is converted to an angular position by the connected servo actuator.
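The pulse-width encoding described above is a simple linear mapping. The sketch below uses the 0.8-2.2 ms range at 50 Hz stated for the iServo modules; the assumption that this range spans a 180 degree servo travel is ours:

```python
def angle_to_pulse_ms(angle_deg: float, travel_deg: float = 180.0,
                      min_ms: float = 0.8, max_ms: float = 2.2) -> float:
    """Map a servo target angle to the PWM pulse width in milliseconds.

    0 degrees -> 0.8 ms, full travel -> 2.2 ms; out-of-range targets are
    clamped to the mechanical travel.
    """
    angle = min(max(angle_deg, 0.0), travel_deg)  # clamp to travel
    return min_ms + (angle / travel_deg) * (max_ms - min_ms)
```

At 50 Hz the frame period is 20 ms, so the active pulse occupies only a small fraction of each frame.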

One iMotorControl module generates PWM signals for four motors. Each signal is influenced by the current of the connected motor and the position of the controlled joint. To imitate human movements and to reduce the mechanical stress caused by high accelerations, the iMotorControl module generates different ramp signals at different joint positions. Seven parameters can be set for each motor controller; for example, aging of the robot's mechanics can be compensated, and effects like jitter or slow movements can be imitated. The generated signals are sent to power stages to provide enough power for the connected motors.

Figure 6: The structure of the control software for driving BARTHOC's actuators in simulation and on the hardware via the Motion Engine.
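One common way to realize such ramp signals is a trapezoidal velocity profile: accelerate with a bounded acceleration, cruise, then decelerate. This is our own illustration of the idea, not the iMotorControl firmware, and the default limits are invented (the module's seven per-motor parameters are not modeled):

```python
def trapezoidal_profile(distance_deg: float, v_max: float = 60.0,
                        accel: float = 120.0, dt: float = 0.01) -> list:
    """Sample a trapezoidal velocity ramp for one joint move.

    Bounding the acceleration limits mechanical stress; for moves too
    short to reach v_max the profile degenerates to a triangle.
    """
    t_acc = v_max / accel
    d_acc = 0.5 * accel * t_acc ** 2
    if 2.0 * d_acc >= distance_deg:          # triangular profile
        t_acc = (distance_deg / accel) ** 0.5
        v_peak, t_cruise = accel * t_acc, 0.0
    else:
        v_peak = v_max
        t_cruise = (distance_deg - 2.0 * d_acc) / v_max
    total = 2.0 * t_acc + t_cruise
    samples, t = [], 0.0
    while t < total:
        if t < t_acc:                        # ramp up
            samples.append(accel * t)
        elif t < t_acc + t_cruise:           # cruise at peak velocity
            samples.append(v_peak)
        else:                                # ramp down
            samples.append(max(0.0, accel * (total - t)))
        t += dt
    return samples
```

Varying `v_max` and `accel` per joint position would reproduce the position-dependent ramps the module generates, and deliberately lowering them can imitate slow or "aged" movements.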

As the iConnect module is the interface to the PC, all commands from the iActuator system (see next chapter) are forwarded to the connected iModules. The temperature inside the robot can also be interpreted and used as an "exhaustion" or "fatigue" indicator: the more strenuous a position is, the hotter the power stages of the motors become, and thus the hotter it gets inside the robot.
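The temperature-to-fatigue idea can be sketched as a simple normalization. The paper only states the qualitative relation, so both threshold temperatures below are assumed for illustration:

```python
def fatigue_level(temp_c: float, t_rest: float = 30.0,
                  t_hot: float = 70.0) -> float:
    """Map the internal temperature to a fatigue value in [0, 1].

    0.0 means fully rested, 1.0 fully exhausted; the 30/70 degree
    thresholds are assumed numbers, not measured BARTHOC values.
    """
    level = (temp_c - t_rest) / (t_hot - t_rest)
    return min(1.0, max(0.0, level))          # clamp to [0, 1]
```

A behavior module could then, for instance, let the robot relax a strenuous pose once the fatigue value approaches 1.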

3.2 Control Software

The robot is controlled by the iActuator software architecture (see Fig. 6). Different programs, in the following called engines, run at the same time and communicate with each other via TCP/IP. Each engine fulfills a special task concerning the robot's control, the simulation, or the behavior. Currently, three engines are implemented under Linux using Trolltech's Qt(4): the Simulation Engine, the Motion Engine, and the Mimic Engine. The iActuator Interface provides full communication to all engines and allows users to build their own engines. This open software structure ensures robustness, easy maintenance, and extensibility, which is important at this experimental stage. All mechanical, electrical, and appearance properties of the robot are described in an XML object file. Configuration files for each engine allow flexible use on different computers. The delay from the initiation of a command to the beginning of the robot movement via TCP/IP and USB can be neglected due to the inertia of the mechanics.
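Such an XML object file could, for instance, list the joints with their ranges and speeds. The schema below is invented for illustration (the paper does not publish the real file format); the sample values are taken from Table 1:

```python
import xml.etree.ElementTree as ET

# Hypothetical object-file schema; only joint limits are modeled here.
ROBOT_XML = """
<robot name="BARTHOC">
  <joint name="elbow" angle="105" speed="45"/>
  <joint name="head turn" angle="120" speed="90"/>
</robot>
"""

def load_joint_limits(xml_text: str) -> dict:
    """Read per-joint angular range [deg] and speed [deg/sec]."""
    root = ET.fromstring(xml_text)
    return {j.get("name"): {"angle": float(j.get("angle")),
                            "speed": float(j.get("speed"))}
            for j in root.iter("joint")}
```

Every engine could load the same file at startup, which keeps the mechanical description in one place instead of duplicating it in each program.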

The Simulation Engine (Fig. 7) visualizes the mechanics of the robot in a 3D OpenGL environment. The simulated robot moves like its steel brother. Crash situations and gestures can be seen and tested before using the real robot. Background textures for sides, ground, and ceiling make the simulation more realistic. This engine helps to speed up the development of one's own software modules: no laboratory time has to be shared and no precautions for humans and BARTHOC are required.

(4) Qt is a cross-platform C++ application development framework; https://www.doczj.com/doc/f7317124.html

Figure 7: Screen shot: The robot Simulation Engine. This tool is indispensable for collision avoidance.

Figure 8: Screen shot: The Mimic Engine. It is used to send movement commands to the robot and/or the simulation in an easy way. Defined positions can be stored and used with other engines.

The Motion Engine controls the hardware of the robot. After receiving a command via TCP/IP from another engine, it checks the command and all its parameters for validity. If the command and the parameters are valid, the Motion Engine sends appropriate movement or status commands to the iModule hardware via USB. No crash situations are checked at the moment. Collision avoidance and path planning for humanoid robots is a field of research in its own right (see, e.g., [11]). It is a completely new topic compared to industrial robots, which do not have to act in a social and human-like way.
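The validity check described above can be sketched as a range test against the joint limits. This is our simplification of the Motion Engine's behavior, not its actual code; targets are interpreted as positions within the joint's total travel from Table 1, which is an assumption:

```python
# Angular ranges in degrees, taken from Table 1 (BARTHOC).
JOINT_RANGE_DEG = {
    "head turn": 120.0,
    "elbow": 105.0,
    "lower arm rotate": 225.0,
}

def command_is_valid(joint: str, target_deg: float) -> bool:
    """Reject unknown joints and out-of-range targets before any motion.

    Like the Motion Engine, this check deliberately does not cover
    collision situations.
    """
    limit = JOINT_RANGE_DEG.get(joint)
    return limit is not None and 0.0 <= target_deg <= limit
```

Only commands that pass this gate would then be translated into iModule instructions and sent over USB.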

The Mimic Engine (Fig. 8) is a visual user interface for an application developer and allows every motor and servo of the robot to be moved via sliders. This way, manually generated commands can be forwarded to the Simulation Engine and/or the Motion Engine. All positions of the sliders, and thus the posture of the robot, can be saved in a gesture file. These files can also be used in other engines.
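A gesture file essentially stores a sequence of joint-position keyframes. The round-trip below uses JSON as an assumed container format; the real gesture-file layout is not specified in the paper:

```python
import json

def dump_gesture(frames: list) -> str:
    """Serialize keyframes, each a {joint name: position} mapping,
    mirroring the slider states of the Mimic Engine."""
    return json.dumps({"version": 1, "frames": frames})

def parse_gesture(text: str) -> list:
    """Read the keyframes back, e.g., for replay by another engine."""
    return json.loads(text)["frames"]
```

Replaying a gesture then amounts to stepping through the parsed frames and sending each one to the Motion Engine or the Simulation Engine.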

To build one's own engines for the iActuator system architecture, the iActuator Interface was developed. The C-language interface provides all functions to move the robot and/or the simulation and to replay gesture files. For example, an Animation Engine for entertainment use of the robot is under development. Gesture files can be played one after another, and the robot is able to play saved situations like an actress, an actor, or a comedian would do. Other "more intelligent" iActuator engines are currently being realized (see next Section).

4 Enabling Interaction on BARTHOC

Up to now, research on natural human-robot interaction has been performed at Bielefeld University using the mobile robot platform BIRON [12] (see Fig. 9). In order to enable BARTHOC to engage in natural human-robot interaction, a variety of capabilities already developed for BIRON have been ported to BARTHOC. In the following we will briefly sketch these capabilities and the modifications necessary to apply them on BARTHOC.

BIRON's basis is a Pioneer PeopleBot from ActivMedia. The robot is equipped with an on-board PC (Pentium III, 850 MHz) for controlling the motors, for the on-board sensors, and for sound processing. An additional PC (Pentium III, 500 MHz) inside the robot is used for image processing and is equipped with a touch screen display. Both PCs, each running Linux, are linked by a 100 Mbit Ethernet LAN to a wireless LAN router enabling remote control of the mobile robot.

Figure 9: BIRON. A variety of sensors are used for locating communication partners and interacting with them. A pan-tilt color camera is mounted on top of the robot for acquiring images of the upper body part of humans interacting with the robot. A stereo camera at a height of 0.95 m acquires stereo images of the humans in front of the robot. Two far-field microphones are located right below the touch screen display. They enable speech processing and especially stereo-based speaker localization. Finally, distances within the scene and to humans are measured using a SICK laser range finder mounted at the front at a height of 0.3 m.

The control of all components on BIRON and their communication is realized using the SIRCLE framework [13], which is based on a three-layer architecture using a central execution supervisor [14]. The framework itself enables easy reconfiguration and adaptation to new tasks. Autonomously running software components can be added and communicate with other subsystems, e.g., the dialog or the attention control, via the exchange of XML data. Therefore, SIRCLE can be applied easily for controlling the modules on BARTHOC. The overall component interaction that runs on both robots, the mobile BIRON and the static BARTHOC, is depicted in Fig. 10. However, some of the components require modifications in order to be used in BARTHOC's architecture. In the following we briefly sketch the individual components and their modifications, where necessary.

A basic functionality of BIRON is its capability to track humans in its environment. This is achieved through a multi-modal anchoring approach using depth, vision, and sound data [15]. Based on the humans tracked in its surroundings and the multi-modal information associated with the individual humans, an attention mechanism allows the robot to selectively pay attention to humans looking at the robot and speaking at the same time [16]. While BARTHOC does have vision and sound sensors (eyes and ears), it does not have a laser range finder for obtaining depth data like BIRON has. Consequently, the modification of BIRON's person tracking and attention system was the first step in enabling BARTHOC to exhibit a natural attention behavior.

Figure 10: The component interaction of BIRON and BARTHOC. The hatched components only exist on BIRON, as BARTHOC is currently a static robot.

For BARTHOC's attention behavior, the existence of a sound source is already sufficient to instantiate a person hypothesis. Subsequently, BARTHOC turns its head to the supposed person position to take an image of that area and search for a face at the expected position. Only if a face is successfully detected is the person hypothesis retained. All persons found are stored in a short-term memory to account for the fact that there is no laser range finder continuously providing information about person positions in front of the robot [17]. Consequently, the short-term memory contains person hypotheses that have to be verified by looking with the head in the direction of the person and successfully recognizing a face.
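The hypothesis-and-verify cycle described above can be condensed into a short sketch. The function names and the memory layout are our simplification; `face_found_at(angle)` stands in for the whole turn-head, grab-image, detect-face step:

```python
def verify_person_hypotheses(sound_angles, face_found_at):
    """Keep only sound-source hypotheses that are confirmed by a face.

    Each sound direction instantiates a person hypothesis; the hypothesis
    is retained in the short-term memory only if looking toward that
    direction yields a successful face detection.
    """
    short_term_memory = []
    for angle in sound_angles:
        if face_found_at(angle):              # hypothesis verified
            short_term_memory.append({"angle": angle, "verified": True})
        # unverified hypotheses are simply dropped
    return short_term_memory
```

Re-running this cycle on the memory contents would correspond to the continuous re-verification that replaces the laser-based tracking on BIRON.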

In contrast to BIRON, where an animated face was displayed on the touch screen, BARTHOC can use facial expressions to convey the state of the attention system to the user. In the inactive mode it displays a 'sleeping' face, and if no person is around, a 'neutral' face. On instantiating a person hypothesis, the Mimic Engine displays an 'interested' face that changes to a 'happy' face when BARTHOC changes its state to interaction with a human. During such an interaction (see also below), it displays a 'thinking' face if it processes speech input and a 'searching' face if it turns its head to search for an object referenced by the human.

By utilizing the eye video cameras and the two microphones, BARTHOC is able to detect and continuously track multiple people in real time with a robustness comparable to systems using sensors with a wide field of perception. However, a person attention behavior is not sufficient if a human wants to engage in a communicative interaction with the robot. In order to enable the mobile robot BIRON to understand natural language, we integrated components for speech recognition [16], speech understanding [12], and dialog [18]. These components all depend solely on the sound signal from the microphones and, therefore, do not require any modifications to be used on BARTHOC if the type of the instructions is kept unchanged.

In an interaction with BIRON, the communication partner can not only get the robot's attention but can also control the robot's behavior by giving commands (for more details see [12]). For example, the command "Follow me" results in the robot following the human around. Obviously, this is an instruction that is not supported by BARTHOC and, consequently, the speech understanding and dialog components have been modified to account for the tasks BARTHOC can perform. The gesture detection and object attention components [19] do not require major modifications, as they are based only on camera images. Due to the static setup, their operation is even simplified, as BARTHOC's position in the room does not change and, therefore, the coordinate system is fixed. Through the combination of dialog and object attention [18], a user can reference an object by gesture and speech ("This blue cup"). The referenced object can then be learned by the robot by focusing the camera on the object to acquire an image of it and storing this appearance together with verbally communicated information in the active memory. In subsequent interactions, this learned knowledge can be used by the dialog to understand commands related to the already learned objects. Currently, a rough position estimate of the objects is used to build up the scene model in the active memory. This position estimate serves as a reference to verify the existence of an object at the learned position by simply comparing the stored appearance with an image capturing the same scene. In the future we hope to incorporate an object recognition algorithm capable of on-line learning.
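The appearance-based existence check described above can be sketched as follows. The scene-model layout, the mean-absolute-difference comparison, and the tolerance are all assumed stand-ins for the actual (unspecified) comparison:

```python
def object_still_present(scene_model: dict, name: str, patch: list,
                         tolerance: float = 10.0) -> bool:
    """Verify a learned object by comparing its stored appearance with a
    patch captured at the learned position.

    Appearances are flat lists of pixel intensities here; a mean absolute
    pixel difference below the (assumed) tolerance counts as a match.
    """
    entry = scene_model.get(name)
    if entry is None or len(entry["appearance"]) != len(patch):
        return False
    diff = sum(abs(a - b) for a, b in zip(entry["appearance"], patch))
    return diff / len(patch) < tolerance
```

Such a naive pixelwise check only works because the coordinate system is fixed and the same scene region can be captured again; an on-line learning object recognizer would replace it.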

A very important difference from BIRON is the humanoid body of BARTHOC. It has many more degrees of freedom that can be controlled during an interaction in order to exhibit a much more human-like behavior. On BIRON, the movements of the pan-tilt camera and of a face presented on its display reinforce the impression that the robot is paying attention and focusing on a communication partner, enabling humans to "read" the robot's current internal state. This somewhat human-like behavior is very much appreciated by users [20]. BARTHOC's human-like body makes this impression even stronger, and by controlling its bodily expression it will bring us further ahead in studying embodied interaction.

5 Outlook

This paper has described our ongoing work on realizing a humanoid robot that supports research on embodied interaction. We have presented the technical details of the mechanics and the actuator control as well as the software tools for testing BARTHOC's actuators 'offline'. Only a coarse overview of the different interaction components that have been developed for BIRON/BARTHOC, and of how they are used to enable human-robot interaction, could be given here; more details can be found in the referenced publications. We are currently working on integrating a gesture generation system that is able to generate deictic gestures using the robot's arms and hands. Through its humanoid appearance, BARTHOC will allow us to study a variety of aspects of embodied interaction that are becoming increasingly relevant for developing robot companions capable of natural human-robot interaction. Also, we are currently carrying out first user studies with an emotional empathy or mimicking module: a speech-based emotion recognition system recognizes emotional affect in the user's voice when a fairy tale is read to the robot. The robot then simply mimics the detected emotion by displaying the corresponding facial expression. In the ongoing user studies we are addressing the objective as well as the subjective evaluation of the emotion detection capabilities of the robot and the perception of the robot by the human. This basic framework will be extended in the future by the development of an approach to emotional alignment in communication as well as by studies on humans' perception of facial expressions.

REFERENCES

[1] K. Takekazu, M. Yasuhiro, S. Takeshi, "Cooperative distributed registration for robust face recognition," Systems and Computers in Japan, Vol. 33, No. 14, 2002, pp. 91-100.

[2] C. Breazeal, et al., "Humanoid robots as cooperative partners for people," International Journal of Humanoid Robots, 2(1), 2004.

[3] F. Tanaka, H. Suzuki, "Dance interaction with QRIO: a case study for non-boring interaction by using an entrainment ensemble model," 13th IEEE International Workshop on Robot and Human Interactive Communication, 2004.

[4] G. Sagerer, J. Fritsch, B. Wrede, "Bringing it all together: Integration to study embodied interaction with a robot companion," AISB Symposium - Robot Companions: Hard Problems and Open Challenges in Human-Robot Interaction, Hatfield, England, 2005.

[5] R. Stiefelhagen et al., "Natural Human-Robot Interaction using Speech, Gaze and Gestures," IEEE/RSJ International Conference on Intelligent Robots and Systems, Japan, 2004.

[6] A. Ito, S. Hayakawa, T. Terada, "Why robots need body for mind communication - an attempt of eye-contact between human and robot," 13th IEEE International Workshop on Robot and Human Interactive Communication, 2004.

[7] M. Hackel, S. Schwope, "A Humanoid Interaction Robot for Information, Negotiation and Entertainment Use," International Journal of Humanoid Robots, Vol. 1, No. 3, 2004, pp. 551-563.

[8] A. Lankenau, T. Röfer, B. Krieg-Brückner, "Self-Localization in Large-Scale Environments for the Bremen Autonomous Wheelchair," in Spatial Cognition III, Lecture Notes in Artificial Intelligence 2685, C. Freksa, W. Brauer, C. Habel, K. F. Wender, Eds., Springer, 2003, pp. 34-61.

[9] I. Mizuuchi, et al., "The Design and Control of the Flexible Spine of a Fully Tendon-Driven Humanoid Kenta," IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, 2002.

[10] T. Hino, T. Maeno, "Development of a Miniature Robot Finger with a Variable Stiffness Mechanism using Shape Memory Alloy," International Symposium on Robotics and Automation, Querétaro, México, August 2004.

[11] C. Breazeal, D. Buchsbaum, J. Gray, D. Gatenby, B. Blumberg, "Learning from and about Others: Towards Using Imitation to Bootstrap the Social Understanding of Others by Robots," L. Rocha and F. Almeida e Costa (eds.), Artificial Life, 2004.

[12] A. Haasch et al., "BIRON - The Bielefeld Robot Companion," Proc. Int. Workshop on Advances in Service Robotics, E. Prassler, G. Lawitzky, P. Fiorini, M. Hägele, Eds., Fraunhofer IRB Verlag, Germany, 2004, pp. 27-32.

[13] J. Fritsch, M. Kleinehagenbrock, A. Haasch, S. Wrede, G. Sagerer, "A Flexible Infrastructure for the Development of a Robot Companion with Extensible HRI-Capabilities," Proc. IEEE Int. Conf. on Robotics and Automation, Spain, 2005, pp. 3419-3425.

[14] M. Kleinehagenbrock, J. Fritsch, G. Sagerer, "Supporting Advanced Interaction Capabilities on a Mobile Robot with a Flexible Control System," Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Japan, 2004, pp. 3649-3655.

[15] J. Fritsch et al., "Multi-Modal Anchoring for Human-Robot-Interaction," Robotics and Autonomous Systems, Special issue on Anchoring Symbols to Sensor Data in Single and Multiple Robot Systems, S. Coradeschi and A. Saffiotti, Eds., Elsevier Science, Vol. 43, No. 2-3, 2003, pp. 133-147.

[16] S. Lang et al., "Providing the Basis for Human-Robot-Interaction: A Multi-Modal Attention System for a Mobile Robot," Proc. Int. Conf. on Multimodal Interfaces, ACM, Canada, 2003, pp. 28-35.

[17] Th. Spexard, A. Haasch, J. Fritsch, G. Sagerer, "Human-like Person Tracking with an Anthropomorphic Robot," submitted to IEEE Int. Conf. on Robotics and Automation, Orlando, 2006.

[18] S. Li, A. Haasch, B. Wrede, J. Fritsch, and G. Sagerer, "Human-style interaction with a robot for cooperative learning of scene objects," Proc. Int. Conf. on Multimodal Interfaces, ACM, Italy, 2005, pp. 151-158.

[19] A. Haasch, N. Hofemann, J. Fritsch, and G. Sagerer, "A multi-modal object attention system for a mobile robot," Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Edmonton, 2005, pp. 1499-1504.

[20] S. Li, M. Kleinehagenbrock, J. Fritsch, B. Wrede, G. Sagerer, "'BIRON, let me show you something': Evaluating the Interaction with a Robot Companion," Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics, Special Session on Human-Robot Interaction, W. Thissen, P. Wieringa, M. Pantic, and M. Ludema, Eds., 2004, pp. 2827-2834.



工业机器人常用坐标系介绍

工业机器人常用坐标系介绍 坐标系:为确定机器人的位置和姿态而在机器人或空间上进行的位置指标 系统。 坐标系包含:1、基坐标系(Base Coordinate System) 2、大地坐标系(World Coordinate System) 3、工具坐标系(Tool Coordinate System) 4、工件坐标系(Work Object Coordinate System) 1、工具坐标系机器人工具座标系是由工具中心点TCP 与座标方位组成。 机器人联动运行时,TCP 是必需的。 1) Reorient 重定位运动(姿态运动)机器人TCP 位置不变,机器人工具沿座标轴转动,改变姿态。 2) Linear 线性运动机器人工具姿态不变,机器人TCP 沿座标轴线性移动。机器人程序支持多个TCP,可以根据当前工作状态进行变换。 机器人工具被更换,重新定义TCP 后,可以不更改程序,直接运行。 1.1.定义工具坐标系的方法:1、N(N=4)点法/TCP 法-机器人TCP 通过N 种不同姿态同某定点相碰,得出多组解,通过计算得出当前TCP 与机器人手腕中心点( tool0 ) 相应位置,座标系方向与tool0 一致。 2、TCPZ 法-在N 点法基础上,Z 点与定点连线为座标系Z 方向。 3、TCPX,Z 法-在N 点法基础上,X 点与定点连线为座标系X 方向,Z 点与定点连线为座标系Z 方向。 2. 工件坐标系机器人工件座标系是由工件原点与座标方位组成。 机器人程序支持多个Wobj,可以根据当前工作状态进行变换。 外部夹具被更换,重新定义Wobj 后,可以不更改程序,直接运行。

乐高机器人课程

乐高机器人课程 Lego Mindstorms(乐高机器人)是集合了可编程Lego砖块、电动马达、传感器、Lego Technic部分(齿轮、轮轴、横梁)的统称。Mindstorms起源于益智玩具中可编程传感器模具(programmable sensor blocks)。第一个Lego Mindstorms的零售版本在1998年上市,当时叫做Robotics Invention System (RIS)。最近的版本是2006年上市的Lego Mindstorms NXT。 乐高机器人套件的核心是一个称为RCX或NXT的可程序化积木。它具有六个输出输入口:三个用来连接感应器等输入设备,另外三个用于连结马达等输出设备。乐高机器人套件最吸引人之处,就像传统的乐高积木一样,玩家可以自由发挥创意,拼凑各种模型,而且可以让它真的动起来。 机器人是一门涉及机械学、电子学、工程学、自动控制、计算机、人工智能等方面的综合性学科,以培养学生的科学素养和技术素养为宗旨,以综合规划、设计制作、调试应用为主要学习特征的实践性课程。在拓宽学生的知识面,促进学生全面而富有个性的发展上起着不可替代的作用。 随着科学技术的发展,特别是人工智能与机器人的结合,机器人不再局限于工业应用和研究所内,它已经进入教育领域。国内外教育专家指出,利用机器人来开展实践学习,不仅有利于学生理解科学、工程学和技术等领域的抽象概念,更有利于培养学生的创新能力、综合设计能力和动手实践能力。机器人教育在基础教育越来越受到人们的关注。 我国自2001年举办首届全国青少年机器人竞赛以来,在竞赛的带动与促进下,全国各地展开了校本课程、课外科技小组、选修课等丰富多彩的机器人教育活动。近年来,由于对机器人教育认识上的不足,机器人竞赛活动目标不明确等原因,我国机器人教育的发展受到一定程度的制约。 在课程改革的背景下,乐高从全国基础教育发展现状出发,构建科学、合理、切实可行的中小学乐高机器人教程体系,规范机器人教育,对我国今后机器人教育的蓬勃发展起着非常重要的作用,势在必行。 科学和技术素养是当今社会每个公民必备的基本素养。乐高机器人课程在培养学生的科学与技术素养方面有其独特的优势,机器人教育作为一门基础课程,要面向全体学生。 乐高机器人课程要为学生营造动手动脑、进行设计活动的环境,提供必要的设备和工具,倡导学生积极主动、勇于探索的学习精神,组织学生进行探索式学习,让学生充分动手实践,积极合作,主动探究。 乐高机器人更多信息可登入网站棒棒贝贝科技中心了解更多。 地址:上海市浦东新区杜鹃路68号。

乐高机器人教案

认识乐高蓝牙机器人系统____NXT 参加教师 活动目的: 1、认识NXT主要配件,并将其与RCX核心配件作比较,学习和掌握新型乐高机器人; 2、搭建蓝牙机器人; 3、知道NXT控制器各按钮的作用,初步学会在NXT是编写简单程序,理解传感器的功能活动过程: 一、乐高机器人—— MINDSTORMS NXT与RCX的比较 1、处理器由8位升到32位 丹麦乐高(LEGO)将于2006年9月上旬推出乐高公司和美国麻省理工学院共同开发的机器人组件新款“教育用LEGO Mindstorms NXT”。Mindstorms是将配备微处理器的LEGO公司的塑料积木组装起来,通过个人电脑制作的程序来控制的机器人。此前的RCX的微处理器为8位,而NXT配备32位处理器等,提高了性能。表格1列出RCX和NXT的比较。 图1:安装4个传感器和3个伺服马达的LEGO NXT 图4:LEGO NXT系统目前提供的4种传感器全家福 5、改进了编程软件 NXT程序用软件“ROBOLAB ,跟ROBOLAB原来的版本一样,是基于NI LabVIEW开发的。该软件不仅可以制作NXT用的程序,也可以完成

RCX用的程序。此前要操作接近400个图标进行编程,这次减少为约40个,从而使得编程更为简单。OS为“Windows2000”以上和“Mac OS X”。 图5:乐高网站给出的ROBOLAB 的样图,跟以前版本相比,变化较大 二、快速认识NXT 1、按钮 NXT正面有四个按钮,它们分别是开关、运行;导航和返回。 2、 NXT显示器上各图标的意义 最上一行,相当于状态栏,从左到右依次表示了:蓝牙、USB、NXT 控制器名、运行状态、电 池电量以及声音音量的情况。 状态栏的下面是六个主控操作面板,相当于主菜单,它们依次是:“My Files我的文件”、“Try me测试”、“Settings设置”、“Bluetooth 蓝牙”、“View查看”、“NXT Program NXT程序”。 三、 NXT Prpgram(NXT程序) 不需要在电脑上,通过NXT就可以编写简单的控制程序。 1、进入NXT Program 2、屏幕显示传感器和马达的连接方式; 3、确定后,进入五步编程 第一步主要是设置运动方式:前后、后退、左转、右转等

一种欠驱动移动机器人运动模式分析

天津比利科技发展有限公司 李艳杰 ’马岩1,钟华2,吴镇炜2 ' 隋春平2 (1.沈阳理工大学机械工程学院,沈阳110168;2.中国科学院沈阳自动化研究所,沈阳110016) 摘要:介绍了一种欠驱动移动机器人的机械结构。分析了该欠驱动移动机器人在平地行进 模式的特点,提出一种越障控制模式。在该越障控制模式中加入了障碍物高度计算算法, 使得移动机器人在越障过程中的智能控制更加高效。利用VB编写控制程序人机界面,在移 动机器人实物平台上进行了实验,实验结果证明了控制方法的有效性。 关键词:AVR单片机;欠驱动移动机器人;越障模式 中图分类号:TP242文献标志码:A Analysis of a Underactuated Mobile Robot Moving Mode LI Yan-jie',MA Yan',ZHONG Hua2,WU2hen-wej2,SUI Chun-ping2 (l.School of Mechanical Engineering,Shenyang Ligong University,Shenyang110168,China;2.Robotics Lab,Shenyang Institute of Automation,Chinese Academy of Sciences,Shenyang110016,China) Abstract:The mechanical structure of a kind of underactuated mobile robot was described in this paper.The charac- teristics of the underactuated mobile robot in the plains traveling mode was analyzed and a kind of obstacle-negotia- tion control mode was proposed.Due to calculate algorithm of obstacle's height was added to the the obstacle-nego- tiation control mode,the intelligent control of obstacle-negotiation becomes more efficient.The control procedure HMI was programmed by VB and the experiment was performed on the mobile robot platform.Experiment results show the control method was effective. Key words:AVR SCM;underactuated mobile robot;obstacle-negotiation mode 欠驱动机械系统是一类特殊的非线性系统,该容错控制的作用。因此,欠驱动机器人被广泛应用系统的独立控制变量个数小于系统的自由度个数【l】o于空间机器人、水下机器人、移动机器人、并联机器 欠驱动系统结构简单,便于进行整体的动力学分析人、伺服机器人和柔性机器人等行业。 和试验。有时在设计时有意减少驱动装置以此来增本文以四驱动、八自由度的欠驱动移动机器人加整个系统的灵活性。同时,由于控制变量受限等为实验对象,通过切换驱动器的工作模式来克服系原因,欠驱动系统又足够复杂,便于研究和验证各统不完全可控造成反馈控制失效【2】的缺点。以工控 种算法的有效性。当驱动器故障时,可能使完全驱机作为上位机,通过工控机的RS232串口与AVR 葫系统成为欠驱动系统,欠驱动控制算法可以起到单片机进行无线通讯。通过对驱动器反馈数据的分 收稿日期:2013-01-22:修订日期:2013-02-19 基金项目:国家科技支撑计划项目(2013BAK03801,2013BAK03802) 作者筒介:李艳杰(1969-),女,博士,教授,研究方向为智能机器人控制及机器人学;马岩(1988-),男,硕士研究生,研究方向为嵌入式控制;钟华(1977-),男,博士,副研究员,研究方向为机器人控制及系统集成。 Automation&Instrumentation2013(9) 一种欠驱动移动机器人运动模式分析

制作乐高机器人培训

制作乐高机器人-乐高乐园科技中心 现在有很多人都喜欢玩乐高机器人,感觉机器人是那么的神秘,所以从没想过自己能亲自动手制作一个机器人,其实,普通的机器人制作起来是非常简单的,今天,乐高乐园科技中心就来教大家做乐高机器人的方法,你只需要准备好材料,耐心的跟乐高乐园学很快就能见到自己的作品了。 制作乐高机器人的要点主要有两点:一是必须亲自动手,二是循序渐进。 亲自动手是必须的,只看理论,永远无法知道实际会遇到的问题;而如果你直奔复杂的人形机器人控制,不用多久你就会被你的挫折感打败,直到彻底放弃,所以循序渐进也是必须的。 有这两点,你也许应该知道上面问题的答案了,这个老师只能是你自己,我们也只能是你的同学,而乐高机器人就是入门的敲门砖! 一个机器人主要由三大部分组成:机械部件、电子电路、软件。 可能你不会对所有的部分都有兴趣,但遗憾的是,一个完整的实体机器人必须有这些部分。乐高机器人是一个完整的入门型机器人,你可以制作一个这样的机器人作为你的第一个作品,也可以只改造你最感兴趣的部分。 乐高教学机器人完全模块化,车架、主控制器(大脑)、各种传感器都是接插件,轻轻的插入相应的插座即可使用。 车架:铝合金高精度加工,数控激光切割等工艺制作。 主控制器:采用乐高MCU,核心是一块高档PIC18单片机r,只需要一根USB线即可重复更新芯片程序,不再需要额外的编程器了。这个控制器还可以作为单片机的学习开发板,非常方便。网站不仅提供了入门教程,还提供了4个好玩实用的入门程序范例。 传感器模块:目前提供了4种类型,并留有扩展接口,可以自由扩展。 1. 红外巡线传感器2个,也可以当作近距离避障的传感器,本身是采用红外线反射信号来检测黑白线或者物体的。巡线小车方式就是使用的这种传感器。 2. 感光传感器2个,可获取环境光的强度,通过这个传感器,机器人可对光做出反应。 3. 红外遥控器。包括了遥控器和接收器,可以学习各种红外遥控编码,也可以当个遥控玩具放松一下。 4. 超声波测距传感器。先进成熟的设计,能探测面前2-400cm之间的障碍物,提供的代码是自动避障,遇到障碍物就转向。你可以在此基础上写出更复杂的动作。

基于动力学模型的轮式移动机器人运动控制_张洪宇

文章编号:1006-1576(2008)11-0079-04 基于动力学模型的轮式移动机器人运动控制 张洪宇,张鹏程,刘春明,宋金泽 (国防科技大学机电工程与自动化学院,湖南长沙 410073) 摘要:目前,对不确定非完整动力学系统进行设计的主要方法有自适应控制、预测控制、最优控制、智能控制等。结合WMR动力学建模理论的研究成果,对基于动力学模型的WMR运动控制器的设计和研究进展进行综述,并分析今后的重点研究方向。 关键词:轮式移动机器人;动力学模型;运动控制;非完整系统 中图分类号:TP242.6; TP273 文献标识码:A Move Control of Wheeled Mobile Robot Based on Dynamic Model ZHANG Hong-yu, ZHANG Peng-cheng, LIU Chun-ming, SONG Jin-ze (College of Electromechanical Engineering & Automation, National University of Defense Technology, Changsha 410073, China) Abstract: At present, methods of non-integrity dynamic systems design mainly include adaptive control, predictive control, optimal control, intelligence control and so on. Based on analyzing the recent results in modeling of WMR dynamics, a survey on motion control of WMR based on dynamic models was given. In addition, future research directions on related topics were also discussed. Keywords: Wheeled mobile robot; Dynamic model; Motion control; Non-integrity system 0 引言 随着生产的发展和科学技术的进步,移动机器人系统在工业、建筑、交通等实际领域具有越来越广泛的应用和需求。进入21世纪,随着移动机器人应用需求的扩大,其应用领域已从结构化的室内环境扩展到海洋、空间和极地、火山等环境。较之固定式机械手,移动机器人具有更广阔的运动空间,更强的灵活性。移动机器人的研究必须解决一系列问题,包括环境感知与建模、实时定位、路径规划、运动控制等,而其中运动控制又是移动机器人系统研究中的关键问题。故结合WMR动力学建模理论的研究成果,对基于动力学模型的WMR运动控制器设计理论和方法的研究进展进行研究。 1 WMR动力学建模 有关WMR早期的研究文献通常针对WMR的运动学模型。但对于高性能的WMR运动控制器设计,仅考虑运动学模型是不够的。文献[1]提出了带有动力小脚轮冗余驱动的移动机器人动力学建模方法,以及WMR接触稳定性问题和稳定接触条件。文献[2]提出一种新的WMR运动学建模的方法,这种方法是基于不平的地面,从每个轮子的雅可比矩阵中推出一个简洁的方程,在这新的方程中给出了车结构参数的物理概念,这样更容易写出从车到接触点的转换方程。文献[3]介绍了与机器人动作相关的每个轮子的雅可比矩阵,与旋转运动的等式合并得出每个轮子的运动方程。文献[4]基于LuGre干摩擦模型和轮胎动力学提出一种三维动力学轮胎/道路摩擦模型,不但考虑了轮胎的径向运动,同时也考虑了扰动和阻尼摩擦下动力学模型,模型不但可以应用在轮胎/道路情况下,也可应用在对车体控制中。在样例中校准模型参数和证实了模型,并用于广泛应用的“magic formula”中,这样更容易估计摩擦力。在文献[5]中同时考虑运动学和动力学约束,其中提出新的计算轮胎横向力方法,并证实了这种轮胎估计的方法比线性化的轮胎模型好,用非线性模型来模拟汽车和受力计算,建立差动驱动移动机器人模型,模型本身可以当作运动控制器。 2 WMR运动控制器设计的主要发展趋势 在WMR控制器设计中,文献[6]给出了全面的分析,WMR的反馈控制根据控制目标的不同,可以大致分为3类:轨迹跟踪(Trajectory tracking)、路径跟随(Path following)、点镇定(Point 
stabilization)。轨迹跟踪问题指在惯性坐标系中,机器人从给定的初始状态出发,到达并跟随给定的参考轨迹。路径跟随问题是指在惯性坐标系中,机器人从给定的初始状态出发,到达并跟随指定的几何 收稿日期:2008-05-19;修回日期:2008-07-16 作者简介:张洪宇(1978-)男,国防科学技术大学在读硕士生,从事模式识别与智能系统研究。 ,

圆柱坐标型工业机器人设计

圆柱坐标型工业机器人设计 第一章绪论 1.1工业机器人研究的目的和意义 工业机器人是集机械、电子、控制、计算机、传感器、人工智能等多学科先进技术于一体的现代制造业重要的自动化装备。自从1962年美国研制出世界上第一台工业机器人以来,机器人技术及其产品发展很快,已成为柔性制造系统( FMS) 、自动化工厂( FA) 、计算机集成制造系统(CIMS)的自动化工具。广泛采用工业机器人,不仅可提高产品的质量与数量,而且保障人身安全、改善劳动环境、减轻劳动强度、提高劳动生产率、节约材料消耗以及降低生产成本有着十分重要的意义。和计算机、网络技术一样,工业机器人的广泛应用正在日益改变着人类的生产和生活方式。 20世纪80年代以来,工业机器人技术逐渐成熟,并很快得到推广,目前已经在工业生产的许多领域得到应用。在工业机器人逐渐得到推广和普及的过程中,下面三个方面的技术进步起着非常重要的作用。 1. 驱动方式的改变20世纪70年代后期,日本安川电动机公司研制开发出了第一台全电动的工业机器人,而此前的工业机器人基本上采用液压驱动方式。与采用液压驱动的机器人相比,采用伺服电动机驱动的机器人在响应速度、精度、灵活性等方面都有很大提高,因此,也逐步代替了采用液压驱动的机器人,成为工业机器人驱动方式的主流。在此过程中,谐波减速器、R V减速器等高性能减速机构的发展也功不可没。近年来,交流伺服驱动已经逐渐代替传统的直流伺服驱动方式,直线电动机等新型驱动方式在许多应用领域也有了长足发展。 2. 信息处理速度的提高 机器人的动作通常是通过机器人各个关节的驱动电动机的运动而实现

1 楼渊:四自由度圆柱坐标机器人设计 的。为了使机器人完成各种复杂动作,机器人控制器需要进行大量计算,并在此基础上向机器人的各个关节的驱动电动机发出必要的控制指令。随着信息技术的不断发展,C P U的计算能力有了很大提高,机器人控制器的性能也有了很大提高,高性能机器人控制器甚至可以同时控制20多个关节。机器人控制器性能的提高也进一步促进了工业机器人本身性能的提高,并扩大了工业机器人的应用范围。近年来,随着信息技术和网络技术的发展,已经出现了多台机器人通过网络共享信息,并在此基础上进行协调控制的技术趋势。 3. 传感器技术的发展 机器人技术发展初期,工业机器人只具备检测自身位置、角度和速度的内部传感器。近年来,随着信息处理技术和传感器技术的迅速发展,触觉、力觉、视觉等外部传感器已经在工业机器人中得到广泛应用。各种新型传感器的使用不但提高了工业机器人的智能程度,也进一步拓宽了工业机器人的应用范围。 1.2工业机器人在国内外的发展现状与趋势 目前,工业机器人有很大一部分应用于制造业的物流搬运中。极大的促进物流自动化,随着生产的发展,搬运机器人的各方面的性能都得到了很大的改善和提高。气动机械手大量的应用到物流搬运机器人领域。在手爪的机械结构方面根据所应用场合的不同以及对工件夹持的特殊要求,采取了多种形式的机械结构来完成对工件的夹紧和防止工件脱落的锁紧措施。在针对同样的目标任务,采取多种运动方式相结合的方式来达到预定的目的。驱动方面采用了一台工业机器人多种驱动方式的情况,有液压驱动,气压驱动,步进电机驱动,伺服电机驱动等等。愈来愈多的搬运机器人是采用混合驱动系统的,这样能够更好的发挥各驱动方式的优点,避免

乐高机器人学龄课程体系

学龄课程体系 学龄课程课程介绍年龄 动力机械I Power Machine I 本系列活动是孩子们参与学龄活动的起点孩子们亲自动手制作与 日常生活密切相关的模型,在老师引导下初步建立对轮子与轮轴、 杠杆、滑轮和齿轮四个基本机械原理的认识并简单的理解,知道 它们在日常生活生产中的简单运用。为他们进入《动力机械Ⅱ》 研究部分打下坚实的基础。 适宜 1~2年 级学 生学 习 结构与力Structure & Force 本系列活动中具体研究一些生活中的桥梁、塔等常见结构,孩子 们先通过设计,然后动手搭建一些结构模型,反复改进来加深对 形状和功能、强度与稳定性、力与负载、材料与连接等结构基本 概念的理解。活动中还激发孩子的探究精神以及独立解决问题的 能力,让孩子们站在生活的角度用器材来解决一些问题。 适宜 1~2年 级学 生学 习 能源世界I e-LAB I 本系列活动是能源世界的第一门课程,孩子们将探索什么是动力, 什么是能量的转换。如一辆小车,如何可以让小车动起来?风能、 马达的电池能量、橡皮筋的弹性势能等。活动中孩子们将运用掌 握的动力与能量知识来解决问题,培养孩子思考和解决问题的能 力。 适宜 3~4年 级学 生学 习 太极机器人本系列活动适合8岁以上并且已经完成《动力机械Ⅰ》与《能源 世界Ⅰ》相关全部课程的学生。本课程以中国传统武术——太极 为背景,从学习机器人自带内置编程至MINDSTORMNXT软件编程, 培养学生的编程意识,同时锻炼学生逻辑思维能力以及分析解决 问题的能力。 适宜 3~4年 级学 生学 习 动力机械II Power Machine II 本系列活动是孩子们先通过设计,然后动手搭建,通过问题来反 思改进,并加深对事物的理解,从而学习一些机械原理和物理知 识,体会数学的运用。《动力机械II》活动中还让孩子们用马达 作为动力,将原来手动的模型改进成动力的模型,让它按一定的 速度和节奏运行,从而体验速度、节奏等概念,为他们进入机器 人课程,做好准备。 适宜 3~4年 级学 生学 习 能源基础II e-LAB II 本系列活动是能源世界教学系列的第二部分活动,通过研究我们 生活生产中常见的五种能源形式(机械能、电能、风能、水能、 太阳能),孩子们将理解什么是能源,如何有效地利用这些能源。 在活动中,孩子们将观察到不同能量形式。 适宜4 年级 以上 学生

乐高机器人教育常见问题问答

乐高机器人教育常见问题问答 1、机器人课是什么样的课?孩子上课听得懂吗? 我们教学的机器人课用丹麦乐高机器人器材为材料,通过合理的机械搭建,用模块编程语言编写程序,通过时间、触动传感器、光电传感器等基本传感器采集外界信息,由微型电脑经过处理发出指令控制机器人自动化运行。 我们上课使用的是世界第一教育产品,有现成的传感器,控制器,不用焊接,不会触电,有美国麻省理工大学合作研发的教育平台,是全世界小朋友公用的一个课程。根据小孩的年龄特征区分不同的课程,因此每个小朋友都有适合的课程选择。 2、和其它班级有何区别? 机器人课程最大的特点是“做中学”,也就是组织引导学生动手实践,在 制作机器人的过程中体验物理学中齿轮、杠杆等机械工作原理,为初中学习物理知识打下基础,掌握传感器的使用方法,学会模块编程语言,通过编程控制培养逻辑思维能力,寓教于乐,在愉快的氛围中学习物理,数学,计算机,信息技术等综合知识。机器人课程改变了目前学习中只听不动的被动教学模式,以任务情境驱动学生自主学习,改变了高分低能现象,发展了学生的素质。 3、机器人学习了有什么用处? 通过机器人课程,可以学习初步的物理机械工作原理,比如齿轮的传动比,传动方向,传动位置等知识点。在编程过程中可以培养逻辑思维能力,同时接触计算机,信息技术,电子技术等综合学科知识。 4、机器人课程需要买器材吗? 学习机器人不需要买器材,我们提供价值2000到4000元的不同套装供学 生学习。报名费已经包含了所有费用,其中包括学习费,电池费,器材损耗费,电脑使用费等。另外我们也提供器材租赁服务,当孩子学过一段时间后,如果觉得课堂时间不够或者有比较好的创意想回家练习,在缴纳押金和器材使用费用后,可以租赁相关器材。同时,我们也可以提供一定技术指导。 5、我小孩可以学习哪种机器人课?为什么这样安排? 我们的机器人课程分为综合科学技术课、机器人课程以及比赛集训课程, 每门课程都分初、中、高三个级别,我们建议小学一至三年级学习综合科学技术课,四至九年级学习机器人课程。综合科学技术科主要以学习物理基础知识为主,如常见的传动装置、杠杆装置、桥梁结构、滑轮装置、轮轴装置等,机器人课程是在科学课的基础上增加了传感器技术、自动控制技术等。具体看适应什么课程,这样考虑主要是处于机器人教育的系统考虑和对学生年龄特征以及逻辑思维能 力角度考虑的。跟语文数学一样,什么年龄阶段学习什么具体课程。 6、机器人课程不就是学习搭积木吗? 机器人课程不能说是搭积木,机器人所以器材都是专业教育器材,不是玩 具里的积木块。学习机器人肯定要动手搭建,但动手搭建只是其中一部分。我们是通过创设任务情境,要求合理搭建机械结构,体验简易物理现象,了解简单物理知识。其中都包含了很丰富的物理机械结构的教育知识。

乐高教育与机器人教育差别

近年来,乐高机器人教育市场很火,很多家长心中有所疑问,乐高机器人教育不就是玩乐高积木吗?其实这样的想法是存在错误的。其两者之间还是有所差异性的。 乐高机器人教育与乐高玩具的区别一:器材不同 乐高积木玩具大多是在商场里看到的得宝系列、城市系列、星球大战系列、等等的拼插玩具,可以根据搭建步骤图进行搭建。而乐高教育是使用专门的乐高教具,其具有独特的乐高配件,比如齿轮、蜗轮、杠杆、黄粱等,更突出教育性。 乐高机器人教育与乐高玩具的区别二:侧重点不同 乐高玩具代表的是快乐,是无限的想象,是创意的未来。在搭建中感受到搭建的快乐和完成后的成就感。能够建立孩子的自信且锻炼孩子动手能力。而乐高机器人教育是有着系统化的课程体系,为3到16岁儿童打开了发现和探索世界的大门。乐高课程是乐高教育研发部门和麻省理工以及剑桥等学校的科学家及教育心理学家合作开发的,根据不同年龄(3-16岁)特点设计,涉及学科内容包括科学、技术、数学、设计、社会学等,既适用于

课堂教学,也可以作为课外活动和技能培训内容。让儿童深入体验机械的奥妙,并尝试设计、改变。同时,观察机械结构上的特征与特性及各项机械原理,并且能够描述分析组装过程中的机械原理,认识各种机械的运动方式、学习作品中的数学和物理原理、了解机械装置在生活中的应用、注重机械结构上的观察与描述说明。在后期通过机械结构,传感器应用,程序应用进行研究探索性学习,进入机器人编程阶段。培养孩子们的逻辑思维能力、团队合作能力等。 乐高机器人教育与乐高玩具的区别三:学习环境不同 乐高教育不像孩子可在家独立拼搭,而是一个团队一起。和小朋友们一起玩乐高,孩子之间会有学习和互动的过程,可以培养孩子们团队合作、沟通交流的能力、向他人学习优点的好习惯,也会让孩子们学会接受自己的失败,加强抗挫折的能力。 以上就是关于两者之间的差异介绍,仅供广大人士进行参考~投资有风险,加盟需谨慎

乐高机器人教育【路佰得】

乐高机器人教育【路佰得】 路佰得机器人 一、乐高教育“4C”教学法 4C”顾问式的教育解决方案,即:联系(connect)、建构(construct)、反思(contemplate)和延续(continue),是乐高教育根据儿童获取知识的过程和学习效果而设计的,是建立在儿童心理学家皮亚杰建构论的理念基础上的。皮亚杰认为儿童认知发展是通过认知结构的不断建构和转化而实现的,即儿童在主动地探索外部世界过程中,通过同化功能,将探索的新知识融入原有的认知结构中;通过顺化功能,不断改变原有的认知结构,形成新的认知结构的过程。乐高教育反对传统的单向灌输,反对把语言文字作为获取知识的捷径,认为教育就是要为儿童带来更多的可能性去创造和发现,教育在于给儿童创设学习的情境,帮助儿童在与情境中的人、事、物相互作用的过程中主动建构知识。 乐高机器人授课模式 乐高机器人的授课模式一般分为二种: 1、主题资源式。主题资源式就是教师在授课的时候要围绕一个主题来进行授课,围绕这一主题,让学生自己动脑、动手去收集与这一主题有关的信息。 2、任务驱动式: 这点相信大家应该都不陌生,是我们平时在授课的过程中经常采用的一种模式。当时在培训的时候,老师也是采用这种方式为我们培训的,他让我们八人为一小组,任务就是让小车在不安装马达的情况下动起来,看哪组制作的小车跑得最远。先是让我们总结能让小车动起来的方法,在场的信息技术教师大概总结了两种方法,一种是利用橡皮筋,把弹性势能转化为动能;还有一种就是利用气球,把风能转化为动能,整合了物理学中势能与动能转换这一过程。然后各小组根据自己选择的方法开始制作小车。在制作的过程中,又需要考虑到

关于乐高机器人在儿童教育中应用的几点思考_李晓彤

113 关于乐高机器人在儿童教育中应用的几点思考 李晓彤 世界著名玩具制造商乐高公司的教育专家以及美国塔夫茨大学、卡内基梅隆大学的教育专家经过多年的研究和实践编写而成的乐高课程在欧美国家广受欢迎。将“玩具”与“教具”相结合更能激发玩家们的趣味性,同时该产品也更加注重体验的重要性,注重人与机器人玩具的相互交流。以机器人为活动教具的教育形式也应运而生。 1.关于乐高玩具 乐高公司于1932年在丹麦创建,从家族小作坊发展成为世界著名玩具厂商。Lego Mindstorms(乐高机器人)是集合了可编程Lego 砖块、电动马达、传感器、Lego Technic 部分(齿轮、轮轴、横梁)的统称。Mindstorms 起源于益智玩具中可编程传感器模具 (programmable sensor blocks)。第一个Lego Mindstorms 的零售版是1998年上市的Robotics Invention System(RIS)。乐高机器人玩具套件的核心是一个称为RCX 或NXT 的可程式化积木。而乐高推出的WEDO 系列更是一套针对小学生的简单机器人套装。专注于初级教育市场。 2.乐高的教育理念 乐高有其自身的教育理念,它提倡“玩中学”的理念,旨在培养学生的动手能力以及对自然科学、数理、人文、信息技术方面的兴趣。为此,乐高将学习过程分为联系(connect)、构建(construct)、反思(contemplate)和延续(continue)四个循序渐进的过程。而以乐高为品牌的乐高机器人早教中心,其课程包括:乐高机器人课程、乐高积木课程、乐高机器人玩具、儿童 乐高课程、儿童智力教育等乐高培训课程,教学中使用的乐高玩具包括乐高积木玩具、乐高机器人玩具等。 3.儿童的发展特点分析 玩家,通常是指在游戏中的参与者,乐高机器人玩具在玩家的年龄上,主要分为儿童(Child)和青少年(Teenager)。那么在使用乐高玩具进行教学的过程中,首先应该从了解和认识儿童的生理、心理特点、行为习惯开始。 根据皮亚杰的认知发展理论,7岁儿童可以根据经验去思考解决问题,可以借助操作工具来帮助思考问题,能够理解可逆性、守恒性等概念。虽然儿童各个阶段出现的年龄特点会因为智力因素和社会背景以及家庭成长环境的影响有所差异,但现在的儿童具备一定的思考与解决问题的能力。甚至有的孩子还能够通过自己的思考创新花样实现系统编程,超越传统的原有模式,收获挑战。 4.对儿童进行机器人教育的适宜性的几点思考 (1)知识方法:通过对孩子进行机器人编程等知识的讲解,使孩子们在玩的过程中了解齿轮、马达、传感器等机械工程的相关概念,并在搭建组建小颗粒的过程中对建筑架构等相关知识进行了解。 (2)动作技能:儿童在机器人制作的过程中在完成了机器人的编程后,就要对机器人的外形进行组装。在组装的过程中儿童可能会遇到很多实际操作的问题。而这些问题都可以锻炼儿童的实际动手能力,使儿童在动作等各 个方面的协调性得到发展。 (3)认知情感:儿童处于人生的发展阶段,各方面都没有成熟,对新鲜事物感到好奇。提到机器人,儿童会马上想到科幻电影中看到的各种各样的机器人。儿童在乐高机器人组装的过程中,锻炼了孩子手眼协调的能力,培养孩子的动手能力,使孩子善于搭建不同形状的机器人玩具,善于尝试新鲜事物。在机器人制作的过程中,锻炼孩子的创造力、解决问题的能力,同时能够培养孩子的团队合作能力,机器人玩具无限创造性的可能满足了孩子们丰富的想象力和认知需求。 机器人制作在儿童知识、逻辑思维、认知能力、动手能力及创造性思维等方面都有很好的积极影响。相信随着我国经济的不断发展,科技的进步,在不久的将来,会有更多的儿童接触到机器人制作,这对于儿童的早期智力发展有诸多益处。更加注重儿童的智力各个方面的全面发展,培养儿童的创造性思维,也会有更多的人创造出兼顾美感与质量与高科技结合的儿童教育玩具,设计出符合孩子成长需求且具人性化、情感化的产品。将玩具同教具融为一体,寓教于乐,让孩子们快乐成长。参考文献: [1]约翰?拜区特尔(John Baichtal). 乐高神话(The cult of lego )[M].王 睿译.人民邮电出版社,2014.[2]林崇德.发展心理学[M].北京:人 民教育出版社,2009. 
摘 要:随着经济的发展,机器人制作已经进入到广大儿童的童年生活中。而以乐高机器人为代表的机器人教育更是行业中的领航代表。本文以乐高机器人教育为例,通过分析乐高品牌的教育理念以及其设定的课程内容,从儿童早期生长发育特点入手,并结合机器人教育自身所具有的特点,来分析儿童应用机器人教育的适宜问题。关键词:机器人;儿童教育;乐高;适宜性(哈尔滨师范大学,黑龙江 哈尔滨 150025) 收稿日期:2015-10-14文章编号:2095-624X(2016)02-0113-01中图分类号:TS958.22 文献标识码:B DOI:10.14161/https://www.doczj.com/doc/f7317124.html,ki.qzdk.2016.42.090

相关主题
文本预览
相关文档 最新文档