Proceedings of CSCW'00, pp. 155-162, December 2-6, 2000, Philadelphia, PA.

GestureMan: A Mobile Robot that Embodies a Remote Instructor's Actions

Hideaki Kuzuoka, Shin'ya Oyama, Keiichi Yamazaki, Kenji Suzuki, Mamoru Mitsuishi

Hideaki Kuzuoka, Shin'ya Oyama
Inst. of Engineering Mechanics and Systems

University of Tsukuba

1-1-1 Tennoudai, Tsukuba, Ibaraki, Japan

+81-298-53-5258

kuzuoka@esys.tsukuba.ac.jp

Kenji Suzuki

Intelligent Communications Division

Communications Research Laboratory

4-2-1, Nukui-Kitamachi, Koganei, Tokyo, Japan

+81-42-327-7496

ksuzuki@crl.go.jp

Keiichi Yamazaki

Department of Liberal Arts

Saitama University

255 Shimo-Ookubo, Urawa, Saitama 338, Japan

+81-48-858-3604

yamakei@post.saitama-u.ac.jp

Mamoru Mitsuishi

Department of Engineering Synthesis
Faculty of Engineering, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

+81-3-5814-6355

mamoru@nml.t.u-tokyo.ac.jp

ABSTRACT

When designing systems that support remote instruction on physical tasks, one must consider four requirements: 1) participants should be able to use non-verbal expressions, 2) they must be able to take an appropriate body arrangement to see and show gestures, 3) the instructor should be able to monitor operators and objects, and 4) they must be able to organize the arrangement of bodies and tools and gestural expressions sequentially and interactively. GestureMan was developed to satisfy these four requirements by using a mobile robot that embodies a remote instructor's actions. The mobile robot carries a camera and a remote-controlled laser pointer. Based on experiments with the system, we discuss the advantages and disadvantages of the current implementation and describe implications for improving the system.

Keywords

CSCW, remote instruction, mobile robot, embodiment, video-mediated communication.

INTRODUCTION

The authors have been developing systems that support remote instruction on how to use machinery. One of the characteristics of this kind of interaction is that participants' activities are inseparable from artifacts in the physical environment [7,14]. A system should therefore support not only human-to-human interaction but also human-to-artifact interaction.

According to Goodwin [3], when instruction is given face-to-face, operators move their bodies into appropriate positions that allow them to see the shared artifact. The instructor likewise moves in such a way that his view of the shared object is not obstructed by the operators, and makes sure that the operators are watching his/her gestures while instructions are given. The operators in turn express their understanding using words and gestures while they are performing their tasks. During such sequences an instructor and operators use not only words, but also gestures and body arrangement. It should be noted that body arrangement is not static, but changes dynamically during collaboration. Gestures and body arrangements can be monitored naturally when participants are talking face-to-face. When they have to collaborate via video-mediated communication systems, however, these acts easily become disembodied [4]. In other words, because a remote participant's gestures are displayed on a static two-dimensional display, they cannot be shown at a place where local participants can easily be aware of them while they are oriented towards the artifacts they work with.

In the robotics field, many researchers have started work on robot-human interaction. We are especially interested in robot-mediated human-to-human interaction systems such as PRoP [16], Tel-E-Merge [15], and CYBERSCOPE [9]. PRoP (surface cruiser version) consists of a remote-controlled vehicle and a 1.5-meter vertical pole on which a color video camera, microphone, speaker, and color LCD screen are mounted. A remote user can control the robot using a user interface on a PC. As Paulos et al. described, this kind of robot is expected to embody a remote participant's actions.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

CSCW'00, December 2-6, 2000, Philadelphia, PA.

Copyright 2000 ACM 1-58113-222-0/00/0012…$5.00.

Influenced by these studies, we have been developing a robot-mediated communication system called GestureMan. We are especially interested in supporting remote instruction using the robot. Thus the main goal of our research is to derive design implications for a mobile robot that supports remote instruction.

The main difference between our research and PRoP, Tel-E-Merge, and CYBERSCOPE is that we are especially interested in focused interaction that requires precise pointing, precise body arrangement, and frequent mutual monitoring with minimal time delay.

We begin by describing several basic requirements for robot design that we derived from our earlier studies. We then introduce the GestureMan system. Finally, based on our preliminary experiments, we discuss the advantages and disadvantages of our current system and describe additional requirements for improving our robot.

REQUIREMENTS FOR THE ROBOT

We have been developing robot-mediated communication systems that can embody participants' behavior, e.g., the GestureCam [13] and the GestureLaser [19]. GestureCam consists of a camera and a laser pointer mounted on a remote-controlled manipulator with three degrees of freedom of movement. A simple pointing gesture was possible with the laser pointer. Since the robot was placed close to an operator, he/she could involuntarily notice where the robot (instructor) was looking (Fig. 1).

The GestureLaser is a remote-controlled laser spot actuator that aims to enhance the gesturing capability of the GestureCam. GestureLaser reflects the laser emitter's ray off two orthogonal mirrors into the workspace.

Figure 1. GestureCam and an operator.

Figure 2. Schematic diagram of the GestureLaser system.

The laser spot can be moved like a mouse cursor. The instructor controls the location of the laser spot with a mouse. Input from the mouse is sent through the instructor’s computer to the GestureLaser Controller, where it is translated into mirror movement. The instructor can monitor the position of the laser spot as well as objects and operators on an image from a camera unit. It is thus possible for the instructor to treat the laser spot as if it were a mouse cursor (Fig. 2). In this way, the instructor can show various expressions such as rotation and direction by a movement of the laser spot. The smallest step angle of a motor is 0.036°, which allows movement of the laser spot with a precision of about 1 mm at a distance of 2 m, making it appear continuous.
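To make the stated resolution concrete, the following back-of-the-envelope sketch relates the motor's step angle to the laser spot's displacement on a surface. It is our own illustration, not code from the paper, and it assumes one motor step deflects the projected beam by the full 0.036° step angle (a steering mirror can double the beam deflection, so the exact figure depends on the optics):

```python
import math

# Back-of-the-envelope check of the GestureLaser's pointing resolution.
# Assumption (ours): one motor step deflects the projected beam by the
# stated step angle of 0.036 degrees.
STEP_ANGLE_DEG = 0.036

def spot_step_mm(distance_m: float) -> float:
    """Laser spot displacement for one motor step at the given
    projection distance (small-angle approximation)."""
    return distance_m * math.radians(STEP_ANGLE_DEG) * 1000.0

for d in (1.0, 2.0, 4.0):
    print(f"{d:.0f} m -> {spot_step_mm(d):.2f} mm per step")
# At 2 m this gives ~1.3 mm per step, consistent with the paper's
# "about 1 mm" figure and fine enough for the motion to look continuous.
```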

From the experiments with the system, we found that mere laser spot movement can communicate a variety of meanings when used with verbal explanations.

Based on ethnomethodological studies and our own studies with GestureCam, GestureLaser, and other video-mediated communication systems [11], we formulated the following requirements for a system.

(1) Gestural expression requirement: The instructor must be able to freely use not only verbal expressions, but also body movements and bodily expressions (gestures).

(2) Observability requirement: Participants should be able to mutually observe each other's activities. In particular, (i) the operator should be able to see where the instructor is pointing and how the robot (instructor) is oriented; (ii) the instructor should be able to see how the operator is orienting himself/herself towards the indicated object as well as the pointer, when he/she is observing the instructor's pointing; (iii) the instructor should be able to reassure the operator by words and actions that the instructor is aware of the operator's orientation.

(3) Arrangement of bodies and tools requirement: One of the major factors supporting the observability requirement is to allow for the appropriate arrangement of bodies and tools.

For instance, an instructor may position the robot so that he or she can easily see an operator and tools. Also, the robot should be positioned so that an operator is able to see its actions without conscious effort.

(4) Sequential organization requirement: The communication delay could disrupt the timing relations between the participants' actions [17, 18]. Thus sequential and interactive organization of the arrangement of bodies, tools, and gestural expression must be possible without serious time delay due to poor system response or communication latency.

In terms of these requirements, one of the major problems of the GestureCam and the GestureLaser is their low mobility. When tools are dispersed in the environment, an instructor should be able to move around in the space while he/she is giving instructions. Thus in such cases our previous systems cannot always support the arrangement of bodies and tools requirement.

Therefore, in order to mitigate this problem, i.e., to enable dynamic body arrangement and to support the above four requirements, we decided to design a mobile robot that combines the functions of the GestureCam and the GestureLaser. We named this new robot 'GestureMan'. Although the system is still under development, we have been conducting experiments with it. The goal of these experiments is not only to verify the effectiveness of our ideas, but also to find out what kinds of resources participants use and how they organize their bodies for participation as they interact in this new situation [1,5,6,14]. We also want to elicit problems of the current system and discover any design implications that could help improve it.

GESTUREMAN

GestureMan is a mobile robot (Fig. 3). We use ActivMedia Robotics LLC's Pioneer 2-CE as the mobile robot base [21]. A camera unit and the GestureLaser are mounted on a tilting mechanism atop a vertical pole. GestureMan's total height is about 1.2 meters.

The camera's horizontal field of view is about 90 degrees, which is relatively wide. Thus the camera captures not only the intended object, but also participants as they stand close to the object.

The tilting mechanism allows the GestureLaser and the camera unit to tilt back and forth together.

The remote instructor can control the base unit using a joystick (Fig. 4). Forward, backward, right-turn, and left-turn motions of the base unit are controlled by leaning the joystick forward, backward, right, and left, respectively. The speed of GestureMan's motion is proportional to the leaning angle of the joystick.

Tilting can be controlled by pressing two buttons on the joystick, as shown in Figure 4.

Figure 3. GestureMan

Figure 4. Joystick Controller.
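As a concrete illustration of the joystick control scheme described above, the sketch below maps normalized joystick axes to the wheel speeds of a differential-drive base (such as the Pioneer 2-CE). It is our own sketch under those assumptions; the constants and function names are illustrative and do not come from the paper:

```python
# Sketch of the joystick-to-base mapping described above, assuming a
# differential-drive base and joystick axes normalized to [-1, 1].
# The speed caps are illustrative placeholders.
MAX_TRANSLATE_MM_S = 300.0   # hypothetical forward speed cap
MAX_ROTATE_MM_S = 150.0      # hypothetical wheel-speed differential cap

def joystick_to_wheel_speeds(x: float, y: float) -> tuple[float, float]:
    """Map joystick lean (x: right/left, y: forward/back) to
    (left, right) wheel speeds; speed is proportional to lean angle."""
    translate = y * MAX_TRANSLATE_MM_S   # lean forward/back -> move
    rotate = x * MAX_ROTATE_MM_S         # lean right/left -> turn
    return translate + rotate, translate - rotate

# Leaning fully to the right (x=1, y=0) drives the wheels in opposite
# directions, turning the robot on the spot -- the in-place reorientation
# the instructor used between the PC and the control panel.
print(joystick_to_wheel_speeds(1.0, 0.0))
```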

Finally, GestureMan can be controlled wirelessly. We achieve this by mounting the batteries, a notebook PC, a 2 Mbps wireless LAN unit, and a wireless video transmitter on the robot's base. Because the video transmitter uses the 1.2 GHz band, we can maintain high quality in the transmitted image. With the current setup, the batteries last 30 to 40 minutes.

EXPERIMENTS

We conducted several remote instruction experiments to investigate the advantages and disadvantages of the current GestureMan system.

Remote Instruction Experiment

An instructor at the University of Tsukuba gave instructions on how to use machinery to an operator in the workshop at the University of Tokyo. The distance between the two universities is about 50 km. Video and voice were transmitted using a satellite communication system called SCS (Space Collaboration System). SCS is an inter-university satellite network designed for transmitting sounds and images among universities, colleges, and national institutes. Because the transmission bandwidth of the SCS is 1.5 Mbps, the video signal is compressed. Due to the compression time and traveling time, the transmission delay of SCS is about 0.3 seconds in total.

An instructor who was at the University of Tsukuba controlled the robot (Fig. 5). Computer data for robot control was transmitted via the Internet. However, the transmission delay of this computer communication was negligible compared to the video/audio transmission delay.

Figure 5. Instructor's site.

The image from GestureMan's camera was projected onto a 100-inch projection screen in front of the instructor, as illustrated in Figure 5.

An overview of the workshop at the University of Tokyo is shown in Figure 6. The instructor gave instructions on how to use the MC (Machining Center) that is depicted as MC1 in the figure. An MC is a numerically controlled machine tool that can perform milling and drilling automatically. The MC in the workshop could be controlled using a user interface on a PC placed right next to it. At the beginning of each session, both the operator and GestureMan were placed in the initial positions shown in Fig. 6. Soon after the session started, the instructor took the operator to the power switch of MC1. After the instructor made the operator turn the switch on, the instructor took the operator in front of the PC and the control panel. Then, the instructor told the operator to push and turn some switches on the control panel. After that, the operator was told to operate a user interface on the PC to start a previously prepared milling program, and the MC started to cut the work piece. Finally, after milling was done, the operator was told to take the work piece out of the MC and show it to the instructor.

Figure 6. Overview of the workshop at the Univ. of Tokyo.

For verbal communication, the operator used a wireless microphone and wireless headphones.

Four university students from the Engineering department served as operators. None of them had ever seen the MC before.

Only one graduate student served as the instructor. He was quite familiar with both the MC and the user interface on the PC. Because the instructor had never controlled GestureMan before, we had him practice using the system by actually performing remote instruction to one of the authors.

Face-to-Face Instruction

For comparison, we conducted exactly the same instruction task in a face-to-face setting. Only one session was conducted. The instructor was the same student as in the remote instruction experiment. The operator was an undergraduate student who had never used the MC before. While the instructor could do almost anything he wished, we did not allow him to control the MC and the PC directly, as we wished to mimic the constraints of the remote situation.

ANALYSIS OF THE EXPERIMENT

We analyzed the interaction in an attempt to clarify how the operator and the instructor interacted with each other. We were especially interested in seeing if the mobile robot could support the previously stated requirements: 1) gestural expression, 2) observability, 3) arrangement of bodies and tools, and 4) sequential organization. In this way, we tried to identify the advantages and disadvantages of the system.

Controllability of the GestureMan

In order to satisfy the requirement concerning the arrangement of bodies and tools, the instructor should be able to control the robot without excessive frustration: in the worst case, the instructor would simply stop controlling it.

During the experiment, in spite of the transmission delay, the robot was moved more often than we expected. We observed the instructor move the robot for several reasons:

• to guide the operator to certain positions,

• to observe an artifact to work with,

• and to observe an operator's manipulation.

For example, in order to guide the operator to the power switch, the robot had to pass through a narrow path where only one person could get through. It seemed that the instructor did not hesitate to enter the path. We suspect this was because the instructor used his own body as a reference to judge whether the robot could get through or not. Thus we postulate that it is better to design a robot with almost the same width as a human body.

During the session the instructor had to change the robot's orientation between the PC and the control panel. It seemed that he could do so within a reasonable time. This was clearly possible because the robot was designed so that it could turn in place without moving back and forth.

However, we also noticed that the instructor moved the robot's body much less frequently than he moved his own body during face-to-face instruction. One of the reasons may be the transmission delay of the video image. Because of this delay, the image from the robot's camera moved a little after the instructor moved the joystick. Because of this loose control/display coupling, the instructor had to move the robot little by little. We also saw that the instructor sometimes moved the robot much too close to an object, or turned it too far so that he could no longer see the intended object. In such cases he had to correct the robot's position or direction. This problem is one of the major reasons why remote instruction took more time than face-to-face instruction.

This problem occurred because the instructor could not specify the exact position/direction to move/turn to; he could only specify the direction and speed of movement. In order to alleviate this problem, we are planning to employ a virtual reality technique, i.e., an omni-directional treadmill. The omni-directional treadmill was developed by the Virtual Reality Lab at the Univ. of Tsukuba [10, 20]. It enables a user with a head-mounted display (HMD) to walk in any direction in the virtual world. Using this technology, the robot can be controlled to move in exactly the same direction as the user walks. The video image from the robot's camera will be displayed on the HMD. In this way, we expect that an instructor can use his/her own sense of distance/direction to decide how far/much the robot should move/turn. Currently this method is still under development. Since an HMD has the problems of narrow field of view and low resolution, it will be necessary for us to carefully evaluate whether the system is really effective or not.
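As a sketch of this proposed control style, the fragment below converts a walking velocity measured on the treadmill into a speed-and-heading command for the robot. The interface is entirely hypothetical; the paper's treadmill-based controller was still under development:

```python
import math

# Hypothetical sketch of treadmill-based control: the robot mirrors the
# instructor's walking velocity as measured by the omni-directional
# treadmill. All names are ours; the actual system was unfinished.
def walk_to_robot_command(vx: float, vy: float) -> tuple[float, float]:
    """Convert the walker's planar velocity (m/s, treadmill frame,
    vy forward) into a (speed_m_s, heading_deg) command for the robot."""
    speed = math.hypot(vx, vy)                   # walk faster -> move faster
    heading = math.degrees(math.atan2(vx, vy))   # walk direction -> heading
    return speed, heading

# Walking straight ahead at 1 m/s maps to heading 0; stepping to the
# right shifts the commanded heading to the right by the same angle.
print(walk_to_robot_command(0.0, 1.0))
print(walk_to_robot_command(0.5, 1.0))
```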

Observability for the Instructor

During face-to-face instruction, the instructor turned his head toward the operator very frequently (sometimes at intervals of less than five seconds) as he was explaining certain artifacts. He did this to observe the operator's reactions, such as facial expressions and orientation.

In this experiment, GestureMan was mounted with one camera with a 90-degree horizontal field of view. With this wide field of view, the camera could often capture both the object and the operator within one screen (Fig. 7). Thus the video could be used to observe the operator's orientation with respect to the object.

Figure 7. A screenshot from GestureMan's camera.

When the robot and the operator were positioned side by side, however, the camera’s field of view was too narrow to capture the operator.

Another problem was the trade-off between width of field of view and image resolution per object [2]. That is, the field of view of the camera was too wide to capture an image of each object with satisfactory quality. In our experiment, this meant that the instructor could not read the characters on the PC screen or the control panel.
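The following sketch makes this trade-off concrete with a pinhole-camera calculation. The 90-degree field of view is from the paper; the 640-pixel horizontal resolution is our assumption of a typical camera of that era:

```python
import math

# Field-of-view vs. per-object resolution trade-off, made concrete.
# 90-degree FOV is from the paper; 640 pixels is an assumed sensor width.
def mm_per_pixel(fov_deg: float, distance_m: float, h_pixels: int = 640) -> float:
    """Horizontal footprint of one pixel on a surface at the given distance."""
    width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return width_m / h_pixels * 1000.0

print(mm_per_pixel(90, 2.0))  # ~6.3 mm per pixel: panel text is unreadable
print(mm_per_pixel(30, 2.0))  # ~1.7 mm per pixel: zooming in triples detail
# Three radially aligned 30-degree cameras would cover roughly the same
# 90 degrees at this higher resolution -- the three-camera unit's rationale.
```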

In order to satisfy the observability requirement, we are testing a zoomable camera. Although the results are not yet analyzed, it seems that the instructor was generally comfortable with the zooming function. Since the field of view of a camera becomes narrower as it zooms in, however, we have to further analyze how this function affected the interaction.

A more radical approach is to employ a three-camera unit. A three-camera unit is a combination of three cameras, each of which has a narrower field of view. The three cameras are aligned radially and close to each other to make the blind areas between cameras as narrow as possible. In this way the unit achieves not only a wider field of view but also a higher resolution (Fig. 8). Of course, this method requires much more bandwidth for video transmission. It is worth studying this kind of system, however, because such high-bandwidth networks will be widely available in the future.

Because of this, we are starting to test a three-camera unit. Video images from the three cameras were transmitted from the Univ. of Tsukuba to CRL (Communications Research Laboratory) in Koganei City using a 135 Mbps ATM network, and displayed on the three-screen system named the UNIVERSE [22] (Fig. 9).

Laser Pointer

As we reported in our previous paper [19], the GestureLaser is effective because it can project a pointing gesture (a laser spot) directly on an object.

During this experiment, the laser was constantly used to point at switches on the control panel, buttons on the user interface of the PC, and tools placed in the environment (Fig. 10). The laser spot was conspicuous enough for the operator, and also small enough that it could be projected onto a small button. Surprisingly, in spite of the transmission delay, the instructor could point at buttons precisely enough for effective instruction. Perhaps the most telling moment came when the instructor said that it would be impossible to give instruction without this kind of pointing device.

The instructor complained, however, about the transmission delay, because he sometimes had to adjust the position of the laser spot a few times to correctly point at an intended object. In order to satisfy the fourth requirement, sequential organization, we have to minimize the time delay due to the system. To alleviate this problem, we are considering employing a touch panel display so that the laser spot automatically points at the place corresponding to where the instructor touches on the screen.

Figure 8. Top view of the Three-camera unit.

Figure 9. Three-screen system at CRL.

Figure 10. Using a laser beam to point at a button.
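One way the touch-to-point mapping proposed above could work is sketched below: a touched pixel is converted to pan/tilt angles through an inverse pinhole projection, assuming the laser is approximately co-located with the camera. The model, names, and numbers are our own illustration, not the paper's design:

```python
import math

# Sketch of the proposed touch-panel pointing: a touch on the video image
# is converted to pan/tilt angles for the GestureLaser so the spot lands
# on the touched object without iterative manual adjustment. Assumes the
# laser sits approximately at the camera and a simple pinhole model;
# the field-of-view and resolution numbers are illustrative.
H_FOV_DEG, V_FOV_DEG = 90.0, 70.0   # assumed camera fields of view
WIDTH, HEIGHT = 640, 480            # assumed touch screen resolution

def touch_to_mirror_angles(px: float, py: float) -> tuple[float, float]:
    """Map a touched pixel to (pan, tilt) in degrees off the optical axis."""
    fx = WIDTH / (2.0 * math.tan(math.radians(H_FOV_DEG) / 2.0))
    fy = HEIGHT / (2.0 * math.tan(math.radians(V_FOV_DEG) / 2.0))
    pan = math.degrees(math.atan((px - WIDTH / 2.0) / fx))
    tilt = math.degrees(math.atan((HEIGHT / 2.0 - py) / fy))
    return pan, tilt

# A touch right of center pans the laser right by ~27 degrees.
print(touch_to_mirror_angles(480, 240))
```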

Robot Awareness

We are interested in analyzing whether a mobile robot can satisfy the observability and the arrangement of bodies and tools requirements. For instance, we want to know if the operator could be aware of the robot's orientation.

Awareness of Orientation

In our previous experiments, when a static camera was used to mediate remote instruction, the instructor’s verbal expressions such as “right” and “left” often caused misunderstandings because the operator could not be aware of the direction of the instructor’s gaze through the camera. Most of the time during this experiment, the robot and the operator placed their bodies close to each other and oriented themselves in the same direction. Also, the instructor often used verbal expressions such as “right” and “left” but no misunderstanding by the operator was observed. We believe that the operator could be aware of the robot’s orientation intuitively during the course of interaction.

Predictability

During face-to-face instruction the instructor often looked at an object before he pointed at it to explain something. Very often the operator was aware that the direction of the instructor's gaze had changed, and looked in the same direction before the instructor actually performed the pointing gesture [12]. Thus the operator could involuntarily predict the location of the next object to be explained. During the sessions of this experiment, the instructor changed the orientation of the robot between the PC and the control panel several times. In such cases, most typically, the instructor first turned the body of the robot and then tilted the camera to capture the image of the object on which instruction was to be given. Finally, the instructor started to say things such as "Please look at this control panel."

Figure 11. An example in which the operator was aware that the robot changed its gaze direction: (a), (b), (c). Solid lines show the robot's gaze direction; dotted lines show the operator's gaze direction.

According to our video analysis, we could observe several cases where the operator was aware of the change in the robot's orientation and looked in the same direction before the instructor uttered a word. Figure 11 shows a typical example of such a case. At first both the robot and the operator were looking at the PC (a). Soon afterwards, the robot started to change the direction of its gaze toward the control panel. While it was moving, the operator noticed that the robot was moving and thus looked at the camera (b). Two seconds later, the operator changed his gaze direction toward the control panel (c). Again, the instructor did not utter a word during this period. It can be assumed that this prediction was possible because the robot could position its body close enough to the operator that he could easily be aware of its movement.

Different operators, however, used other resources to notice the robot's motion and to predict the next place to be explained. One operator, for example, said that he noticed that the robot had changed its orientation because the laser spot moved off the PC screen. He then kept following the laser spot until it reached the control panel. It was an interesting discovery that the laser spot can show not only gestures, but also the focal point of the remote instructor.

In order to support the observability requirement, it may be better to equip the robot with redundant resources that indicate its orientation. Thus we are considering mounting a flashlight on the camera to brighten the camera's field of view. Mounting a dummy head on the camera may also help the operator intuitively notice the instructor's gaze direction.

CONCLUSIONS

In order to support remote instruction on how to use machinery, we developed a mobile robot-mediated communication system named GestureMan. GestureMan was designed so that it partially enables 1) the instructor to use pointing gestures and body movements, 2) both the operator and the robot to take an appropriate body arrangement to see and show gestures, 3) the instructor to monitor operators and objects, and 4) participants to organize the arrangement of bodies and tools and gestural expressions sequentially and interactively.

From our experiments, we believe that a remote-controlled mobile robot has the ability to embody a remote instructor's actions. In other words, because the robot can position its body close to the operator, its actions can easily be noticed by that operator.

Ruhleder and Jordan pointed out that time delay breaks down distributed interaction [17]. Consequently, we need further study on the effect of time delay on remote instruction. It was interesting to note, however, that remote instruction was performed effectively with a relatively low-bandwidth video/audio communication line and a 0.3-second transmission delay. This means that next-generation cell phone systems such as IMT-2000 (which will have 2 Mbps bandwidth) will be sufficient as a communication line for GestureMan, and that this kind of system can be used anywhere in the world in the near future.

ACKNOWLEDGMENTS

The research was supported by the Telecommunications Advancement Organization of Japan; the Japan Society for the Promotion of Science, Grant-in-Aid for Scientific Research (B), 2000, 12558009; the Casio Science Promotion Foundation; Oki Electric Industry Co., Ltd.; and the Venture Business Laboratory. The experiment with the ATM network was supported by the Communications Research Laboratory. We extend our thanks to Mr. Atsushi Shimizu and the other members of the Networked Manufacturing Laboratory of the University of Tokyo for helping us undertake the experiments using SCS.

REFERENCES

1. Dourish, P., Adler, A., Bellotti, V., and Henderson, A., Your Place or Mine? Learning from Long-Term Use of Video Communication, EuroPARC Technical Report EPC-1994-105, Rank Xerox EuroPARC, UK, 1994.

2. Gaver, W., The Affordances of Media Spaces for Collaboration, Proc. of CSCW'92 (1992), pp. 17-24.

3. Goodwin, C., Professional Vision, American Anthropologist 96 (1994), pp. 606-633.

4. Heath, C., and Luff, P., Disembodied Conduct: Communication through Video in a Multi-media Environment, Proc. of CHI'91 (New Orleans, 1991), pp. 99-103.

5. Heath, C., and Luff, P., Media Space and Communicative Asymmetries: Preliminary Observations of Video-Mediated Interaction, Human-Computer Interaction, 7(3) (1992), pp. 315-346.

6. Heath, C., Luff, P., and Sellen, A., Reconsidering the Virtual Workplace: Flexible Support for Collaborative Activity, Proc. of ECSCW'95 (1995), pp. 83-99.

7. Heath, C., The Analysis of Activities in Face to Face Interaction Using Video, in David Silverman (ed.), Qualitative Sociology, London: Sage (1997), pp. 183-200.

8. Hindmarsh, J., Fraser, M., Heath, C., Benford, S., and Greenhalgh, C., Fragmented Interaction: Establishing Mutual Orientation in Virtual Environments, Proc. of CSCW'98 (1998), pp. 217-226.

9. Hiraiwa, A., Fukumoto, M., and Sonehara, N., Proposal of CYBERSCOPE World, Symbiosis of Human and Artifact, Elsevier Science B.V. (1995), pp. 429-434.

10. Iwata, H., Walking About Virtual Environments on Infinite Floor, Proc. of IEEE 1999 Virtual Reality Annual International Symposium (1999), pp. 286-293.

11. Kato, H., Yamazaki, K., Suzuki, H., Kuzuoka, H., Miki, H., and Yamazaki, A., Designing a Video-Mediated Collaboration System Based on a Body Metaphor, Proc. of CSCL'97 (Toronto, Canada, 1997), pp. 142-149.

12. Kuzuoka, H., Spatial Workspace Collaboration: A SharedView Video Support System for Remote Collaboration Capability, Proc. of CHI'92 (1992), pp. 533-540.

13. Kuzuoka, H., Kosuge, T., and Tanaka, M., GestureCam: A Video Communication System for Sympathetic Remote Collaboration, Proc. of CSCW'94 (October 1994), pp. 35-43.

14. Nardi, B., Schwarz, H., Kuchinsky, A., Leichner, R., Whittaker, S., and Sclabasi, R., Turning Away from Talking Heads: The Use of Video-as-Data in Neurosurgery, Proc. of INTERCHI'93 (1993), pp. 327-334.

15. Noma, H., Sugihara, T., and Miyasato, T., Development of Ground Surface Simulator for Tel-E-Merge System, Proc. of IEEE Virtual Reality 2000 Conference (2000), pp. 217-224.

16. Paulos, E., and Canny, J., PRoP: Personal Roving Presence, Proc. of CHI'98 (1998), pp. 296-303.

17. Ruhleder, K., and Jordan, B., Meaning-Making Across Remote Sites: How Delays in Transmission Affect Interaction, Proc. of ECSCW'99 (1999), pp. 411-429.

18. Tang, J., and Minneman, S., VideoDraw: A Video Interface for Collaborative Drawing, Proc. of CHI'90 (1990), pp. 313-320.

19. Yamazaki, K., Yamazaki, A., Kuzuoka, H., Oyama, S., Kato, H., Suzuki, H., and Miki, H., GestureLaser and GestureLaser Car: Development of an Embodied Space to Support Remote Instruction, Proc. of ECSCW'99 (1999), pp. 239-258.

20. Yano, H., Noma, H., Iwata, H., and Miyasato, T., Shared Walk Environment Using Locomotion Interfaces, Proc. of CSCW'2000 (2000), to appear.

21. URL: http://www.activmedia.com/ROBOTS/index.html

22. URL: http://www.crl.go.jp/jt/jt321/mvl/thema2.html
