
Virtual Reality in Edutainment: A State of the Art Report

Michael Brandejsky, Florian Kilzer

Abstract

In this paper, we present the current state of Virtual and Augmented Reality Environments used in education as well as entertainment. We provide an overview of Edutainment applications and state-of-the-art research projects, current Mixed Reality projects in museums, and corresponding state-of-the-art interfaces for utilizing Augmented Reality.

Further, we explore achievements in medicine, industry, emergency response and the military, and evaluate the currently existing technology for games and sports.

Keywords: Virtual Reality, Augmented Reality, Massively Multiplayer Online Games, Disney, Emergency Response, Disaster Response, Medicine, Military, Museums, location-based

1 Introduction

Edutainment is a portmanteau of education and entertainment, describing the union of learning and entertainment as a motivating factor. Few papers and research projects really target Edutainment itself; most consider education, training and entertainment as different aspects. Therefore we divided our paper into these sections, giving detailed information in subsections that each deal with a different aspect of Edutainment: Education and training can be found in different serious and commercial application areas, for example medicine, the military and industry; entertainment is provided in areas such as gaming and sports. Covered in another section, most real Edutainment applications can be found in museums.

Finally, we introduce interfaces used in Virtual Reality and conclude with an evaluation of the proposed strategies and systems.

2 Education

Besides using VR/AR applications in advanced simulations to facilitate understanding of e.g. geometry or maths problems in classrooms and to enable collaboration, VR is used for training and education in many different areas, described in detail in the following topics. [Shelton 2002]

2.1 Medical Training

Since their introduction in 1991, Virtual Environments have been created and used in many areas of medicine, including:

• Training and Education

• Surgical Planning

• Image Guidance

• Tele-Surgery

Though still in its infancy, Virtual Reality in medicine is gaining recognition. Acceptance of VR training however has been slow, partly because of scepticism within the medical community. [Gorman et al. 1999]

Surgery is highly costly and risky, and many medical errors are caused by human factors. In 2001 a survey of 139 general surgery program directors showed that technical skills training outside the operating room was considered vital and reduced the patient's risk, and that VR could provide this training. [Haluck et al. 2001] Simulators enable training in a safe environment, without the risk of mistakes with catastrophic results. Currently only a few fully functional, commercial simulators for surgical education exist on the market, such as MIST-VR (Minimally Invasive Surgery Trainer) and the PreOp endoscopic simulator.

Laparoscopy Laparoscopic procedures require great skill on the part of the surgeon. Compared to open surgery, minimally invasive surgery is less traumatic to the patient. Only three or four small incisions are necessary to insert the instruments: the laparoscope, a tube containing an optical fibre to transmit light into the abdominal cavity. A camera, mounted on the laparoscope, displays internal body structures and the laparoscopic tools on a monitor.

Difficulties in laparoscopy include restrictions on tool movement: arbitrary tool movement is not possible. As another example, moving the tools through the laparoscope appears inverted relative to the direction of the hand movement.

Thus, there is interest in developing effective training systems for laparoscopy. The main advantages of such systems are that the opportunity to train is constantly available and that mistakes can be made without exposing a patient to risk, unlike in real life.

Surgeons who were trained with VR made significantly fewer intraoperative errors. Contrary to the beliefs of many physicians, even low-fidelity VR simulators that did not feel like real patients could be a substitute for clinical experience. However, for surgery in areas of highest risk (such as in carotid arteries), a much higher level of simulation fidelity is necessary for acquiring safe skills. [Gallagher and Cates 2004]

To train efficient techniques, e.g. gazing at the target instead of following the tool as novices tend to do, a training environment was developed utilizing a Laparoscopic Impulse Engine to control a virtual tool on a 2D screen, enhanced by eye-tracking hardware that records the eye movements. (see Figure 1) [Law et al. 2004]

Figure 1: A laparoscopic training environment providing information about trainees' eye-gaze strategies.

Intraocular surgery Intraocular surgery is a highly demanding microsurgical intervention, working with instruments of at most 0.9 mm diameter in the human eye; mistakes in the micrometer range can have devastating consequences. Haptic feedback is barely existent due to the softness of the tissue, so surgeons have to rely on their view through a stereo microscope and their experience. Training on animal cadavers is not sufficient given the differences in anatomy, and human bodies with some pathologies are difficult to find.

The solution can be Virtual Reality simulations; however, to be accepted in surgery training, they have to be convincing in giving users the feeling that they are actually conducting the surgery.

The EyeSi system (as depicted in Figure 2) uses an optical tracking system for locating the used instruments. Through proprietary hardware, image processing is performed in less than 50 ms, which is vital for a realistic simulation without disturbing latency. Optical tracking provides contact-free use of instruments, with a higher accuracy than magnetic systems. [Wagner et al. 2002]

Through a stereoscopic display, a high-quality image is rendered, considering very complex collision detection of tissue and instruments and a physically correct simulation of forces (e.g. a realistic behavior when a membrane tears).

Figure 2: The EyeSi training system compared to the microscope used in surgery.

Endoscopy Gastroscopy and colonoscopy use flexible, tube-shaped devices that are inserted into the digestive system. Haptic cues provide the major means of orientation; a simulation environment that provides haptic feedback was found to be essential in training physicians. [Körner and Männer 2002]

Acupuncture training Acupuncture is a popular measure in traditional Chinese medicine to treat various diseases by stimulating special points on the body. Knowing the exact locations, and respectively their effects when stimulated, is essential. Mistakes in the treatment can cause dizziness, pain or internal bleeding.

Before VR, students could only train on real patients or artificial human models, with considerable disadvantages, i.e. a high risk for the patient or the unrealistic force feedback from the mannequin.

Using a highly sophisticated mathematical model and a 6-degrees-of-freedom PHANToM stylus haptic pen as a metaphor for the virtual needle, [Heng et al. 2004] created a realistic tactile feeling of skin resistance. (Figure 3)

The user interface was designed intuitively to implement the following functions:

• Moving Mode: grab, position and rotate the virtual patient on a 2D screen

• Acupuncture Atlas: visualize acupuncture points on the virtual body

• Needle practice: a given point has to be hit exactly, regarding position and depth of the needle

• Training results: the correct position and the (overall) accuracy of a trainee are shown

Figure 3: Acupuncture training system using haptic feedback for simulating skin resistance.

2.2 Emergency Response

After the attacks of 9/11, many different approaches for training emergency responders in case of disasters have been proposed. Especially in the United States, many military-sponsored research funds have been established or focused on this topic. All systems can be seen both as emergency support for real events and as training and education environments, since training is essential for a system to be operable in case of a real emergency.

The Emergency Response Modelling and Simulation System (ERMSS) is a simulation tool for coordination in case of emergencies. [Jain and McLean 2003]

It places the incident commander into a realistic environment, where he or she is able to get information such as a detailed street map, including the location of emergency personnel and resources, structural information about a supposedly attacked building and nearby buildings, and information about traffic. The incident commander has to estimate the number of people in the building and manage the coordination of hospital locations and capacities.

Another development targeted at first responders is BioSimMER, an Immersive Virtual Reality application that provides the training basis for a simulation in which a biological attack has been carried out on a small airport. (Figure 4)

Figure 4: A virtual patient being treated, simulating realistic symptoms.

The newly established Homeland Defense Center Network, amongst other security measures, aims to create VR simulations for training first responders and security personnel, providing visualization tools for areas such as explosive device handling, collaboration in chemical decontamination, infrastructure control and fire-fighter training. [Corley and Lejerskar 2003]

The Immersive VR Decision Training System by [Ponder et al. 2003] features audio-visual immersion, utilizing magnetic position tracking, surround sound and a stereoscopic rear projection with active stereo shutter glasses to avoid the cyber-sickness effects that can occur when using HMDs.

The system supports a wide range of different interactive scenarios that train correct situation assessment and decision making in stressful situations.

Due to the lack of correct haptic feedback for examples such as cardiopulmonary resuscitation, the system concentrates on decision making: the trainee orders a semi-autonomous virtual assistant to execute medical measures using voice commands. (Figure 5)

Command and Control for Disaster Response According to [Rosen et al. 2002], the current US Federal Response Plan is designed only for limited disasters, i.e. locally limited effects of diseases or terror attacks. Unlimited disasters, e.g. the outbreak of diseases like smallpox, luckily never actually were the reason for the employment of this plan. However, different exercises attempted to simulate an unlimited disaster and the response plan's outcome, e.g. the TOPOFF exercise, which tried to train personnel for the release of the plague. It had to be aborted after a week, when the health system collapsed.

Figure 5: Immersive VR Decision Training System: concept and key elements

As a consequence, the current organizational structures and capabilities (implemented as Cybercare) are not believed to be able to handle managing e.g. a biowarfare attack and need to be updated, ideally including techniques of Virtual Reality for better handling the great amount of data.

In each phase of the response (detection, control and treatment of a disaster), decisions are made based on the currently available data, which need to be visualized quickly and correctly to be helpful in the decision process. Most aspects of command and control require correct judgment and intuition and can therefore be performed only by skilled, experienced personnel; other situations can be automatically and quickly interpreted by computers.

Heads-up displays and Augmented Reality provide additional information sources for commanders and other personnel, so they can monitor incoming data while moving in the command center during a crisis.

A key feature of a new system should be the distribution of command to different locations, to not have a single, vulnerable control center and to be able to respond quickly from different positions in the country. Distributed collaboration enabled via real-time audio and video, as already implemented for example in medical applications, is one part. The next step in remote cooperation needs to be Tele-Immersion: requirements are a 3D environment, including 3D scanning and display technology, tracking, and a secure, fast and reliable network infrastructure. Enhanced requirements are surround audio, robotics and haptics.

Improved user interfaces and data displays also take peripheral awareness into account: implicit communication enables people to achieve mutual understanding and collaboration without transmitting a large amount of information.

Another future milestone for Tele-Immersion is the conversion of current two-dimensional camera images into a 3D video stream by acquiring and transmitting structural information. These will be displayed on an auto-stereoscopic display, creating an effect like a "look through a 3D window" to the other side of the communication. Current experiments, however, are not yet ready for operation, mainly due to a lack of fidelity.

These systems, when finally implemented, need to be tested and their users prepared to prevent errors in a real scenario, and to fulfill other guiding principles: to respond quickly, and to sustain and coordinate the response. Training personnel is achieved by already using the new technologies in the current healthcare system or by adapting the system with data for exercises.

All papers about new response technologies merely describe possible future application purposes of VR technology and can be seen as case studies. They do not introduce new techniques, but strategies for how to use currently available technology to enhance the system. Research is being conducted in this direction, but it is not ready for use.

2.3 Military Training

Training in peacekeeping operations The correct behavior of soldiers in peacekeeping missions is a contemporary issue. Incorrect behavior in dangerous situations can easily result in casualties. Therefore the military emphasizes training before such missions. Virtual Reality applications have proven to effectively facilitate the training's success.

The topic considered in this paper by [Loftin et al. 2004] is human interaction at a military checkpoint, "forcing the training of social skills."

The system consists of two different setups: The first experiment took place in a fully immersive CAVE environment that included two 3 x 3 meter screens at a resolution of 1280 x 1024 pixels. For creating three-dimensional vision, LCD CrystalEyes stereo shutter glasses were used; the position of the head was tracked via Ascension Technology's Flock of Birds, a six-degrees-of-freedom magnetic tracker with a single sensor attached to the glasses. Three computers were connected to a computing grid to provide for the great amount of data that had to be processed. For simple communication, only head tracking and voice recognition were used as input for interactivity. IBM's ViaVoice provided the recognition software; 1500 sound and voice files were used as responses.

The second experiment included a non-immersive 3D desktop system. For different scenarios, scripting was implemented to create the task, setting, and virtual characters, so that typical checkpoint conditions can be recreated as closely as possible and each situation throughout the session is unique.

The software JACK was used for creating realistically looking and behaving human 3D models. It provides an environment that creates high-degrees-of-freedom human models based on simple parameters like height and colors, also including appropriate animation of extremities as well as programmable facial animation.

For the experiments, two situation types were implemented: a standard, considered "boring" task and critical situations. The first contained neutral scenarios in which, for example, a car stops at the checkpoint and has to be inspected. Driver and passengers (represented as virtual agents) have to identify themselves to the trainee, while the trainee's virtual partner provides cover during the interaction. (see Figure 6) Special situations, e.g. a false identification, a car that approaches too fast or an ambulance with an emergency patient, appeared randomly throughout the training session.

Figure 6: Interaction with a virtual driver in the CAVE environment.

The results of the experiments in the system prototype showed that training in a virtual environment could significantly reduce error rates; studies also showed better results for training in the immersive CAVE, compared to the desktop system.

Air Combat Simulator [Huang et al. 2004] propose a system that is capable of simulating a chase between multiple aircraft. To achieve the Virtual Reality effect, two different platforms for the chased airplane and the chaser were used. The first is a Stewart platform operated by a three-degrees-of-freedom force-feedback joystick. The second is a system that consists of a two-degrees-of-freedom camera, mounted on a pan-tilt servo platform in front of a 2D video screen. A computer grid and the standard TCP/IP protocol are used for calculation, rendering and communication between the two subsystems.

Basically, the camera of the UAV needs to perform motion detection of the target, process the data and, as a result, steer the platform on which the camera is mounted to keep the target centered. Two main different tracking approaches are introduced that consider not only the main identified target but also less probable targets.
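The centering behaviour described above (detect the target's position, then steer the pan-tilt platform so the target stays in the image center) can be sketched as a simple proportional control loop. This is an illustrative assumption, not the controller of [Huang et al. 2004]; the gain, image size and servo response are invented for the example.

```python
# Hypothetical sketch of the pan-tilt centering loop: given the
# target's pixel position, compute proportional pan/tilt corrections
# that drive the target back toward the image center.

def centering_step(target_px, image_size=(640, 480), gain=0.1):
    """Return (pan, tilt) corrections in normalized units."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex = target_px[0] - cx          # horizontal error in pixels
    ey = target_px[1] - cy          # vertical error in pixels
    # Proportional control: the correction opposes the error.
    return -gain * ex / cx, -gain * ey / cy

# Simulate the servo loop converging on an off-center target,
# assuming one unit of pan/tilt shifts the target by half an image.
x, y = 600.0, 100.0
for _ in range(50):
    pan, tilt = centering_step((x, y))
    x += pan * 320
    y += tilt * 240
```

In practice the detected position would come from the motion-detection stage, and the gains would be tuned to the servo dynamics.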

The first plane is completely controlled by human operators; the aim of the pursuit is to train military personnel for attacking or escaping airplanes in combat and to test new algorithms for the steering of an unmanned air vehicle (UAV). The autopiloting technology can also be used for supporting a human pilot during a simulated or, in future versions, real chase. The aim of the developed system lies not only in military applications; it also wants to provide more refined methods for the entertainment industry in flight simulator games.

Intelligent virtual agents In training battle situations, human factors like visual and auditory focus and range need to be taken into account. Studies show that the training impact is greater if virtual opponents react more human-like using techniques of awareness. An awareness model is necessary to determine e.g. when a virtual agent can see or hear, and therefore respond to, a human interactor.

A human, or in succession a virtual avatar, is able to perceive another person or object if constraints defined by sense acuity are met. Visual acuity measures the eye's ability to resolve fine detail and has to take human factors, lighting conditions and the contrast between target object and background into account. For hearing, factors like spatial acuity (distance to the sound source) and frequency acuity (possible hearing range) need to be processed.

Simulating their human counterparts, virtual agents should have an enhanced perception which also takes lateral vision and hearing into account. To create a sophisticated training environment, e.g. when approaching artificial enemies for an attack, a virtual agent needs to react to visual events that are located slightly out of visual range by turning its head and bringing the source of the event into focus.

The paper by [Herrero and de Antonio 2005] defines algorithms for physically correct processing of sound and light, so that constraints of awareness and human-like perception are fulfilled, to enhance the experience for the viewer and to create a more sophisticated artificial intelligence in virtual environments.
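As a minimal sketch of such an awareness model, a 2D agent might gate its reactions on sense-acuity constraints like these. The thresholds and the inverse-square attenuation are generic assumptions for illustration, not the exact algorithms of [Herrero and de Antonio 2005].

```python
import math

def can_see(agent_pos, agent_heading_deg, target_pos,
            fov_deg=120, max_range=50.0):
    """True if the target lies inside the agent's field of view and range."""
    dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return False
    angle_to = math.degrees(math.atan2(dy, dx))
    # Signed angular difference, wrapped to [-180, 180).
    diff = (angle_to - agent_heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

def can_hear(agent_pos, source_pos, source_db, hearing_threshold_db=10.0):
    """True if the attenuated sound level still exceeds the threshold."""
    dist = max(math.hypot(source_pos[0] - agent_pos[0],
                          source_pos[1] - agent_pos[1]), 1.0)
    # Roughly 6 dB drop per doubling of distance (inverse-square law).
    received = source_db - 20 * math.log10(dist)
    return received >= hearing_threshold_db
```

An agent update loop would call these tests before triggering a head-turn or a combat response, so stimuli outside the modeled acuity are simply ignored.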

2.4 Training for Industrial Purposes

A considerable aspect for Mixed Reality in industry is training cost, when training could damage expensive or hard-to-repair equipment, or if machines needed to be stopped for training purposes. Tele-Training, or Mobile Collaborative Augmented Reality (MCAR), is a continuation of the concept of mobile computing: users, able to access information anytime and anywhere utilizing wireless networks, working in a CSCW environment enhanced by computer graphics. [Boulanger 2004]

Using Augmented Reality, the trainee is able to operate in a known environment, using his or her traditional tools, rather than being placed in an unknown, new experience through immersive VR.

Figure 7: With Mobile Collaborative Augmented Reality, a user is trained to change a defective chip on a circuit board of an ATM machine.

Position and rotation of the user's head-mounted display are computed either traditionally by markers or by using camera tracking: the position of the camera is calculated by determining the 3D motion of the camera. Using image processing algorithms, the relative motion can be calculated from the image flow. With this information and a given starting point in a known environment, the absolute position can be estimated. Costly objects can be represented by virtual objects superimposed onto the real world. Collaboration for multiple trainees and possibly remote trainers is enabled; such collaboration functions are audio/video conferences, remote presence, manipulation and pointing. (Figure 7)
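The dead-reckoning idea above, accumulating per-frame relative motion from a known start pose to obtain an absolute position, reduces in a 2D toy form to the following sketch. The pose representation and the motion values are assumptions for illustration, not the tracking pipeline of the cited system.

```python
import math

def integrate_pose(start, relative_motions):
    """Accumulate per-frame (forward, turn) motion estimates.

    start: (x, y, heading_rad) known starting pose.
    relative_motions: list of (forward_distance, turn_rad) per frame,
    as would be estimated from the image flow.
    """
    x, y, heading = start
    for forward, turn in relative_motions:
        heading += turn                     # apply the rotation first
        x += forward * math.cos(heading)    # then translate along heading
        y += forward * math.sin(heading)
    return x, y, heading
```

Because each step only adds a relative estimate, errors accumulate over time; this is why a known starting point (and, in practice, periodic correction against markers or known scene features) matters.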

Airplane inspection In aircraft inspection, the human inspector's experience plays a critical role; training on (virtual) airplanes is vital to improve an inspector's performance and technique for finding defects.

The system of [Duchowski et al. 2001] used an Ascension Technology Corporation Flock of Birds magnetic tracker and an HMD combined with a binocular eye tracker.

The training environment was a rebuilt Lockheed L1011 cargo bay, in which all hull plates had been removed and were replaced by textured versions in Virtual Reality, providing typically found defects such as cracks, corrosion, abrasion and holes.

The trainee had to inspect and find all defects, first searching alone, later under the guidance of an expert. The eye movement was recorded and mapped onto the hull. Afterwards this data was analyzed and discussed, i.e. the areas of interest, the eye-movement path between these areas and the time spent at each fixation point. Using this technology, inspectors can be trained to use a more systematic and less random search strategy, called feed-forward. (shown in Figure 8) [Sadasivan et al. 2005]

Figure 8: Feed-forward search strategy used by an inspection expert to identify failures in the hull of a simulated airplane.
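A hypothetical sketch of the analysis step described above: turning recorded gaze samples into dwell times per area of interest and the path of transitions between areas. The input format is an assumption, not the data of [Duchowski et al. 2001].

```python
def analyze_gaze(samples):
    """Summarize gaze samples mapped onto labeled hull areas.

    samples: list of (timestamp_s, aoi_label), sorted by time.
    Returns (dwell_time_per_area, transition_path).
    """
    dwell = {}
    path = []
    # Each sample's dwell lasts until the next sample's timestamp.
    for (t0, aoi), (t1, _) in zip(samples, samples[1:]):
        dwell[aoi] = dwell.get(aoi, 0.0) + (t1 - t0)
        if not path or path[-1] != aoi:
            path.append(aoi)  # record only transitions between areas
    return dwell, path

samples = [(0.0, "A"), (0.5, "A"), (1.0, "B"), (1.5, "A"), (2.0, "A")]
dwell, path = analyze_gaze(samples)
```

A trainer could compare a novice's transition path and dwell distribution against an expert's systematic feed-forward pattern.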

Training for Lathe operations A system aimed more strongly at the training of non-visual feedback, i.e. haptics and sound, is the Lathe Simulator of Joetsu University of Education in Japan.

As mastery of sensory information (i.e. coordination of sight, force and hearing) is essential for avoiding mistakes, which could lead to damage to the machine or even injuries, safe training for industrial high schools or vocational training schools is necessary.

The simulator consists of modified lathe handles, located in the same position as on the real machine, which provide feedback via force. Sensors determine the actual state of the machine and display the workpiece accordingly on a screen, together with correct (cutting) sounds. For the best training effect, acoustic and visual signs forewarn the user before he or she can make lathe- or object-destroying mistakes. [Li et al. 2002]

2.5 Training in Sports

The area of sports provides great opportunities for the development of training as well as entertainment products. Representatively, we present two papers about Virtual Reality environments in handball and martial arts.

Handball Video training methods in handball conventionally rely on playing a captured video on a big screen. The video is stopped at the right position (e.g. in a throwing motion, when the ball is released). The trainee, positioned in front of the screen, is asked which action to take now.

Virtual Reality offers interesting possibilities, i.e. the ability to react directly to a throw, and to motion capture, replay and analyze this motion afterwards. Different training situations can be generated by modifying previously recorded motions of the attacker.

Former models, which already used methods to modify the trajectories of the ball geometrically, did not take biomechanical parameters into account.

The main question in video training is whether the goalkeeper reacts naturally. While considering visual cues provided by the virtual opponent, the trainee should correctly anticipate and take the same actions in virtual situations as in real ones.

The animation model's skeleton used in the experiment was simplified to 7 joints, providing 37 degrees-of-freedom. (Figure 9) Data was acquired by motion capturing handball players using the Vicon 370 motion capture system, which utilizes 7 infrared cameras at 60 Hz. The optical system tracked 50 standardized throws with the highest possible force into 50 cm targets.

From this data, for each throw a set of parameters for the recreation of the animation (e.g. anthropometrical data, wrist position, durations, trajectories) was calculated, normalized and stored.
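The geometric trajectory modification mentioned earlier (changing where a recorded throw lands) can be illustrated with simple ballistics. The function names and values below are assumptions for illustration; the paper's parameterization additionally covers biomechanical quantities.

```python
G = 9.81  # gravitational acceleration, m/s^2

def velocity_for_target(release, target, flight_time):
    """Release velocity (vx, vy, vz) so that a ball thrown from
    `release` reaches `target` after `flight_time` seconds (z is up)."""
    vx = (target[0] - release[0]) / flight_time
    vy = (target[1] - release[1]) / flight_time
    # Vertical: z = z0 + vz*t - 0.5*G*t^2, solved for vz.
    vz = (target[2] - release[2] + 0.5 * G * flight_time ** 2) / flight_time
    return vx, vy, vz

def position_at(release, v, t):
    """Ball position t seconds after release under gravity."""
    return (release[0] + v[0] * t,
            release[1] + v[1] * t,
            release[2] + v[2] * t - 0.5 * G * t * t)
```

Keeping the recorded release point and flight time while recomputing the velocity for a new target is one way to vary the training situation without re-capturing the thrower's motion.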

The goalkeeper is placed into a non-immersive 3D environment that consists of a large, cylindrical screen with a size of 9 x 2.4 meters and a field-of-view of 135 degrees. Three synchronized video projectors (Barco 1208S) and stereo shutter glasses engender a virtual sports hall. The center-of-view is placed at the average height of the goalkeepers' eyes, with vision parameters adjusted to recreate a realistic effect of a sports field.

Figure 9: Handball Virtual Reality setup and simplified skeleton used to generate the motion of the virtual attacker.

The system was tested comparing three kinds of throws: throwing actions that were replayed directly from the motion capture; a parameterized version that tried to make the resulting motion equal to the recorded sequence; and completely calculated actions that used different parameters for the animation.

The aim of the survey was to test whether simulated virtual opponents in a game generate the same response from a human goalkeeper. It proved that such a simplified model and the virtual environment are realistic enough to evoke a natural reaction. [Bideau et al. 2004]

Martial Arts An intriguing combination of entertainment and sports training is presented by [Hämäläinen et al. 2005].

To analyze sports movements, especially in martial arts and other acrobatic sports, traditionally techniques like mirrors and video analysis are used. This has several disadvantages, e.g. when spinning in mid-air athletes cannot see themselves, or the reduced learning effect due to the long delay between a sports session and analyzing its video.

To compensate, video systems called "Interactive Video Mirrors" for real-time motion analysis were introduced. The benefits of mirrors (i.e. seeing oneself directly) and video (i.e. using effects like slow motion) are combined when using screens to display the video directly, or slightly delayed, on site.

The setup of the training game consists of two opposing video walls, to see oneself even when rotating. They can be tilted to provide a better view for the audience. A 2.8 GHz Pentium 4 laptop computer was sufficient to process the images acquired by a USB web camera positioned in front of the 5 x 1 meter cushioned field. (Figure 10)

Since external sensors in sports are uncomfortable to wear and prone to failure, image processing techniques, i.e. background subtraction and optical flow, were used to calculate shape, position and movement. (Figure 11) Rather than the first-person view typically used in multiple-screen CAVE setups, the screens provided a pseudo-3D Augmented Reality third-person view. Slow-motion playback in training allows inspecting the technique more closely and spotting errors in pose and motion; exaggerated motions (e.g. jumping higher or hitting faster) were used for playing fun, as in typical martial-arts games or movies.

Figure 10: Augmented Reality Kung-Fu Game on stage in a theater; the player faces one of the projected screens showing the familiar form of traditional martial arts games.

For the game, the user's image, treated as a 2D plane, was incorporated into a three-dimensional world. Optical flow of contour points was calculated to measure the force and accuracy of hits and punches; sound effects were localized correctly in world coordinates using multiple speakers.
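The two image-processing steps just named, background subtraction and motion over contour points, can be sketched as follows. This is a pure-Python toy version; the threshold and data formats are assumptions, not the authors' implementation.

```python
def subtract_background(frame, background, threshold=30):
    """Per-pixel foreground mask for two grayscale images
    given as lists of rows of intensity values."""
    return [[abs(p - b) > threshold for p, b in zip(row, brow)]
            for row, brow in zip(frame, background)]

def motion_magnitude(points_prev, points_cur):
    """Mean displacement of matched contour points between two
    frames, usable as a crude proxy for the force of a hit."""
    if not points_prev:
        return 0.0
    total = sum(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                for (x0, y0), (x1, y1) in zip(points_prev, points_cur))
    return total / len(points_prev)
```

A real system would use a proper optical-flow estimator over the silhouette contour, but the principle is the same: faster contour motion registers as a stronger strike.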

Disadvantages of the pseudo-3D setup included the inability to move sideways, e.g. to dodge attacks to the sides, which is important in most martial arts to effectively start a counterattack.

Figure 11: Image processing techniques to calculate shape and motion of the user's outline and the center of mass.

3 Museums

Museums increasingly try to incorporate new technology into their programs, to be more appealing to visitors and to present artifacts and contents in new ways previously not possible.

Multimedia-enhanced exhibitions nowadays already contain additional types of presentation, e.g. sound, video and interactive three-dimensional graphics. One aim is to integrate this ancillary information seamlessly and appropriately into the context of the exhibition.

3.1 Virtual Reality

Virtual Reality, represented already in "Virtual Museum Systems", is likely to play a great part in this development.

Currently, Virtual Museum implementations are simply a matter of financing: the range starts at simple multimedia presentations and ends in fully immersive projection-based CAVE systems. As a result, immersive CAVEs are currently installed in only a couple of museums worldwide, one being the "Cave at the Foundation of the Hellenic World" in Athens, presenting architecture and culture of ancient Greece. (see Figure 12)

Figure 12: Images from the fully immersive CAVE environment in the "Hellenic World" Museum in Athens.

Low-cost systems can be viewed on inexpensive PCs or accessed through the internet and present interactively adjustable views of exhibits, represented as 3D objects or static stereo images.

Motivations for VR in museums, besides providing more attractive and demonstrative information to the public, include simple reasons such as the exhibits or the environment being remote, unsafe or no longer existing, for example when presenting ancient sites. For most museums, another motive to develop VR systems is the lack of space: if otherwise only a small part of the objects could be exhibited, virtual objects provide an alternative; or objects may be too valuable or fragile to be exhibited and explored.

When Virtual Reality technology is presented to the public, great demands are made on the equipment. Requirements include withstanding hard everyday usage while still being attractive to users.

Basic usability guidelines to guarantee easy understanding of the system cover tasks such as navigating in a virtual museum, gaining knowledge about the exhibits, and interacting (i.e. moving, rotating, exploring interiors) with these objects.

Real exhibitions in a virtual museum [Lepouras et al. 2004] discussed in their paper the possibilities and requirements for navigation in a completely virtual museum. The structural design of the museum was researched, considering parameters like symmetry, orientation and avoidance of disorientation, and the use of color and textures for facilitating the recognition effect. The simplest way of navigation was evaluated to be the user pointing in the direction of travel, where constraints of reality (i.e. the prevention of walking through walls and falling through the floor, and the restraint of only rotating around one's own axis) are met. As visitors usually come from all social levels and backgrounds, and previous computer experience cannot be assumed, user interaction has to be user-friendly, consistent, intuitive and very easy to learn. Tested examples of such interaction tools, which also can withstand hard, everyday usage, were a simple 2D mouse, a joystick and a 3D Magellan mouse, which all proved to be more or less suited depending on the vast variety of users. To reduce the chance of disorientation in the virtual world, restricting the degrees-of-freedom of movement and rotation was proposed for all the tools.

These issues also included the transition between the two modes, navigation and interaction, which was solved by changing the mouse pointer accordingly in the vicinity of objects, to indicate the possibility of manipulation.

Interaction in the Museum of Color. A paper more focused on interaction with the exhibits is by [Spalter et al. 2000], covering interaction in a museum and education project called Museum of Color at Brown University.

The museum incorporates an Immersive Virtual Reality (IVR) system using head-mounted displays. For navigational issues, again an architectural approach proved to be of value.

The user is guided through different learning sequences, located in different rooms. Each floor uses the same layout; higher floors, reachable by an elevator, indicate the increasing difficulty of the concepts used. Experiments on the first floor let the user play with parameters such as hue, saturation and value; on the second floor the visitor explored concepts of groups of colors; the last floor contained visualizations of color spaces. For convenience, nonlinear navigation was enabled through a 2D map of all floors.
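The first floor's hue/saturation/value experiments correspond to the standard HSV color model. As a generic illustration (not code from the exhibit), Python's standard library shows how sweeping the hue alone moves through the color wheel:

```python
import colorsys

# Sweep the hue while keeping saturation and value fixed: the same
# kind of "experiment" a visitor performs on the first floor.
for hue in (0.0, 1/3, 2/3):            # red, green and blue thirds of the wheel
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    print(f"hue={hue:.2f} -> RGB ({r:.1f}, {g:.1f}, {b:.1f})")
```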

Contrary to expectations, IVR was found to be difficult to use, and a clear and transparent interface design very challenging to create. One technique, using metaphors for convenience, had to be adapted to facilitate interaction with visualizations of abstract concepts. Using data gloves, interaction methods such as pinching and pulling to deform a surface prevailed, as did the ability to actually get inside a color space, push planes around, and directly examine the consequences of one's actions from different points of view.

Visiting Virtual Reality Museum Exhibits. To accelerate the creation of virtual exhibits and to make it possible to propagate exhibits or parts of exhibits to a broader audience through internet applications, [Hemminger et al. 2004] developed a system to transform real museum exhibits into Virtual Reality. Digital recording, using automatic algorithms that combine multiple scans from 3D laser range finders with photo cameras, was used to acquire complete exhibition rooms in photographic quality and three-dimensionally with millimeter precision (see Figure 13).

Besides, the system was found to be useful for archiving and reenacting museums' collections.

3.2 Augmented Reality

Different EU sponsored projects facilitate research on combining real artifacts and virtual objects. However, projects including Augmented Reality are still rare.

The Visitor as Virtual Archaeologist. One of the first case studies simulated archaeological activity using augmented technology to provide a better understanding of the tasks and difficulties of actual archaeology work, including collaboration and division of labor while probing sites, to unearth objects and determine their function and historical significance [Hall et al. 2001]. Bringing this (multimedia-based) experience to the visitors addresses goals such as teaching exploration and facilitating collaboration, discussion among participants, curiosity and excitement. Currently existing simulations usually target only children and are often locally dislodged from the main collection in a museum. The need to integrate the experience into the actual exhibit is emphasized.

Figure 13: Wireframe screen-shot of a 3D scene, generated automatically using data from multiple scans of a laser range finder.

First experiments aimed at detecting and capturing virtual history artifacts. Using GPS-enabled PDAs connected through a wireless network, visitors were sent on a historical quest on a sports field.

Acting and beeping like a metal detector when approaching places of interest, the PDA finally showed the object when in close proximity of an artifact. When the group of visitors, divided to search different sections of the field, had found all hidden shards of artifacts, the second part of the experiment started. The Virtual Time Machine, a mounted periscope device (see Figure 14), showed the artifacts in place and in use.

Augmenting the Science Centre and Museum Experience. The "AR Visor" has proven very suitable for the stressing and continuous use in museums. It is a handheld device consisting of a camera and a display screen mounted on a handle (depicted in Figure 15). The camera images are processed using 3D tracking of markers utilizing the ARToolKit; actual rotation and position values are considered for superimposing three-dimensional representations of objects. The resulting stereoscopic image is shown directly behind the camera, giving the impression of looking through the visor into the virtual world.

Figure 14: Using the Virtual Time Machine, visitors can see virtually into the past.

Actual projects include the BlackMagic Kiosk: a book attached to a turntable, to avoid displacement and damage while still allowing it to be rotated and seen from all directions. It displays information on one side, as well as virtual animated models that stick out of the next page. S.O.L.A.R System: nine marker cards, representing the planets, are positioned on a panel showing their orbits as lines. Visitors can take them into their hands and explore them; actual satellite images provide very high detail when the cards are looked at closely.

In The Augmented Reality Volcano Kiosk a more sophisticated way to present information was chosen. Modelling and animation of volcano formation, eruptions and the movement of tectonic plates were outsourced to a professional 3D studio. The resulting data, in SDL model format, includes high-level photorealistic textures, per-pixel shading and other special effects (e.g. particle effects for eruptions, associated with sound) (Figure 15).

The speed of the animation, as well as other parameters, can be controlled by a mechanical slider.

The book metaphor is ideal for museum visitors. It provides information piecewise, so that each unit of information can be observed independently.

Animation of virtual objects, their response to interaction, and fewer financial, physical or practical constraints are the reasons for utilizing AR as a powerful education technology. The main challenge, however, is to create sophisticated models, by either manually creating objects, visualizing data, or digitizing real artifacts, if available.

Figure 15: Using an AR Visor, the animated model of a volcano is augmented on a book.

Besides the book paradigm, more information (i.e. how to hold the handle, where to point the camera and where to look into) has to be offered to the larger part of AR users, who do not have computer experience. This, as well as efforts to persuade visitors to use the technology, is accomplished by visual cues (e.g. bicycle handles, two eye-holes, arrows), written instructions, and a screen-saver that starts if no motion of the AR visor is detected.

Building Virtual and Augmented Reality Museum Exhibitions, by [Wojciechowski et al. 2004].

As mentioned before, European Union projects aim to develop VR for museums, expediting interest in culture for European benefit. One of these, the SCULPTEUR project, deals with building a semantic and content-based database. With this solution, museums can create and manipulate virtual representations of museum objects and store them together with additional information and other multimedia data.

As the acquisition of 3D data is the most costly and laborious step, multi-stereo and silhouette scanning techniques are presented for the creation of objects. High-detail 3D scanning is predicted to become affordable for museums in the coming years. An intuitive human-computer interface also has to be developed to be accepted by the public.

The European Union Fifth Framework Programme IST project ARCO (Augmented Representation of Cultural Objects) utilizes see-through head-mounted displays. The system includes content production, management and visualization. Objects in the database, enriched by museum information and content, can be put together dynamically into new virtual galleries using X-VRML and X3D. By defining a hierarchical structure, it becomes faster to recreate new exhibitions, viewable in a web browser in 2D, or in AR in the museum, superimposed on markers positioned near real artifacts.

An XML syntax is proposed for building Learning Scenarios, i.e. quizzes and interaction (shown in Figure 16).

Figure 16: Using XML and artifacts out of a database, quizzes can be created easily.
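As a rough sketch of what such a scenario description might look like, the fragment below is purely illustrative; the element and attribute names are invented and do not reproduce the actual ARCO schema:

```xml
<!-- Hypothetical quiz scenario referencing artifacts from the database -->
<learningScenario id="greek-pottery-quiz">
  <question text="Which vessel was used to store olive oil?">
    <choice artifactRef="amphora-042" correct="true"/>
    <choice artifactRef="krater-017" correct="false"/>
  </question>
  <feedback onCorrect="show rotating 3D model"
            onWrong="highlight the correct artifact"/>
</learningScenario>
```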

4 Entertainment

Mixed Reality Entertainment is able to offer amazing virtual experiences as a new form of entertainment for its guests, in a more emotional way than traditional game-style experiences. Representatively, we present a couple of applications, described in detail in the following topics.

4.1 Disney's Research

The Walt Disney Company was one of the first companies to use the possibilities of the new technology for its theme parks. In 1993 the company built its own VR Studio to explore the potential of the technology for new theme park attractions.

The first DisneyQuest facility, a Disney indoor interactive theme park, opened 1998 in Orlando, Florida. The VR Studio was responsible for the development of three main DisneyQuest attractions: "Aladdin's Magic Carpet Ride", "Hercules in the Underworld", and "Pirates of the Caribbean: Battle for Buccaneer Gold". [Mine 2003]

First established as part of Walt Disney Imagineering Research and Development, the VR Studio later changed to be part of the Walt Disney Internet Group, which drove the creativity and experience of the Disney theme parks into the online 3D world. In 1998 the Studio started to explore the possibilities of online gaming, with the result "Toontown Online", a massively multiplayer online PC game. In 2002, Disney VR decided to release the engine used, Panda3D, as open source. [Mine et al. 2003]

Figure 17: Physical setup of the Aladdin Magic Carpet Ride

Aladdin's Magic Carpet Ride. Aladdin is an HMD-based four-person virtual reality magic carpet race through Agrabah, the world of Aladdin. The actual ride is the third version of the Aladdin attraction.

The first one, "Aladdin Adventure!", was an experimental prototype in 1994. It was used to observe and survey the reactions of guests.

The guest sits in a motorcycle-style seat and controls the simulation with a specially designed steering mechanism representing the front of the magic carpet. The control of the magic carpet is very intuitive: the guest pitches the carpet up and down to tilt the virtual carpet, pushes to accelerate, and turns it left or right to control the yaw of the carpet. The Carpet Ride uses a CRT-based HMD with 640x480 resolution and an 80 degree horizontal field of view. Microphones and stereo headphones are also included in the HMD, to communicate with other guests. The complete physical setup is shown in Figure 17.

More than 45,000 guests experienced the Aladdin Adventure exhibit over a period of 14 months. From the results of the observation, the VR Studio concluded that a specific goal and a story-line were very necessary for the guests; some of the guests also had problems controlling the carpet. But the experiment itself was a great success: most of the guests were able to sustain the illusion that they were in another place. Some of the guests were so overwhelmed by the experience that they had trouble answering the questions. Finally, most of the guests did not study the details; they liked to explore new spaces. These results led to different modifications and a new version of the ride. [Pausch et al. 1996]

In 1996 the second version of the Aladdin attraction, "Aladdin's VR Adventure", was deployed. It contains a narrative storyline and a large number of reactive characters, and it expanded the number of environments. [Mine 2003]

Figure 18: Interior view of the Hercules immersive projection theater

Hercules in the Underworld. Hercules was a four-player 3D adventure in immersive projection theaters with stereo glasses. These multi-screen immersive theaters were hexagonally shaped and had five screens; the entrance was on the sixth side. To maximize the angular resolution, the VR Center turned the projectors on their side. Due to technical limitations at the time of the exhibition, only the three front screens were in stereo; the other two just gave the guests a sense of immersion. For ease of understanding, the guests used conventional joysticks to control their virtual characters instead of unconventional devices like a tracked wand. The big challenge was the single viewport, which was shared between the four characters. The result was a single wide path laid out through the Hercules environments, on which the guests were free to roam around.

Hercules was later replaced by Pirates of the Caribbean. [Mine 2003]

Pirates of the Caribbean: Battle for Buccaneer Gold. Pirates, like Hercules, uses an immersive projection theater and stereo glasses. It is an interactive theme park ride based on the classic Pirates of the Caribbean attraction at Disneyland. Through technical innovations, all screens are stereoscopic. After a queue line, where the background story is explained, the guests board a ship-themed motion platform and enter a 3D world of plundered towns, fortress islands and erupting volcanoes. The ship-themed platform has a real steering wheel and six physical cannons, which help the guests immerse themselves in the experience (shown in Figure 19). [Mine 2003]

Figure 19: Pirates ship-themed motion platform

One guest steers the boat; the other three guests defeat virtual enemy pirate ships, forts, sea monsters and ghostly skeletons using the six cannons.

Through the completely free control of the virtual ship by the guests, Disney VR created three large eye-catching areas (with the same function as the big castle at Disneyland):

- At the fortress island, soldiers of the fort attack the guests with fireballs. Also, an enormously valuable gold ship sails away, guarded by navy ships.

- At the volcano island, the guests can fight other pirate ships.

- In the burning town, the guests fight other pirate ships while sailing through a narrow canal with buildings.

To guarantee an exciting journey from one island to another, the VR Studio designed ships which sneak-attack the guests' ship, and a gigantic sea serpent rises up from the deep at a certain time. The Pirates ride also has a climax ending: a battle against Jolly Roger's ghost ship and dozens of flying skeletons. This battle can end in two ways:

- The guests defeat Jolly Roger and enter a victory lagoon, where they can shoot fireworks from their cannons.

- Jolly Roger defeats the guests; the boat explodes and sinks to the bottom of the ocean, where sharks swim over the wreckage.

The losing ending is the more exciting one, to compensate for the fact that the guests lost the game. [Schell and Shochet 2001]

"Pirates of the Caribbean: Battle for Buccaneer Gold" won the award for Outstanding Attraction from the Themed Entertainment Association in 2001.

Toontown Online. Toontown is a 3D massively multiplayer online PC game (MMPOG) for children. The goal was to divide the time of the players into three parts: battle, minigames, and social activities. So the developers created a story about a menace of businesslike robots named Cogs. The Cogs try to turn the colorful world of Toontown into a black-and-white metropolis of skyscrapers and office buildings, and only the Toons can stop them. To fight the Cogs, the Toons use traditional-cartoon practical jokes and gags. To buy these gags the Toons need jellybeans, and the only way to earn them is to play multiplayer minigames. For these minigames the Toons need other Toons, which can be found on the playground. This system, based on cooperation and teamwork, is one of the main differences from other MMPOGs.

Besides the battles, Toontown has some other activities, such as interactive fishing or an estate and pet option. To keep the children safe, the developers created a Toon name generator and a menu-based phrase chat system named SpeedChat. To create the unique Toon look, known from the Disney park attractions, and to have full control over the system, the VR Studio developed the Panda3D system. [Mine et al. 2003]

Figure 20: A typical Toontown scene using Panda3D: Toons fight against Cogs using gags.

Panda3D. Disney VR used a system called Disney's World Designer to create the DisneyQuest Virtual Reality attractions between 1997 and 2000. For the Toontown Online game, the VR Studio needed a portable and efficient 3D graphics engine. The result was a system called Panda, which stands for Platform Agnostic Networked Display Architecture, a new 3D rendering and storytelling system based on the experience gained with the DisneyQuest attractions. The engine is written in C++, has an interpreted Python-based scripting layer, and a powerful suite of world-building tools called DIRECT (Disney's Interactive Real-time Environment Construction Tools). [Mine et al. 2003] In 2002, Disney VR decided to make the engine open source, so they could more easily work with universities on Virtual Reality research projects. [Disney VR Center 2002]

4.2 AR2Hockey

AR2Hockey was developed as an example of a system in which two or more people can share a virtual space within a real space (collaborative augmented reality). In this system, two users play an air hockey game by moving a real object that represents the user's paddle in the virtual world. AR is used to enhance the physical space with computer-generated virtual space. Each user has an optical see-through HMD with a magnetic sensor attached, to measure the viewpoint of the user. A small camera, positioned near the right eye, is used to compensate for the error of the magnetic sensor by detecting markers in the physical space. [Ohshima et al. 1998]

4.3 Magic Book

The Magic Book was one of the first AR entertainment applications. It was developed as a collaboration between the Human Interface Technology Laboratory (HIT Lab) at the University of Washington, researchers at Hiroshima City University and ATR MIC Labs in Japan, and others outside the HIT Lab in Seattle. [Billinghurst 2000]

When readers first look at the Magic Book, it looks like a normal children's storybook with colorful pages and simple text. If they are using HMDs, the pictures miraculously pop off the page and come to life as three-dimensional animated virtual objects, for each reader from his own perspective. By touching a switch on the headset, the readers can also fly into the immersive VR world and freely explore the scene, represented as an avatar (see Figure 21). Several readers can also gather around a single Magic Book and experience it together. Vision tracking techniques are used to track the users' movements and calculate the head position and orientation. The vision tracking techniques used, named ARToolKit, are distributed freely for non-commercial use under the GPL license. [Kato 2002]

Figure 21: Avatar in a normal (exocentric) AR view and in an immersive scene (small picture)

For the ARToolKit tracking routines, a camera attached to the HMD captures a video of the real world and sends it to the computer. The computer software converts the image to a binary black-and-white image and searches each frame for any markers. These markers are black signs on a white background, surrounded by a thick black square. If a square is found, the software uses geometry formulas to calculate the position of the camera relative to the black square. After identification of the marker, the position of the camera is known, and the virtual objects can be rendered from this camera position. The output is shown back in the HMD, on top of the image of the real world. [Billinghurst et al. 2000]
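The per-frame pipeline described above can be sketched in miniature. This is a toy illustration on a synthetic image; the real ARToolKit additionally performs contour fitting, corner extraction and a full 6-DOF pose estimate:

```python
def find_marker(frame, threshold=128):
    """Binarize a grayscale frame (list of rows of 0..255 values)
    and return the bounding box of the dark marker square, or None."""
    dark = [(x, y) for y, row in enumerate(frame)
                   for x, v in enumerate(row) if v < threshold]  # binarization
    if not dark:
        return None
    xs = [p[0] for p in dark]; ys = [p[1] for p in dark]
    return (min(xs), min(ys), max(xs), max(ys))  # marker extent in the image

def camera_offset(bbox, frame_w, frame_h):
    """Toy 'geometry formula': how far the marker center lies from the
    image center, a stand-in for the real camera-pose computation."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    return (cx - frame_w / 2, cy - frame_h / 2)

# Synthetic 8x8 frame: white (255) with a dark square near the top left.
frame = [[255] * 8 for _ in range(8)]
for y in range(1, 4):
    for x in range(1, 4):
        frame[y][x] = 0
bbox = find_marker(frame)
print(bbox, camera_offset(bbox, 8, 8))  # → (1, 1, 3, 3) (-2.0, -2.0)
```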

The Magic Book won the 2001 Discover Award for best entertainment application and was one of the most successful projects of the Washington HIT Lab.

4.4 AR Quake

AR Quake is an augmented reality first-person shooter based on the popular game Quake from ID Software. It overlays computer-generated information from the Quake world onto the real world. The application uses a combination of a compass, GPS (satellite) tracking and an extended version of the pattern recognition software ARToolKit to track the movements of the player, instead of using a mouse or a keyboard.

The player carries a wearable computer system on a backpack, which computes the pattern recognition tracking and creates the virtual image. Connected to the wearable computer is an optical see-through HMD with a small video camera attached. A haptic feedback gun is used as a highly intuitive trigger input that most users are familiar with. [Piekarski and Thomas 2002]

Figure 22: Combining the Quake environment and the real world

To merge the virtual and real worlds, the developers had to create a level based on the Mawson Lakes campus of the University of South Australia. Figure 22 shows the process of the level creation. First, the buildings are textured with a grid pattern as a testing mode to debug the level. Then monsters and items are added to the environment. When the virtual level matches the real world perfectly, the last step is to remove the texturing of the buildings, ground and sky to make them transparent. The tracking is a combination of different methods, providing continuous indoor and outdoor tracking. There are three main cases in which the tracking works differently:

- Outdoors, far from buildings: In this case the system uses the GPS positional data to get the user coordinates. The inaccuracies are acceptable and negligible due to the large distance (>50 m) to the next buildings.

- Outdoors, near buildings: If the user is less than 50 m from the next building, the system uses GPS and the vision-based tracking system ARToolKit. Fiducial markers are mounted on every building, and the exact coordinates of every marker are known to the system. These markers correct the inaccuracy of the GPS.

- Indoors: Inside buildings, the system uses the vision-based tracking system ARToolKit. At first, fiducial markers were placed on the walls, as outside, to provide tracking. In a second approach, the markers were placed on the ceiling instead of the walls. This change was necessary because it was important that, no matter where the user looks, at least one pattern must be visible to provide tracking.
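The three cases amount to a simple mode-selection rule. A hedged sketch follows; the threshold, names and return labels are illustrative and not taken from the ARQuake implementation:

```python
def select_tracking(indoors, dist_to_building_m, marker_visible):
    """Pick a tracking source following the three ARQuake cases
    (threshold and return labels are illustrative)."""
    if indoors:
        # Indoors, only vision-based marker tracking is available.
        return "artoolkit" if marker_visible else "lost"
    if dist_to_building_m > 50:
        return "gps"                     # GPS error acceptable far from buildings
    # Near buildings: GPS corrected by fiducial markers when one is seen.
    return "gps+artoolkit" if marker_visible else "gps"

print(select_tracking(False, 120, False))  # → gps
print(select_tracking(False, 20, True))    # → gps+artoolkit
print(select_tracking(True, 0, True))      # → artoolkit
```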

Also important was the change of colors. The original Quake is made for normal computer screens, so the colors are dark and gloomy. On a see-through HMD these colors appear translucent. To avoid this unintentional effect, all the colors were changed into brighter colors.
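Brightening a palette for an additive see-through display can be sketched as blending each color toward white. This is a generic illustration; the actual ARQuake color changes were made in the game's textures and levels:

```python
def brighten(rgb, amount=0.5):
    """Blend an (r, g, b) color toward white; on optical see-through
    displays dark colors render as near-transparent, so raising
    brightness keeps the geometry visible."""
    return tuple(round(c + (255 - c) * amount) for c in rgb)

print(brighten((40, 30, 60)))   # → (148, 142, 158)
```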

Since ARQuake uses the standard Quake engine, it is possible to use its multiplayer features. With a normal desktop computer, two players can fight against each other using a WaveLAN. The developers also added a pointing device to aid communication, since Quake has nothing similar to a human hand. [Thomas et al. 2000]

4.5 AR Worms

In Worms, a popular turn-based computer game from Team17, teams of worms attack each other with artillery-style shooting and platform-style movement. The last team standing is the winner.

Researchers at the University of Canterbury created an Augmented Reality version of the game. In its actual form it is the second version of the game. The first one only used the pattern recognition software ARToolKit to track the movements of the player, which led to different problems with the marker detection.

Figure 23: A typical BattleTech Center

The second version, named "Hybrid AR Worms", uses a hybrid tracking system, based primarily on a magnetic sensor attached to the player's HMD to detect head orientation, with ARToolKit and fiducial markers on the playing surface to compensate for the magnetic tracking errors. It is a further development of the techniques which were used in the AR Hockey implementation [Ohshima et al. 1998].

Besides the AR mode, it also features an immersive virtual reality mode, which uses a simple transition in which the real world fades and the camera descends to the level of the worm, as in the Magic Book implementation [Billinghurst 2000].

The game also has multiplayer features: two players can either play on a single computer or in a network. Also, an unlimited number of observers are able to view the game from any perspective. The terrain is based on a height map; a particle engine is used for rocket trails and explosions, and a physics engine calculates wind, gravity and air friction. A three-dimensional sound engine plays sounds at their actual physical location. Figure 24 shows different scenes of the game and the original Worms game. [Nilsen et al. 2004]

Figure 24: Different scenes of Hybrid AR Worms: (top left) the markers on the game board, (bottom left) the game in the immersive virtual reality mode, (top right) the game in the augmented reality mode, (bottom right) the original Worms from Team17

4.6 Location-based Entertainment

One of the highest-profile uses of Virtual Reality in the public sector is Location-based Entertainment. Virtual Reality, coupled with motion platforms, provides a new sense of realism, taking passengers where they have never been before, using less space and giving operators greater flexibility than their roller coaster predecessors.

An example of Location-based Entertainment is Virtual World Entertainment's BattleTech. A BattleTech Center transports visitors into a futuristic war game. The visitors are placed into the year 3025, in control of BattleMech robots at war in a computer-generated terrain with computer-generated weather conditions. A typical BattleTech Center is shown in Figure 23. [Entertainment 2005]

4.7 Kidsroom

The KidsRoom is a fully automated, interactive narrative playspace for children. It is designed to stimulate the imagination of children in the spirit of Peter Pan and transport them into a fantasy world.

The KidsRoom theatrically re-creates a child's bedroom. Two walls of the KidsRoom resemble walls in a real children's room; the other two are large back-projected video screens. Four speakers project sound effects and music from each side. Theatrical lighting is installed for lighting effects. Four colored rugs are placed on the floor and used as reference points.

Three cameras overlooking the KidsRoom are used for the computer vision analysis of the scene. The first camera gives the top view and is used for motion detection and tracking. Two other cameras are used to detect body movements on the green and red rugs (which are in front of the two video screens). Additionally, a fourth camera is installed to document the movements of the children and for the spectators outside of the KidsRoom (shown in Figure 25). Finally, a microphone is used to detect the loudness of shouts.

Figure 25: Look from camera 4 into the KidsRoom

Six computers control the technical equipment and carry the children through an interactive, imaginative adventure. The adventure starts in the bedroom and progresses through three other worlds. The children are taken into a forest world, a river world and a monsters world. Throughout the story, children interact with objects in the room, with one another, and with virtual creatures projected onto the walls. The whole adventure takes 10-12 minutes. [Bobick et al. 1999]

The KidsRoom inspired a new interactive playspace for children that was constructed by Media Lab spinoff NearLife and is exhibited in the Millennium Dome, London. It is named Kidsroom 2 and received a prestigious Interactive Design Award from I.D. Magazine. [Nearlife 2000]

5 Mixed Reality Interfaces

Mixed Reality interfaces enable people to interact with the real world in a more natural way; they provide a seamless transition between the real world and the augmented world. Representatively, we present four examples of interfaces.

5.1 QuakeRunner

QuakeRunner is a Mixed Reality interface and game based on Quake 3 Arena from ID Software. Instead of using a mouse or a keyboard, the player runs and jumps in reality inside a small circle. The idea behind QuakeRunner is, firstly, a better identification by the player due to his improved involvement in the game, and secondly, the addition of a physical component to the game: the strain of the movement is directly felt by the player.

Currently two different game modes are implemented:

- Single player: the player must collect items.

- Multiplayer: two Runners play against each other by capturing the flag of the other Runner.

The player enters the game by stepping into the initialization circle painted on the floor. A projector on the ceiling displays the game in front of the player on a wall. The system uses a magnetic tracker, attached to the player, to detect the actions of the player. Using the returned information, the position and orientation of the tracker, heuristics recognize the actions and perform the necessary updates of the game state. To create a successful and enjoyable game, the physics engine must work correctly; otherwise the movements in reality do not match the virtual ones. [Faust 2003]
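Such heuristics can be sketched as threshold rules on the tracker data. The thresholds and action names below are hypothetical, not taken from the QuakeRunner paper:

```python
def classify_action(height_m, speed_m_s, rest_height_m=1.1):
    """Map magnetic-tracker readings (sensor height and horizontal
    speed) to a game action via simple threshold heuristics."""
    if height_m > rest_height_m + 0.3:
        return "jump"          # sensor well above its resting height
    if speed_m_s > 2.0:
        return "run"
    if speed_m_s > 0.3:
        return "walk"
    return "idle"

print(classify_action(1.6, 0.1))  # → jump
print(classify_action(1.1, 2.5))  # → run
print(classify_action(1.1, 0.0))  # → idle
```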

Figure 26: Virtual Showcase with a skull of a Deinonychus

5.2 Virtual Showcase

The Virtual Showcase is a high-resolution optical see-through augmented reality display which allows three-dimensional graphical augmentation of real objects placed inside. Built with half-silvered mirrors, it looks similar to a standard museum showcase. Using the half-silvered mirrors, all sides of the showcase can simultaneously overlay a virtual image over the encased exhibit (see Figure 26). By using stereo imagery and tracked shutter glasses, a truly three-dimensional image can be generated.

The Showcase is driven by normal PCs with conventional 3D graphics cards. Video monitors act as display devices and are reflected by the mirrors of the showcase. Current Virtual Showcases have four video monitors, one on each side of the showcase. The shutter glasses are tracked with visual tracking methods, using reflecting markers attached to the glasses. To enable stereo vision, the shutter glasses are synchronized with the graphics card, triggered wirelessly over infrared.

A Virtual Showcase has many advantages in comparison to traditional AR display systems:

- Realistic occlusion effects between the virtual object and the exhibit are possible. To create them, per-pixel illumination of the video projectors is used. This solution to one of the main problems of AR is one of the biggest advantages of the Virtual Showcase.

- Due to the use of spatial displays, it provides a high and scalable resolution, unlike HMDs.

- It has better support for eye accommodation.

- The environment inside and surrounding the showcase is better controllable.

- The calibration of the Showcase is easier, due to its independence of the viewing person.

Another big advantage is the multiuser ability: multiple users can interact simultaneously with virtual content displayed in a single showcase, stimulating communication and encouraging joint discovery of the displayed items. The Showcase has a limited user capacity; currently only four users can view simultaneously (one video monitor for each user).

A Virtual Showcase can tell an interactive, three-dimensional story about the objects on display, within a particular context, giving information in an engaging way. It also allows presenting electronic copies of exhibits that are not physically present, as well as content on demand. [Bimber et al. 2003]

The story creation is split into five major components:

- Content generation: creating the necessary components by using 3D modelling techniques or laser scanning technology

- Authoring: describing how the components play together

- Presentation: configuration used to achieve a high degree of realism

- Interaction: techniques and devices for interactivity

- Content management: ensuring a certain level of reusability

Figure 27: Different visualizations with the Virtual Showcase: (top left) the physical skull of Deinonychus inside the display, (bottom left) scanned skull geometry, (top right) the paranasal air sinuses and the bony eye rings integrated into the skull, (bottom right) the skin superimposed on the skull.

Augmented Paleontology. Augmented Paleontology (AP) is a good example to demonstrate the features of a Virtual Showcase. Modern paleontologists use 3D computer graphics to help reconstruct pieces of the past. Three-dimensional scanning technologies produce 3D replicas of the fossils, which can be modified and processed. Previously, the paleontologists had no possibility to present the researched data in an accurate three-dimensional way. To change this circumstance, scientists in different research laboratories developed a prototype of a Virtual Showcase for Augmented Paleontology.

The prototype brings together a fossilized skull of a Deinonychus and 3D computer graphics to fill in unpreserved details of the dinosaur. This can help people to imagine how the creatures could have looked on and under the surface (see Figure 27) [Bimber et al. 2002]. Recapitulating, the Virtual Showcase concept offers a seamless integration into museums and a transition for museums into the future of displaying digital collections.

5.3 Augurscope

The Augurscope is a tripod-mounted outdoor mixed reality interface which offers groups of users an augmented experience. The display can easily be wheeled between different locations, rotated and tilted. Using an onboard camera, the interface supports both augmented reality and augmented virtuality.

- For augmented reality, it overlays a virtual environment over an outdoor physical environment.
- For augmented virtuality, it embeds the real-time image of the outdoor physical environment into the virtual environment.
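These two modes can be sketched as a toy per-pixel blend. The grid representation and mask convention below are our own illustrative assumptions, not the Augurscope's actual rendering pipeline:

```python
def composite(real, virtual, mask, mode="ar"):
    """Per-pixel compositing sketch for the two display modes.

    real and virtual are 2-D grids of pixel values; mask is 1 where virtual
    content should appear (AR) or where the live video window is embedded (AV).
    """
    out = []
    for r_row, v_row, m_row in zip(real, virtual, mask):
        if mode == "ar":   # virtual environment overlaid on the camera image
            out.append([v if m else r for r, v, m in zip(r_row, v_row, m_row)])
        else:              # "av": live camera image embedded into the virtual scene
            out.append([r if m else v for r, v, m in zip(r_row, v_row, m_row)])
    return out

# One row, two pixels: camera value 10, virtual value 99.
ar_frame = composite([[10, 10]], [[99, 99]], [[1, 0]], mode="ar")  # → [[99, 10]]
av_frame = composite([[10, 10]], [[99, 99]], [[1, 0]], mode="av")  # → [[10, 99]]
```

A real system would of course do this on the GPU, typically by mapping the live camera feed onto a video texture rather than blending on the CPU.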

Inside the Augurscope works a normal laptop computer; outside, a GPS receiver with an electronic compass is attached (see Figure 28). When the device is moved, GPS information, rotational information and tilt information are combined with the virtual world position information, which is updated accordingly.
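As a rough illustration of how such sensor fusion might look, the following sketch converts GPS readings to local metres with a flat-earth approximation and combines them with compass and tilt readings. The origin coordinates, eye height and field names are hypothetical, not taken from the Augurscope implementation:

```python
import math

# Assumed local origin for the virtual model (values purely illustrative).
ORIGIN_LAT, ORIGIN_LON = 52.9480, -1.1880
METRES_PER_DEG_LAT = 111_320.0  # rough flat-earth approximation

def gps_to_local(lat, lon):
    """Convert GPS coordinates to metres in a local flat-earth frame."""
    x = (lon - ORIGIN_LON) * METRES_PER_DEG_LAT * math.cos(math.radians(ORIGIN_LAT))
    z = (lat - ORIGIN_LAT) * METRES_PER_DEG_LAT
    return x, z

def update_virtual_camera(lat, lon, heading_deg, tilt_deg, eye_height=1.5):
    """Fuse GPS position, compass heading and tilt into one camera pose."""
    x, z = gps_to_local(lat, lon)
    return {"position": (x, eye_height, z),  # metres in the virtual world
            "yaw": heading_deg,              # from the electronic compass
            "pitch": tilt_deg}               # from the tilt sensor

pose = update_virtual_camera(52.9485, -1.1875, heading_deg=90.0, tilt_deg=-10.0)
```

Each time the device is moved, a loop like this would re-run and hand the new pose to the renderer.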

Figure 28: The Augurscope close up.

For the first public trial as a 3D historical reconstruction display, the researchers chose a recreation of Nottingham's medieval castle. The medieval castle was destroyed in 1651 and the "Ducal Palace" was constructed over the ruins.

After the adaptation of an existing 3D model of the castle, the developers hired an actor from the castle's museum (which is located in the Ducal Palace) to record several 3D scenes involving a medieval guard avatar.

The first public deployment involved two displays. The first one was the Augurscope itself; the second was a public display located under a portable gazebo.

- The Augurscope showed an augmented reality view using its onboard camera.

- The public display showed a view of the virtual model, with its viewpoint slaved to the Augurscope, but offset so that a graphical representation of the Augurscope was visible in the foreground. This representation included an embedded live video texture taken from the Augurscope's onboard camera. This demonstrated the logical reverse of the Augurscope by showing a view of today's castle inset into the 3D model of the medieval castle.
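The slaved-but-offset viewpoint amounts to a little vector arithmetic: place the second camera behind and above the device along its forward direction, looking the same way. The offset distances below are illustrative assumptions, not values from the paper:

```python
import math

def slaved_viewpoint(device_pos, device_yaw_deg, back_off=4.0, raise_up=1.5):
    """Place the public display's camera behind and above the Augurscope,
    keeping the same viewing direction, so a graphical representation of
    the device remains visible in the foreground."""
    yaw = math.radians(device_yaw_deg)
    fwd_x, fwd_z = math.sin(yaw), math.cos(yaw)  # horizontal forward vector
    x, y, z = device_pos
    cam_pos = (x - back_off * fwd_x, y + raise_up, z - back_off * fwd_z)
    return cam_pos, device_yaw_deg

cam_pos, cam_yaw = slaved_viewpoint((10.0, 1.5, 20.0), 0.0)
# Facing +z with yaw 0: the camera sits 4 m behind and 1.5 m above,
# at (10.0, 3.0, 16.0), looking the same way as the Augurscope.
```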

During the first public trial, a camera was placed to capture the movements of the visitors; audio data was also captured using a microphone attached to the Augurscope. The subsequent analysis of the data revealed some problems with the height differences of the visitors. Possibly due to the weight of the onboard equipment, the Augurscope was not often moved to another location. Another problem was the bright sunlight, which made it difficult for visitors to see the image on the monitor. [Schnädelbach et al. 2002]

The researchers are currently working on a successor version of the interface named "Augurscope 2". It features radical changes in its physical design and its technological concept. The Augurscope 2 has a base unit on three wheels and a detachable top unit. The base unit renders the 3D graphics and sends the image as composite video to the top unit wirelessly. The top unit contains the necessary tracking technology for global position and orientation (see Figure 29). [Equator 2003]

5.4 Haptic Feedback

PHANToM Haptic Master (see Figure 30) is a pen-shaped force feedback device as well as an input device, calculating position and orientation in 3D space.

Haptic feedback for human-computer interaction demands fast calculation and application of forces to let the user feel the correct sensations. The PHANToM device is able to achieve this at a rate of 1 kHz. [Boeck et al. 2003] propose this device as best suited for modelling 3D objects. Using the metaphor of a user's fingertip for the contact point on the surface, the user has the impression of physically touching a virtual object. Strategies such as "snapping to the surface" ensure a high accuracy when drawing or editing points on a surface.
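A common way to render such contact sensations, shown here only as an assumed illustration rather than the actual PHANToM implementation, is a penalty-based spring model evaluated in the 1 kHz servo loop: when the device tip penetrates a virtual surface, a force proportional to the penetration depth pushes it back out:

```python
def contact_force(tip_pos, surface_height=0.0, stiffness=800.0):
    """Penalty-based haptic force for a horizontal virtual plane.

    stiffness is in N/m (a desktop device typically renders a few hundred).
    Returns the (fx, fy, fz) force to command; zero when not in contact.
    """
    penetration = surface_height - tip_pos[1]   # how far the tip is below the plane
    if penetration <= 0.0:
        return (0.0, 0.0, 0.0)                  # free space: no force
    return (0.0, stiffness * penetration, 0.0)  # spring pushes straight back up

# In a real system this would run inside the 1 kHz servo loop:
#   while running: command(contact_force(read_tip_position()))
force = contact_force((0.0, -0.002, 0.0))  # tip 2 mm below the surface
# 800 N/m * 0.002 m ≈ 1.6 N upward
```

The 1 kHz update rate matters because a slower loop makes stiff virtual surfaces feel soft or causes the device to vibrate unstably.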

In their conclusion, force feedback is identified as a good means to enhance the user's depth perception. [Jiang et al. 2005] conducted experiments using standard PCs and a modified version of the Half-Life game. The player's motion resulted in haptic feedback, e.g. when encountering obstacles in the virtual environment.

Figure 29: The Augurscope 2 with a base unit and a detachable top unit.

Two tests were conducted, both aimed at low-cost VR training for military and rescue use. In the first, a user steered the avatar through a maze, generating resistance through a force feedback joystick; the second test used distributed vibration feedback (vibrating devices on the user's forehead and lower legs resembling high and low obstacles in a run through a dark tunnel). Results showed significantly fewer errors, better speed in performance and higher accuracy in training, as well as a greater sense of immersion with haptic feedback.

6 Summary and Conclusions

Mixed Reality systems have crossed the line where advanced 3D environments are possible even for projects with limited financial resources.

As graphical computer systems become more sophisticated, they enable a new quality of Entertainment and Education in Mixed Reality. These technologies provide interactive entertainment options, new graphical possibilities and a way to enjoy studying in a highly dynamic educational environment.

This paper gave an overview of technologies for such systems. We have provided several examples of systems based on the idea of using Mixed Reality for Training and Education; we also introduced some industrial location-based Entertainment facilities and prototype interfaces for improved presentations.

Figure 30: PHANToM Haptic Master, a force feedback and input device.

Most applications of Mixed Reality technologies today are not as widely accepted as many think. The biggest potential lies in the medical, exhibition and entertainment sectors. Military and sport applications also motivate the development of better techniques for Mixed Reality applications and interfaces.

Medical The practice of surgery has already been altered through the implementation of computer technologies. These technologies are used to solve or assist in the solution of fundamental medical problems and for training issues. We presented different medical Mixed Reality applications, which already work very well and are only the beginning of a fascinating future in medicine. But despite all progress, it will take a significant effort of time and work to move these technologies into routine clinical use.

Military and Emergency Response Training Today, Mixed Reality technologies are applied to a wide spectrum of applications, ranging from training of emergency response units to air combat simulators. Most quotations in the presented papers refer to military research institutes and are influenced by military sources. Some of the presented applications are only case studies for hypothetical enhancement of current systems. Current limitations due to high computing power demands will disappear over the next few years. The next step in the development of these tools should be a focus on more practical issues and on really required applications.

Sports The use of Mixed Reality technologies allows athletes to practice the handling of different competition
