3-D Virtual Studio for Natural Inter-“Acting”

Namgyu Kim, Woontack Woo, Member, IEEE, Gerard J. Kim, Member, IEEE, and Chan-Mo Park

Abstract—Virtual studios have long been used in commercial broadcasting. However, most virtual studios are based on “blue screen” technology, and its two-dimensional (2-D) nature restricts the user from making natural three-dimensional (3-D) interactions. Actors have to follow prewritten scripts and pretend as if directly interacting with the synthetic objects. This often creates an unnatural and seemingly uncoordinated output. In this paper, we introduce an improved virtual-studio framework to enable actors/users to interact in 3-D more naturally with the synthetic environment and objects. The proposed system uses a stereo camera to first construct a 3-D environment (for the actor to act in), a multiview camera to extract the image and 3-D information about the actor, and real-time registration and rendering software for generating the final output. Synthetic 3-D objects can be easily inserted and rendered, in real time, together with the 3-D environment and video actor for natural 3-D interaction. The enabling of natural 3-D interaction would make more cinematic techniques possible, including live and spontaneous acting. The proposed system is not limited to broadcast production, but can also be used for creating virtual/augmented-reality environments for training and entertainment.

Index Terms—Augmented reality, calibration, image segmentation, multiview and stereo cameras, registration, three-dimensional (3-D) interaction, three-dimensional (3-D) model reconstruction, virtual environment (VE), virtual studio, z-keying.

I. INTRODUCTION

VIRTUAL studios have long been in use for commercial broadcasting and motion pictures. Most virtual studios are based on “blue screen” technology, and its two-dimensional (2-D) nature restricts the user from making natural three-dimensional (3-D) interactions. Actors have to follow prewritten scripts and motion paths, and pretend as if directly interacting with the synthetic objects. This often creates an unnatural and seemingly uncoordinated output. This is quite apparent when the broadcast is live and the processing to correct such out-of-synch output is nearly impossible on the fly. For instance, it is common to notice, in a live broadcast of weather-forecast news, incorrect timing on the part of the announcer (e.g., the weather map being pulled out after a pause).

Manuscript received February 10, 2004; revised July 27, 2004, November 14, 2004, and December 23, 2004. The work described in this paper was supported in part and carried out at the Advanced Telecommunications Research (ATR) Media Integration and Communications (MIC) Laboratory, Japan, from 2000 to 2001, and also by the Korean Ministry of Education's Brain Korea (BK) 21 program and the Korean Ministry of Information and Communication's Information Technology Research Center (ITRC) program, and continued at the Pohang University of Science and Technology (POSTECH) Virtual Reality (VR) Laboratory. This paper was recommended by Associate Editor I. Gu.

N. Kim, G. J. Kim, and C.-M. Park are with the Department of Computer Science and Engineering, Pohang University of Science and Technology, Pohang, Korea (e-mail: ngkim@postech.ac.kr; gkim@postech.ac.kr; parkcm@postech.ac.kr).

W. Woo is with the Department of Information and Communications, Gwangju Institute of Science and Technology (GIST), Gwangju, Korea (e-mail: wwoo@gist.ac.kr).

Digital Object Identifier 10.1109/TSMCA.2005.855752

Another limitation of the current virtual studio is in the creation of the backdrop. In the example of the weather forecast, the backdrop was quite simple, with 2-D drawings of weather maps and temperature charts. 2-D images or videos can be used as a backdrop as well (to make the user seem to be somewhere else), but the current technology does not allow the actor to interact with the environment. To situate the actor in a more complex environment and enable 3-D interaction with it, the operating environment must be made in 3-D. In general, however, the modeling of realistic 3-D virtual environments (VE) is difficult and labor intensive. In addition, the rendering of the VE on the fly (plus mixing it with the image of the actor) requires expensive special-purpose rendering hardware.

In this paper, we introduce an improved framework for virtual studios that enables actors/users to interact, in 3-D, more naturally with the synthetic environment and objects. The proposed framework consists of four main parts: 1) image-based 3-D-environment generation using a portable camera with a stereoscopic adapter; 2) real-time video-actor generation using a multiview camera; 3) mixing of graphical objects into the 3-D scene; and 4) support for “natural” interaction such as collision detection, first-person view, and tactile feedback. The proposed system is a hybrid of augmented- and virtual-reality techniques, and is a relatively inexpensive method for generating a 3-D image-based interactive environment in real time, without special rendering hardware or a blue-screen-studio setup.

This paper is organized as follows. In Section II, we review prior research related to this work, such as virtual studios, 3-D information-processing techniques, object registration, image-based rendering (IBR), and 3-D interaction techniques. Section III presents an overview of the proposed interactive virtual studio, and technical details about the four main components of the system follow in the subsequent sections. In the final sections, we illustrate the implemented system and evaluate its merits and limitations in the context of virtual studios. Finally, we conclude the paper with an executive summary and a discussion of possible new applications and future work.

II. RELATED WORK

In general, virtual-studio sets require “blue-screen” (chromakeying) technology, high-end graphic workstations, camera-tracking technology, and signal compositing for high realism and exact mixing results [5]. The 2-D nature of current virtual studios, as developed and used in the broadcast industry, limits their use to situations where the camera angle is fixed and there is minimal user interaction [6].


Yamanouchi et al. overcame the limitations of the cost and space of conventional blue-screen setups in their “Real Space-based Virtual Studio” system [7]. This system combines the real- and virtual-space images. The virtual-environment image sequences are precaptured by a 360° omnidirectional camera. The real-space (foreground) images are obtained by the Axi-vision camera, which can capture color and depth information. The real and virtual images are mixed using the depth information in real time, but their algorithms are limited only to indoor studio sets.

Advanced video-conferencing systems share some functionality with virtual studios. Daniilidis and co-workers developed a large-scale teleconferencing system [26], [27] that included functionalities such as 3-D model acquisition from bi/tri-stereo image sequences, view-dependent scene description, and large-scale data transmission to a remote site. Schreer and co-workers presented a teleimmersive conferencing system called the VIRtual Team User Environment (VIRTUE) [28], [29], and a similar system called the Coliseum was developed by Baker et al. [30]. However, VIRTUE emphasized the problems of stereo calibration, multiple-view analysis, tracking, and view synthesis, rather than issues like interacting with the environment.

While chromakeying enables simple 2-D compositing, z-keying (distance-keying) allows compositing of objects and environments in 3-D using pixelwise depth information [12], [14]. However, z-keying is mostly used for separating foreground objects from a video sequence in a blue-screen environment [7], [12]. Kanade demonstrated a 3-D compositing technique using full 3-D-environment depth information [13]. However, it required extensive equipment and a complex setup to extract and mix the objects.

The most straightforward method of extracting 3-D information from a scene is to use multiple camera views and stereo correspondences. Scharstein and Szeliski gave a very good survey and taxonomy of the different algorithms and approaches to the stereo-correspondence problem [34]. There have also been approaches to 3-D depth-information acquisition by methods other than the traditional expensive stereo or multiple-camera setups. Yang et al. introduced a new method that uses commodity graphics hardware to achieve real-time 3-D depth estimation by a plane-sweeping approach with multiresolution color-consistency tests [31]. The 3-D depth information can also be calculated by motion analysis in short image sequences. For instance, Brodsky et al. investigated the process of smoothing as a way of estimating structure from 3-D motion [33]. Nayar et al. described a nontraditional stereo image-capture method (and system design guidelines), called “catadioptric stereo,” using a single camera with a mirror [32].

IBR offers a solution to viewing images (e.g., the virtual-studio backdrop) at different angles, thus producing highly realistic output [15], [16], [18]. However, with IBR techniques, it is necessary to establish correspondences among pixels of two given images to produce a view-independent output, which amounts, again, to the 3-D-information extraction problem. Even if the extraction problem were solved, it would still be difficult for the user to interact meaningfully with an image-based rendered scene, because the technique was developed mostly for static images. However, techniques for simple and limited interactions, like navigation [17], [20], [37], selection, and manipulation [19], in an IBR environment have been proposed.

Another essential element in enabling natural 3-D interaction in a virtual-studio setting is the extraction of the actor and the tracking of his/her movements (whole or partial). Gavrila [8] has demonstrated a human-extraction and -tracking algorithm for user interaction in virtual reality (VR) systems, and Cheung [9] devised a similar algorithm for a smart surveillance system. The Artificial Life Interactive Video Environment (ALIVE) system [11] used the extraction and tracking algorithm developed by Wren et al. [10] to analyze user intentions and control an avatar in a VE, although not a demonstration of a user's direct interaction with his/her environment. Urtasun and Fua developed an approximate vision-based tracking system for a walking or running human using a temporal motion model [38] and, similarly, for articulated deformable objects [39]. In general, robust real-time vision-based reconstruction of general articulated motions still remains a challenging task. Using separate sensors can surely make the problem somewhat more feasible, as demonstrated by [40] and [41].

Interaction in the 3-D virtual space (a virtual studio being one such example) has been one of the main research topics for VR researchers [49]–[51]. An important design goal for 3-D interaction is “naturalness,” which refers to the resemblance to the way humans interact in the real world. The main characteristics of natural interaction include the use of multimodality, particularly the use of tactility for close-range interaction, and direct and noncognitive interaction [46]. In the context of virtual studios, our goal is to provide a way to produce “natural-looking” interaction to convince the viewers. Ironically, this, in turn, also asks for the actor to act (and interact) naturally. Thus, the actor, if possible, must be able to see and directly touch the interaction objects in the virtual-studio setting.

The final phase in a virtual-studio production is the mixing of the backdrop and actor image. Apostoloff and Fitzgibbon [35] and Rother et al. [36] introduced video-matting algorithms based on background/foreground extraction by considering the spatiotemporal color-gradient relationships. Their algorithms can also be applied to the actor-segmentation process for scenes with a general static background.

III. SYSTEM OVERVIEW

Fig. 1 provides the overall structure of the proposed framework. As shown in Fig. 1, we first generate the image-based environment using a portable stereo camera, e.g., a camcorder with a stereoscopic adapter [1]. Then, we capture the user using a multiview camera, e.g., the Digiclops [22] (which can extract color and depth information in real time), and segment the user out from natural video sequences with cues such as color, edge, motion, disparity, etc. [2], [3]. Next, given the camera parameters, we render computer-graphic objects (on a large-screen display for the user to see) with the backdrop to provide users with the illusion of interacting with the environment. The mixing of the 3-D real-image-based environment, the user image, and the computer-graphic objects is performed by “z-keying.”


Fig. 2. Stereoscopic video camera (a camera with the NuView stereoscopic adapter).

arise in using different cameras for different views, the quality of the resulting stereo video sequences is reduced. First, there exist distortions introduced by the mirror and the associated optical system. Furthermore, the adapter captures each image of the stereo pair on different lines, i.e., in field-sequential format (i.e., at half the resolution). Therefore, it is essential that these shortcomings be compensated for the camera to be used in real 3-D-vision-based applications. In the following sections, we focus on the compensation methods for these 3-D distortions.

A. Stereoscopic Video With a Single Camera

As shown in Fig. 2, the stereoscopic adapter consists of a sturdy black plastic housing, a reflecting mirror, and liquid crystal shutters (LCS). The prismatic beam splitter and the orthogonally positioned polarizing surfaces (1.45 × 1.25) in the LCS open and close the light valves to record either the direct image or the mirror-reflected image on alternate fields of the video. As a result, the left image is recorded on the odd field and the right image on the even field. The synchronization of the light valves with the alternating fields of the camcorder is achieved through the cable connecting the video-out of the camcorder and the connector in the adapter.

It is then necessary to convert the video sequence to the “above/below” format. As explained earlier, the adapter produces a field-sequential stereoscopic 3-D video by simultaneously recording the second eye view to the camcorder. The resulting field-sequential video can be displayed on a 2-D (TV) monitor or a 2-D screen with special stereo glasses. The field-sequential format, however, is an inconvenient format for various other vision applications. For example, applying image processing, such as filtering or transformation, to such field-sequential video data can cause a loss in quality of the stereo images, because such processing propagates its effects into the interlaced lines and thus produces 3-D artifacts. For the same reason, the available video-compression schemes cannot be exploited to save hard-disk space or limited channel bandwidth. Therefore, we first separate the field-sequential format into the “above/below” format, where the left image is placed in the top part of the image and the right image in the bottom part.

After field separation, we transform the image to a “side-by-side” format. Spatial and temporal interpolations are applied to each image to produce high-resolution 2-D/3-D images/video sequences. There exist flickering effects when displaying stereoscopic 3-D videos captured using the adapter at 60 Hz. The stereoscopic 3-D video at 60 Hz is not as smooth as 2-D video at 60 Hz, because the effective refresh frequency drops to 30 Hz, with the monitor allocating 30 Hz to the left and the right images, respectively. In addition, displays (such as a head-mounted display, polarized screen, or autostereoscopic display) require projecting an image in the original size to provide a comfortable 3-D display. Spatial interpolation is also required in 2-D applications using only 3-D depth information (e.g., z-keying). Spatial interpolation is achieved by first doubling the size (by line copy), then by linear interpolation between lines as follows:

$$F_L^{2i} = G_L^i, \qquad F_L^{2i+1} = \frac{G_L^i + G_L^{i+1}}{2}. \tag{1}$$

F_L and G_L denote the left images in the side-by-side and above/below formats, respectively. The superscript i represents the index of the row in the image. The right image can be interpolated in a similar way. Fig. 3 shows the procedure for producing a stereoscopic image.

Fig. 3. Capturing stereo video sequences: From field-sequential format to side-by-side format.
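As a concrete illustration of (1), the following Python/NumPy sketch (our own illustrative code, not from the paper; all function and variable names are hypothetical) converts one half-resolution field to a full-height image by line doubling followed by linear interpolation between lines.

```python
import numpy as np

def deinterlace_field(field: np.ndarray) -> np.ndarray:
    """Upsample one field (H/2 x W [x C]) to full height following (1):
    even output rows copy the field rows, odd rows are the average of
    the neighboring field rows."""
    h = field.shape[0]
    full = np.zeros((2 * h,) + field.shape[1:], dtype=np.float32)
    f = field.astype(np.float32)
    full[0::2] = f                          # F^{2i} = G^i  (line copy)
    full[1:-1:2] = 0.5 * (f[:-1] + f[1:])   # F^{2i+1} = (G^i + G^{i+1}) / 2
    full[-1] = f[-1]                        # last row has no lower neighbor
    return full

def split_field_sequential(frame: np.ndarray):
    """Split an interlaced frame into (left, right) fields; here the left image
    is assumed to be on the even lines and the right on the odd lines."""
    return frame[0::2], frame[1::2]

# Example: a captured 480-line frame becomes two full-height views.
frame = np.random.randint(0, 256, (480, 720, 3), dtype=np.uint8)
left_field, right_field = split_field_sequential(frame)
left_full = deinterlace_field(left_field)
right_full = deinterlace_field(right_field)
```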

B. Stereo Calibration and Error Compensation

To estimate the 3-D information from the stereo video sequences, we first need to perform a camera calibration that determines the distortion model and model parameters. We first capture the relationship between the 3-D world coordinates and their 2-D perspective projections onto the virtual stereo cameras by observing a calibration object, e.g., a black-and-white noncoplanar square grid pattern, whose geometry is known with respect to the 3-D coordinate system attached to the stereo camera. The resulting camera parameters also allow us to learn the distortions that occurred during projection through the stereoscopic adapter.
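The paper uses Tsai's algorithm for this step (see below); as a simplified stand-in, the following sketch, written by us for illustration, estimates a single camera's 3×4 projection matrix from known 3-D/2-D correspondences with the direct linear transform (DLT). It is not the authors' implementation and omits the radial-distortion term that Tsai's model includes.

```python
import numpy as np

def estimate_projection_matrix(world_pts: np.ndarray, image_pts: np.ndarray) -> np.ndarray:
    """Estimate P (3x4) such that image_pts ~ P * [world_pts; 1].
    world_pts: (N, 3) noncoplanar calibration points, image_pts: (N, 2)."""
    assert world_pts.shape[0] >= 6, "DLT needs at least 6 correspondences"
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=np.float64)
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    """Project 3-D points with P and dehomogenize (useful for checking reprojection error)."""
    hom = np.hstack([world_pts, np.ones((world_pts.shape[0], 1))])
    uvw = hom @ P.T
    return uvw[:, :2] / uvw[:, 2:3]
```

Running this once per view (reference and adapter-mirrored) would give the two sets of parameters from which a rectifying transformation, as described below, can be derived.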

With the list of 3-D world coordinates and the corresponding 2-D image coordinates, camera parameters are estimated for each camera by solving a set of simultaneous equations. Finally, the epipolar geometry is constructed from the projection matrices. Here, we use Tsai's algorithm to determine the distortion model and model parameters [23]. The algorithm estimates 11 model parameters: five intrinsic and six extrinsic parameters. The intrinsic camera parameters include the effective focal length f, the first-order radial-lens distortion coefficient k1, the principal point (the center of the radial lens) (c_x, c_y), and the scale factor s_x that accounts for any uncertainty due to frame-grabber horizontal scanline resampling. The extrinsic parameters include the rotation matrix R and the translational component T. After performing Tsai's algorithm, the rectification can be accomplished by exploiting the transformation matrix obtained from the relationship between the two sets of extrinsic parameters, {R, T}_left and {R, T}_right [1].

Before we exploit the stereo images to estimate the pixelwise depth, we need to calibrate the color values. The color distortion occurs due to another inherent weakness of capturing stereo video with the stereoscopic adapter. The orthogonally positioned polarizing surfaces in the adapter yield stereo video sequences with different levels of color. Note that color equalization not only allows comfortable stereoscopic 3-D display, but also helps to estimate accurate 3-D depth information, especially when the depth is estimated based on the intensity level. Color-equalization approaches usually take advantage of “histogram matching” [42]. Our approach is similar, but we use a new relational transition function based on the images' color statistics.

To equalize the color levels of both images in the stereo pairs, we use three test pairs, where each pair contains only one color, i.e., red, green, or blue. We first select the region of interest from the given pairs of stereo images and then estimate statistics regarding the color distortion. Given the statistics, we normalize and modify the color histograms. The new intensity (color) of the right image (the image projected through the mirror), F_{R_N}, is normalized using the following formula:

$$F_{R\_N} = \frac{\sigma_L}{\sigma_R}\,(F_R - m_R) + m_L. \tag{2}$$

F, σ, and m represent the image, standard deviation, and average, respectively. The subscripts L and R denote the left and right images, respectively.
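A minimal sketch of the statistics-based equalization in (2), written by us for illustration (function names are hypothetical): the right image is shifted and scaled so that its per-channel mean and standard deviation match those of the left image. In the paper, the statistics come from dedicated single-color test pairs captured at calibration time; here they are simply computed from the images themselves.

```python
import numpy as np

def equalize_right_to_left(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Normalize the right (mirror-path) image to the color statistics of the left
    image, per channel, following (2): F_RN = (sigma_L / sigma_R) (F_R - m_R) + m_L."""
    left_f = left.astype(np.float32)
    right_f = right.astype(np.float32)
    out = np.empty_like(right_f)
    for c in range(right_f.shape[2]):           # process R, G, B independently
        m_l, s_l = left_f[..., c].mean(), left_f[..., c].std()
        m_r, s_r = right_f[..., c].mean(), right_f[..., c].std()
        scale = s_l / s_r if s_r > 1e-6 else 1.0
        out[..., c] = scale * (right_f[..., c] - m_r) + m_l
    return np.clip(out, 0, 255).astype(np.uint8)
```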

C. Disparity Estimation

After proper calibration and color equalization, we estimate disparity using a collection of cues. Disparity estimation is recognized as the most difficult step in stereo imaging. The task of disparity estimation in a 3-D reconstruction is to find correspondences between a pair of stereo images and estimate 3-D positions by triangulation. We use a hierarchical block-matching scheme with several cues, like edges and intensity similarity, based on the Markov random field (MRF) framework [2]. This framework involves two steps: 1) hierarchical overlapped block matching; and then 2) pixelwise refinement. First, the disparity is estimated at a coarse level, e.g., a block size of 16×16, using full-search block matching. Then, the block size decreases to 8×8, but only if the block yields a higher compensation error than a prefixed threshold value. Given the initial disparity field and the edge information of the stereo pair, we perform a pixelwise disparity estimation based on a coupled MRF/Gibbs random field (GRF) model [3].
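The following is our own simplified sketch of the coarse, full-search block-matching stage only (the MRF/GRF pixelwise refinement of [3] is not reproduced); the block size, search range, and sum-of-absolute-differences cost are illustrative choices.

```python
import numpy as np

def block_matching_disparity(left: np.ndarray, right: np.ndarray,
                             block: int = 16, max_disp: int = 64) -> np.ndarray:
    """Coarse disparity by full-search block matching on rectified grayscale
    images: for each left block, find the horizontal shift in the right image
    that minimizes the sum of absolute differences (SAD)."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    L = left.astype(np.float32)
    R = right.astype(np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = L[y0:y0 + block, x0:x0 + block]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x0) + 1):  # search leftwards in the right image
                cand = R[y0:y0 + block, x0 - d:x0 - d + block]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```

In the paper's hierarchical scheme, blocks whose best cost exceeds a threshold would then be re-estimated at 8×8 before the pixelwise refinement.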

Although we perform stereo calibration, there are still some errors (e.g., a few pixel-line differences) that arise from the spatial interpolation process of (1). Our hierarchical block-matching algorithm can overcome these pixel errors. That is, by performing the block matching from a large to a small size, we can correctly find the corresponding pixels within a negligible error range. The error range depends on the estimation block size, e.g., in the case of an 8×8 block, the block matching can cover about two or three pixel-line errors [1].

Fig. 4. Color compensation: (a-1) Original above/below image, (a-2) red-component histogram of (a-1) (the thick line represents data of the image above and the thin line represents data of the image below), (b-1) compensated above/below image, (b-2) red-component histogram of (b-1).

D. Environment Generation—Implementation

With the NuView adapter on the Sony TRV900 camera, we capture a grid test pattern for 3-D calibration and red-green-blue (RGB) color patterns for color compensation. We then digitize the sequences using the IEEE 1394 interface architecture. To maintain a stereo sync signal, the captured images are set to full size [720×480 at 30 frames per second (fps)]. The even lines contain the picture for the left eye and the odd lines the picture for the right eye. As explained in Sections IV-A and IV-B, we first transform the images in the field-sequential format to the side-by-side format, and then interpolate them using (1) to recover the “full” size.

In our experiments, a noncoplanar test pattern was used to estimate camera parameters for the left (reference) and right (virtual, the image coming through the adapter) cameras. The left and right cameras (with the calculated extrinsic parameters) were not positioned on the same plane as the reference (left) camera. According to our calibration results, the virtual (right) camera was set back about 50 mm from the reference camera due to the mirror in the adapter.

After the camera calibration and image rectification, we performed color modification using (2). To analyze the characteristics of the color distortion, we captured three test images, each containing only one color component, i.e., red, green, and blue, respectively. Note that the statistics (mean and variance of each sequence) can be estimated during the camera-calibration process and applied on the fly. Fig. 4 compares the histograms of the original and the compensated pairs of stereo images for the red color, illustrating the effect of the color equalization. The calibration and rectification improved the disparity estimation dramatically.

In addition to the procedures explained in Section IV-C, we also adopt a moving-object segmentation scheme to estimate depth information for moving objects along object boundaries [1]. As shown in Fig. 5(a) and (b), we first perform disparity estimation for a static scene and then for the scene with moving objects. By using the statistics of the static scene, we can separate moving objects from the scene (reasonably well for scenes with few and small moving objects). The segmented moving objects with disparity can be combined back into the static-background scene. A more detailed description of this process is given in Section V. The resulting background video with depth information is ready to be used as a virtual backdrop.

V. USER VIDEO GENERATION

The “user/actor video” is generated using a multiview camera that can capture color and depth information simultaneously in real time. While the environment video was captured and reconstructed offline using the stereoscopic video camera (with adapter), the user video must be captured on the fly for mixing and interacting with the 3-D graphic objects. The multiview camera that we use is called the Digiclops¹ and provides color and depth information in real time (at about 12 fps at a resolution of 320×240).

¹The NuView is used for generating the virtual backdrop due to its portability and simplicity, and the Digiclops [22] is used for the user video for its real-time processing features.

Fig. 5. Three-dimensional real image-based environment generation: (a) Static background, and (b) background with a moving object (a walking human on the right side).

There has been a growing interest in object segmentation for various applications with object-based functionalities such as interactivity. The capability of extracting moving objects from video sequences is a fundamental and crucial problem in many vision applications [8]. The difficulties in automatic segmentation mainly come from determining the semantically meaningful areas. Even in cases where we have a clear definition of the areas of interest, an accurate and reliable segmentation is still a challenging and demanding task, because those areas are not homogeneous with respect to low-level features such as intensity, color, motion, disparity, etc.

To alleviate these difficulties, several schemes have been proposed and, among these, the use of motion information has been widely accepted as an important cue, under the assumption that the objects of interest (OOI) exhibit coherent motion. However, methods based on motion similarity may not work because of occlusions and inaccurate motion estimation. Therefore, additional information is usually necessary to detect accurate boundaries of the moving objects.

In our framework, we separate objects from the static background by subtracting the current image from a reference image (i.e., a static background). The subtraction leaves only the nonstationary or new objects. The proposed segmentation algorithm consists of the following three steps: 1) static-background modeling; 2) moving-object segmentation; and 3) shadow removal.

A. Static-Background Modeling

We present a robust and efficient algorithm to separate moving objects from the natural background by exploiting the color properties and statistics of the background image. Object-separation techniques based on static-background modeling have been used for years in many vision systems as a preprocessing step for object detection and tracking [8], [43]. However, many of these algorithms are susceptible to both global and local illumination changes, which cause the subsequent processes to fail.

In addition, the algorithms mostly focus on approximated object regions shaped like 2-D rectangles or ellipses, not the precise shape of the objects. Schreer et al. gave a solution for real-time shadow detection and elimination for extracting exact object shape in their video-conferencing system [44]. They used a hue and saturation approximation method in the hue-saturation-value (HSV) color space. Our approach is similar to theirs in the object-detection criterion, but differs in the color space used.

Our proposed normalized color space is able to cope with reasonable changes in the illumination conditions. The proposed algorithm for detecting moving objects from a static background scene works fairly well on real image sequences from outdoor scenes, as shown in Fig. 5.

We first set up the multiview camera with known extrinsic parameters (pitch value and height from the floor, which were measured physically) and internal parameters (horizontal and vertical field of view, and focal length, which are calculated from the multiview camera specifications). These camera parameters (with the relative stereoscopic camera-position information obtained in the previous camera-calibration stage) are used for rendering and mixing the computer-graphic objects with the already constructed virtual backdrop.

We then gather statistics over a number of static background frames, i.e., N(m, σ), where m and σ denote the pixelwise mean and standard deviation of the image. Let the color image be I(R, G, B). The resulting mean image I_m(R, G, B) is used as the reference background image, and the standard deviation I_σ(R, G, B) is used for extracting moving objects as threshold values. The mean image and standard deviation are calculated as follows:

$$I_m(R,G,B) = \frac{1}{L}\cdot\sum_{t=0}^{L-1} I_t(R,G,B) \tag{3}$$

$$I_\sigma(R,G,B) = \sqrt{\frac{1}{L}\cdot\sum_{t=0}^{L-1}\bigl(I_t(R,G,B) - I_m(R,G,B)\bigr)^2} \tag{4}$$

where L denotes the total number of image frames used to estimate the statistics. We empirically choose L to be 30.
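As an illustrative sketch (our own code, not the authors'), the per-pixel background statistics of (3) and (4) can be accumulated over L frames as follows; the frame source below is a placeholder.

```python
import numpy as np

def build_background_model(frames):
    """Pixelwise mean (3) and standard deviation (4) over a list of
    static-background frames of identical shape (H, W, 3)."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)  # (L, H, W, 3)
    mean = stack.mean(axis=0)   # I_m(R, G, B)
    std = stack.std(axis=0)     # I_sigma(R, G, B)
    return mean, std

# Example with L = 30 synthetic frames standing in for captured background images.
frames = [np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8) for _ in range(30)]
background_mean, background_std = build_background_model(frames)
```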

B. Moving-Object Segmentation

Each pixel can be classified as object or background by evaluating the difference between the reference background image and the current image in the (R, G, B) color space. To separate pixels of the moving objects from pixels of the background, we measure the Euclidean distance in the RGB color space and then compare it with a threshold. The threshold is adjusted in order to obtain the desired detection rate in the subtraction operation. The distance D_t and threshold Th are defined as follows:

$$D_t(R,G,B) = \sqrt{\sum_{S=R,G,B}\bigl(I_t(S) - I_m(S)\bigr)^2} \tag{5}$$

$$Th(R,G,B) = \alpha\cdot\bigl(I_\sigma(R) + I_\sigma(G) + I_\sigma(B)\bigr) \tag{6}$$

Fig. 6. Classification in the color space: (a) Classification in RGB space, and (b) classification in the normalized RGB space.

where the variable α is determined according to the changing lighting conditions.
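A minimal sketch of this classification step, written by us for illustration: pixels whose RGB distance (5) from the background mean exceeds the threshold (6) are marked as foreground. The value of alpha is a hypothetical choice.

```python
import numpy as np

def segment_moving_objects(frame: np.ndarray, mean: np.ndarray, std: np.ndarray,
                           alpha: float = 2.5) -> np.ndarray:
    """Return a boolean foreground mask using (5) and (6):
    foreground where D_t > Th, with Th = alpha * (sigma_R + sigma_G + sigma_B)."""
    diff = frame.astype(np.float32) - mean      # I_t - I_m per channel
    dist = np.sqrt((diff ** 2).sum(axis=2))     # Euclidean distance (5)
    threshold = alpha * std.sum(axis=2)         # threshold (6)
    return dist > threshold

# Example, reusing the background model built above.
# mask = segment_moving_objects(current_frame, background_mean, background_std)
```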

C. Shadow Removal

Normally, moving objects create shadows and shading effects according to the (changing) lighting conditions. However, the RGB color space is not a proper space in which to deal with these (black and white) shadow and shading effects.

Instead, as shown in Fig. 6, we use the normalized color space, which represents the luminance property better [25], [43]. For classification in the normalized RGB space, the normalized mean and color images, respectively, are defined as follows:

$$I_m(r,g,b) = \frac{I_m(R,G,B)}{S_m(R,G,B)} \tag{7}$$

$$I_t(r,g,b) = \frac{I_t(R,G,B)}{S_t(R,G,B)} \tag{8}$$

where S_m and S_t, respectively, denote the summation of the (R, G, B) components of the mean reference and t-th frame images, i.e.,

$$S_m(R,G,B) = I_m(R) + I_m(G) + I_m(B) \tag{9}$$

$$S_t(R,G,B) = I_t(R) + I_t(G) + I_t(B). \tag{10}$$

The distance D_t and threshold Th for the normalized color space are defined as follows:

$$D_t(r,g,b) = \sqrt{\sum_{s=r,g,b}\bigl(I_t(s) - I_m(s)\bigr)^2} \tag{11}$$

$$Th(r,g,b) = \alpha\cdot\bigl(I_\sigma(r) + I_\sigma(g) + I_\sigma(b)\bigr) \tag{12}$$

where the variable α is determined according to the changing lighting conditions.

The current image I_t(R, G, B) is first compared pixelwise with the mean reference image I_m(R, G, B) and classified as background if the difference is less than the threshold value Th(R, G, B). Meanwhile, the objects can further be classified into shadows or shadings in the normalized 2-D color space.

Fig. 7. Moving-object segmentation: (a) Current image, (b) initial object segmentation, and (c) segmented object after shadow removal.

In the normalized color space, we can separate the shadows from the background. The difference between the normalized color image I_t(r, g, b) and the normalized mean of the reference image I_m(r, g, b) is calculated. As shown in Fig. 6(b), if the difference for a pixel is less than the threshold, the pixel is classified as part of the background. Note that we can speed up the process by checking the difference in the normalized color space only for pixels that were classified as part of the objects. If the pixel is classified as part of the objects (in the normalized color space), then we further decompose the color difference into brightness and chromaticity components. Then, based on the observation that a shadow has similar chromaticity but slightly different (usually lower) brightness than the same pixel in the background image, we can label the pixels as shadows.
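The sketch below (our own illustration, continuing the hypothetical helpers above) applies the normalized-rgb test of (11) and (12) only to pixels already marked as foreground, then labels as shadow those remaining pixels whose chromaticity roughly matches the background while their brightness is lower. The normalized-space threshold and the brightness-ratio bounds are illustrative heuristics, not the paper's exact criteria.

```python
import numpy as np

def remove_shadows(frame: np.ndarray, mean: np.ndarray, std: np.ndarray,
                   fg_mask: np.ndarray, alpha: float = 2.5,
                   low: float = 0.5, high: float = 0.95) -> np.ndarray:
    """Refine a foreground mask: drop pixels that match the background in
    normalized rgb (11)-(12) or look like shadows (similar chromaticity,
    lower brightness)."""
    eps = 1e-6
    f = frame.astype(np.float32)
    s_t = f.sum(axis=2, keepdims=True) + eps        # S_t(R, G, B)
    s_m = mean.sum(axis=2, keepdims=True) + eps     # S_m(R, G, B)
    norm_t = f / s_t                                # I_t(r, g, b), (8)
    norm_m = mean / s_m                             # I_m(r, g, b), (7)
    dist = np.sqrt(((norm_t - norm_m) ** 2).sum(axis=2))   # (11)
    thr = alpha * (std / s_m).sum(axis=2)           # rough stand-in for (12)
    # Stage 1: pixels whose normalized-rgb difference is below the threshold go back to background.
    still_fg = fg_mask & (dist >= thr)
    # Stage 2: similar chromaticity but lower brightness indicates a shadow (heuristic bounds).
    brightness_ratio = s_t[..., 0] / s_m[..., 0]
    shadow = still_fg & (dist < 2.0 * thr) & (brightness_ratio > low) & (brightness_ratio < high)
    return still_fg & ~shadow
```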

We also exploit the depth information to reduce misclassification while segmenting out the moving object. The underlying assumption is that the moving objects have a limited range of depth values (i.e., they are relatively small or thin). Using this assumption, we remove spot noise in the segmented object. In addition, we apply two types of median filters (5×5 and 3×3) to fill the holes while maintaining sharp boundaries.

Fig. 7 shows the user segmented from the natural-background scene without using blue-screen techniques. As shown in Fig. 7(c), we have segmented the object with sharp boundaries after applying the proposed separation scheme in the normalized color space.

Note that in chromakeying-based virtual studios, real “nonblue” props cannot be inserted (if the purpose is to extract just the human actor). In our static-background-based approach, we can insert and remove any colored objects (the inserted objects would be part of the static background). See Section IX for more explanation and examples.

VI. GRAPHIC-OBJECT RENDERING

The scenes made using a virtual studio usually introduce computer-generated synthetic objects, which may be interacted with and exhibit behaviors. In general, we can add three types of computer-graphic objects into the environment: 1) isolated objects that have no interaction with the user or other objects in the environment; 2) objects that interact with one another; and 3) objects that interact with the users. The environment evolves according to the interaction events.

In mixing computer-graphic objects into the VE, one of the most important technical issues is the registration between the computer-graphic objects and the background scenes. To provide a natural-looking environment, the exact position of the real camera must be known in order to place the objects correctly in the environment.

We exploit the camera parameters obtained in the camera-calibration stage while generating the virtual space, as explained in Sections IV and V. Another important issue in achieving a more natural composition is taking the shadow and shading effects into account according to the changing lighting conditions. In our approach, we assume that the lighting conditions do not change significantly and that the directions of the light sources are known. These assumptions work fairly well because we use outdoor scenes as the background video and, according to the given lighting conditions, we can control the shading of the graphic objects appropriately.²

²The graphic objects are rendered in the Open Graphics Library (OpenGL) environment [45]. In OpenGL, the color image and the corresponding depth image are obtained from the front color buffer and the depth buffer.

Fig. 8. z-keying maintaining size consistency: (a) User video, (b) background video, and (c) composite video.

Our framework has two methods for detecting possible interaction between the graphic objects and users. One is based on an inclusion test that checks whether part of the user's 3-D points lie within another object's boundaries. This method is very sensitive to the 3-D position error in the data acquired from the multiview camera, and thus we use a proportional measure, that is, the ratio of one entity's points included in the other to the total number of its points (and vice versa). According to our experiments, this inclusion ratio was tuned to about 0.3%. The other method is based on common geometry-based collision detection. To realize this method, we have to reconstruct the 3-D geometry from the captured 3-D point data, and such a reconstruction is too computationally costly to be done at every frame. In our framework, all interactable objects are organized as a hierarchy of (invisible) bounding spheres or boxes. When a sufficient number of the user's 3-D points are included in a bounding sphere of another object, only those included points are reconstructed into triangle patches. Then, a more exact second test can be carried out between the triangle patches and the part of the object contained in the intersected bounding sphere.
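A minimal sketch of the first-phase bounding-sphere inclusion test described above, written by us for illustration; the 0.3% ratio is the value reported in the text, while everything else (names, the sphere representation) is hypothetical.

```python
import numpy as np

def inclusion_ratio(points: np.ndarray, center: np.ndarray, radius: float) -> float:
    """Fraction of the user's 3-D points (N, 3) that fall inside a bounding sphere."""
    if len(points) == 0:
        return 0.0
    inside = np.linalg.norm(points - center, axis=1) <= radius
    return float(inside.mean())

def interaction_detected(points: np.ndarray, center: np.ndarray, radius: float,
                         ratio_threshold: float = 0.003) -> bool:
    """First-phase test: report a candidate interaction when the inclusion ratio
    exceeds the threshold (about 0.3% in the text). A positive result would then
    trigger the more exact triangle-level test on the included points only."""
    return inclusion_ratio(points, center, radius) >= ratio_threshold

# Example: points captured from the multiview camera vs. a virtual object's sphere.
user_points = np.random.uniform(-1.0, 1.0, size=(5000, 3))
if interaction_detected(user_points, center=np.array([0.0, 0.0, 0.5]), radius=0.2):
    print("candidate collision: refine with triangle patches")
```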

VII. MIXING (z-KEYING)

To compose the final output, the z-keying operation is carried out between the video avatar and the background. To maintain the consistency between the real space in which the user operates and the virtual space of the graphic objects, we need to use a unified set of camera parameters. Thus, the parameters of the camera with the adapter (i.e., the NuView) are used as the reference. We first use the precalculated camera parameters to set up the multiview camera. Then, we compensate for the position and rotation errors of the multiview camera by finding a principal axis in 3-D from the disparity information of the segmented user.

The camera position and rotation information can be estimated based on the assumption that we can find a line passing through the user perpendicular to the floor. To preserve the original 3-D geometry of the composed image, the user video has to be enlarged or reduced in proportion to the focal distance.

Let the user video and background video be a and b, respectively. As shown in Fig. 8, the depth between the camera and the object in the composed image, Z_ba, can be approximated as follows [24]:

$$Z_{ba} = Z_a \times \frac{f_b}{f_a} \tag{13}$$

where Z_a denotes the distance between the camera and the object. The focal lengths of the foreground and background cameras are f_a and f_b, respectively. As a result, the user would feel that the absolute size of the user video in the composed image remains unchanged.

Fig. 9 shows the z-keying process and the final result, respectively. Upon interaction, the appropriate parts of the scene (instead of the whole environment) are updated.

We compare the depth values of the environment, the segmented actor, and the graphic objects. The composite scene chooses the color pixel corresponding to the highest depth value (i.e., closest). If there is a conflict (e.g., the same depth value from two different entities), priority is given in the following order: actor, graphic objects, and the environment.
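A simplified sketch of this per-pixel z-keying rule, written by us for illustration: for each pixel the layer with the nearest depth wins, and ties are broken in the stated priority order (actor, graphic objects, environment). The depth convention (larger value meaning closer, as in the text) and all names are assumptions.

```python
import numpy as np

def z_key_composite(layers):
    """Compose layers given as (color (H, W, 3), depth (H, W), priority) tuples,
    where a larger depth value means closer to the camera and a lower priority
    number wins ties (0 = actor, 1 = graphic objects, 2 = environment)."""
    h, w = layers[0][1].shape
    out_color = np.zeros((h, w, 3), dtype=np.uint8)
    best_depth = np.full((h, w), -np.inf, dtype=np.float32)
    best_prio = np.full((h, w), np.iinfo(np.int32).max, dtype=np.int32)
    for color, depth, prio in layers:
        closer = depth > best_depth
        tie = (depth == best_depth) & (prio < best_prio)
        sel = closer | tie
        out_color[sel] = color[sel]
        best_depth[sel] = depth[sel]
        best_prio[sel] = prio
    return out_color

# Example: environment backdrop, graphic objects, and the segmented actor.
H, W = 240, 320
env = (np.zeros((H, W, 3), np.uint8), np.zeros((H, W), np.float32), 2)
cg = (np.full((H, W, 3), 128, np.uint8), np.random.rand(H, W).astype(np.float32), 1)
actor = (np.full((H, W, 3), 255, np.uint8), np.random.rand(H, W).astype(np.float32), 0)
composite = z_key_composite([env, cg, actor])
```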

VIII. SUPPORT FOR NATURAL INTERACTION

In a typical virtual-studio setting, the user needs to see himself/herself through a monitor to interact with the synthetic objects. This often creates unnatural looks and behavior, because the location of the display monitor (where the actor is looking) cannot always coincide with that of the interaction object. Sometimes, the actor has to rely solely on prechoreographed motion or the producer's moment-by-moment verbal instructions and act without any visual feedback at all.

One popular solution to the above problem is to use “blue” props. The actor would interact naturally with the blue prop, a “physical” representation of the interaction object. In the final production, the blue prop is replaced with the appropriate graphic object through simple chromakeying. However, this is only a partial solution, because not every interaction object can conveniently be represented with a blue object (e.g., water, flying objects, etc.). In addition, there may be too much geometrical difference between the blue prop and the actual interaction object, again resulting in unnatural-looking interaction (e.g., noticeable seams between the hand and the object).

Therefore, in our proposed system, we solve the view-direction problem by including display devices that can be erased later from the final production by the static-background segregation technique covered in Section V. The display device may even be stereoscopic. For nonautostereoscopic display devices, the user would have to wear glasses of some sort. Image-processing techniques proposed by Wu et al. [48] can be used to artificially remove glasses from human faces in video streams. In order to supply tactility and improve natural object manipulation, we adopt a whole-body vibrotactile display device that can be worn at strategic places on the body in a discreet manner. We use a vibrotactile display wear called the POStech Tactile Wear (POS-T Wear), a computer-controlled set of small vibration motors custom developed in our laboratory. The vibration device is small enough to be hidden from the camera (e.g., it can be put on the palm of the hand or under the shirt; see Fig. 18). The vibration upon collision helps the user's perception of the collision (its occurrence and position) and results in more natural acting and event responses, especially for segments with timed interaction (e.g., hitting a flying tennis ball). The examples in the next section illustrate these features.


Fig. 9. Mixing result: (a) Environment scene, (b) segmented-user video, (c) graphic-object render scene, and (d) composed-result scene.

Fig. 11. Example: An outdoor scene with a natural occlusion effect.

Fig. 12. Collision-based inter-“acting” virtual studio (image sequences). The sphere moves counterclockwise around the actor in (1) and (2), then a collision with the user's hand is detected in (3), and its circling direction is changed in (4). Now, it circles in a clockwise direction as seen in (5)–(9), then, upon another collision in (10), changes its direction again.

information pertaining to that region very naturally without any help from an operator or extra device.

The fourth example, in Figs. 15–17, shows user interaction with real props. As mentioned in Section V-C, a general chromakeying-based virtual-studio set cannot include nonblue props. The upper-left image of Fig. 15 shows the static background model that includes the real props, the chair and the monitor. Our system extracts only the actor, without the real props. Fig. 16 shows the synthetic-background scene (and the composed one)—the actor sits on the virtual chair looking at the virtual book.

Fig. 13. Location-based inter-“acting” virtual studio: Weather-forecasting application.

Fig. 14. Location-based inter-“acting” virtual studio: Weather-forecasting scenario (actor detected to be in a particular region on the map).

Fig. 15. Actor segmentation with real props. The upper-left image is the synthetic background. The segmented actor image is composited with the background (the center column shows the color images and the right column the depth images).

In the real-studio set, the actor sits on the real chair looking at the monitor. Fig. 17 illustrates a situation where the actor manipulates the virtual book. The actor can act in a very natural way, looking at the changing contents of the monitor and even interacting with it. The interactable objects (such as the virtual book) have invisible hierarchical bounding volumes. If, for instance, the actor's 3-D points are included in the book's bounding volume, the center point of the included points is calculated and a virtual hand can be drawn around that position as interaction feedback [see Fig. 17(b)].

Fig. 16. Sitting on the prop chair: (a) Rendered virtual set, (b) actor sitting on the real chair, and (c) the composited virtual scene.

Fig. 17. Visual feedback for actor's interaction: (a) Interaction in the virtual studio, (b) what the actor sees in the display monitor, and (c) the composited final scene.

Fig. 18. POS-T Wear tactile-feedback hardware: (a) Vibration motors and the control hardware, and (b) actor wearing the device on two hands and on the body.

The fifth example, in Figs. 18 and 19, illustrates interaction using tactile feedback (see a movie clip [52]). Fig. 18 shows the POS-T Wear vibrotactile display device. The vibratory motor is shaped like a flat coin with a radius of about 7 mm and a thickness of about 3.5 mm. It has a voltage range of 2.5–3.8 V and can produce between 8500 and 15000 rpm. For this example, we use three vibrators, on the two hands and on the body [see Fig. 18(b)]. Fig. 19(a)–(c) show the initial VEs. In Fig. 19(d), the user is pushing a 3-D button with his right hand. The color of the button changes from blue to red upon detection of the collision with the hand. A vibration is given to the user's hand at the time of the collision. Compared to reacting to the event only by visual feedback, the subtle addition of tactility results in much more natural(-looking), smooth, and confident acting. After the button push, an object floats in the air and the user can push the object up and down using the touch feedback on his hands [see Fig. 19(d)–(f)].

Fig. 19. Tactile feedback for actor's interaction: (a) Actor image, (b) graphic image, (c) mixed image, (d) user interaction (3-D-button push), (e) direct interaction with virtual objects, and (f) collision-based interaction.

X. CONCLUSION AND FUTURE WORKS

Virtual studios have long been used in commercial broadcasting and are starting to evolve to the next level with improved technologies in image processing and computer graphics. Movies in which real-life actors “seem” to act and interact with synthetic characters are very common now. In this paper, we introduced a relatively inexpensive virtual-studio framework [personal-computer (PC)-based, with inexpensive specialty cameras] to enable actors/users to interact in three dimensions (3-D) more naturally with the synthetic environment and objects. The proposed system uses a stereo camera to first construct a 3-D environment (for the actor to act in), a multiview camera to extract the image and 3-D information about the actor, and real-time registration and rendering software for generating the final output. Synthetic 3-D objects can be easily inserted and rendered, in real time, together with the 3-D environment and video actor for natural 3-D interaction.

The enabling of natural 3-D interaction would make more cinematic techniques possible, including live and spontaneous acting. For the actor to act naturally and interact with the virtual environment (VE), the actor must be able to see the composited output in real time, preferably from the first-person viewpoint. Providing a first-person viewpoint is problematic, because the actor would have to wear a special display device [like a head-mounted display (HMD)], and this does not go well with the broadcasting purpose. The next best alternative is to provide a large display near or at the place the user is looking (e.g., the camera or the interaction object) and to remove it (if necessary) from the final composited scene, as was demonstrated in our system.

We acknowledge that the proposed system uses many already-known techniques from the fields of computer vision, graphics, and VR, such as camera calibration, collision detection, and 3-D reconstruction. However, several practical engineering details and issues that arise when building a working system like this were addressed, including color compensation, shadow removal, “seemingly” believable collision handling, image formatting, image distortion, etc.

The proposed system is not limited to broadcast production, but can also be used for creating virtual/augmented-reality environments for training and entertainment. In fact, even home-brewed digital contents can be produced for presentations or short educational materials due to its low cost.

We are currently working to further improve our system for use in actual broadcasting. For instance, the image quality of the proposed system is not sufficient for general broadcasting quality (about 640×480 resolution). To achieve higher quality, the actor segmentation and 3-D point-based interaction must be computed in real time for images at 640×480 resolution. An alternative solution may be to use two cameras, one of high quality for shooting the actor and the other, multiview, for computing the 3-D information of the actor, and to fuse the two types of information in a seamless manner.

The vibrotactile device is currently not wireless, making it difficult to use; in addition, it has latency problems (slow response time). We hope to improve the quality of the hardware in these regards. Another weakness of the proposed system is that it does not allow any camera movement. Furthermore, for more natural interaction and acting, it is necessary that the whole body of the actor be tracked (or at least parts other than just the ends of the limbs).

REFERENCES

[1] W. Woo, N. Kim, and Y. Iwadate, “Stereo imaging using a camera with stereoscopic adapter,” in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, Nashville, TN, Oct. 2000, pp. 1512–1517.
[2] W. Woo and Y. Iwadate, “Object-oriented hybrid segmentation using stereo image,” in Proc. Society Optical Engineering (SPIE) PW-EI-Image and Video Communications and Processing (IVCP), Bellingham, WA, Jan. 2000, pp. 487–495.
[3] W. Woo, N. Kim, and Y. Iwadate, “Object segmentation for Z-keying using stereo images,” in Proc. IEEE Int. Conf. Signal Processing (ICSP), Beijing, China, Aug. 2000, pp. 1249–1254.
[4] N. Kim, W. Woo, and M. Tadenuma, “Photo-realistic interactive 3D virtual environment generation using multiview video,” in Proc. Society Optical Engineering (SPIE) PW-EI-Image and Video Communications and Processing (IVCP), Bellingham, WA, Jan. 2001, vol. 4310, pp. 245–254.
[5] S. Gibbs, C. Arapis, C. Breiteneder, V. Lalioti, S. Mostafawy, and J. Speier, “Virtual studios: An overview,” IEEE Multimedia, vol. 5, no. 1, pp. 18–35, Jan.–Mar. 1998.
[6] A. Wojdala, “Challenges of virtual set technology,” IEEE Multimedia, vol. 5, no. 1, pp. 50–57, Jan.–Mar. 1998.
[7] Y. Yamanouchi, H. Mitsumine, T. Fukaya, M. Kawakita, N. Yagi, and S. Inoue, “Real space-based virtual studio—Seamless synthesis of a real set image with a virtual set image,” in Proc. ACM Symp. Virtual Reality Software and Technology (VRST), Hong Kong, Nov. 2002, pp. 194–200.
[8] D. M. Gavrila, “The visual analysis of human movement: A survey,” Comput. Vis. Image Understand., vol. 73, no. 1, pp. 82–98, 1999.
[9] K. M. Cheung, “Visual hull construction, alignment and refinement for human kinematic modeling, motion tracking and rendering,” Ph.D. dissertation, Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, Tech. Rep. CMU-RI-TR-03-44, Oct. 2003.
[10] C. R. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, “Pfinder: Real-time tracking of the human body,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 780–785, Jul. 1997.
[11] P. Maes, T. Darrell, B. Blumberg, and A. Pentland, “The ALIVE system: Wireless, full-body interaction with autonomous agents,” Multimedia Syst., vol. 5, no. 2, pp. 105–112, 1997.
[12] T. Kanade, K. Oda, A. Yoshida, M. Tanaka, and H. Kano, “Video-rate Z keying: A new method for merging images,” The Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, Tech. Rep. CMU-RI-TR-95-38, Dec. 1995.
[13] T. Kanade, Virtualized Reality. Pittsburgh, PA: Robotics Inst., Carnegie Mellon Univ. [Online].

[14] L. Blonde, M. Buck, R. Calli, W. Niem, Y. Paker, W. Schmidt, and G. Tomas, “A virtual studio for live broadcasting: The Mona Lisa project,” IEEE Multimedia, vol. 3, no. 2, pp. 18–29, Summer 1996.
[15] C. Zhang and T. Chen, “A survey on image-based rendering—Representation, sampling and compression,” EURASIP Signal Processing: Image Communication, vol. 19, no. 1, pp. 1–28, 2004, Invited Paper.
[16] S. B. Kang, “A survey of image-based rendering techniques,” Digital Equipment Corp., Cambridge Research Lab., Cambridge, MA, Tech. Rep. 97/4, Aug. 1997.
[17] S. B. Kang and P. K. Desikan, “Virtual navigation of complex scenes using clusters of cylindrical panoramic images,” in Proc. Graphics Interface, Vancouver, BC, Canada, 1998, pp. 223–232.
[18] S. E. Chen, “QuickTime VR: An image-based approach to virtual environment navigation,” in Proc. Computer Graphics (SIGGRAPH), Los Angeles, CA, 1995, pp. 29–38.
[19] J. S. Pierce, A. Forsberg, M. J. Conway, S. Hong, R. Zeleznik, and M. R. Mine, “Image plane interaction techniques in 3D immersive environments,” in Proc. Symp. Interactive 3D Graphics, Providence, RI, 1997, pp. 39–43.
[20] H. Shum, “Rendering by manifold hopping,” in Proc. SIGGRAPH, Los Angeles, CA, 2001, p. 253.
[21] 3-D Video Inc., NuView Owner's Manual. [Online].
[22] Point Grey Research Inc., Digiclops and Triclops: Stereo Vision SDK. [Online].
[23] R. Tsai, “A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom., vol. RA-3, no. 4, pp. 323–344, Aug. 1987.
[24] NHK STR Labs, Virtual System for TV Program Production. NHK Labs Note, No. 447.
[25] Y. Gong and M. Sakauchi, “Detection of regions matching specified chromatic features,” Comput. Vis. Image Understand., vol. 61, no. 2, pp. 263–269, 1995.
[26] J. Mulligan, N. Kelshikar, X. Zampoulis, and K. Daniilidis, “Stereo-based environment scanning for immersive telepresence,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 3, pp. 304–320, Mar. 2004.
[27] N. Kelshikar, X. Zabulis, J. Mulligan, K. Daniilidis, V. Sawant, S. Sinha, T. Sparks, S. Larsen, H. Towles, K. Mayer-Patel, H. Fuchs, J. Urbanic, K. Benninger, R. Reddy, and G. Huntoon, “Real-time terascale implementation of teleimmersion,” in Proc. Int. Conf. Computational Science (ICCS), Melbourne, Australia, 2003, pp. 33–42.
[28] O. Schreer and P. Kauff, “An immersive 3D video-conferencing system using shared virtual team user environments,” in Proc. ACM Collaborative Environments, Bonn, Germany, 2002, pp. 105–112.
[29] F. Isgro, E. Trucco, P. Kauff, and O. Schreer, “3D image processing in the future of immersive media,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 3, pp. 288–303, Mar. 2004.
[30] H. Baker, D. Tanguay, I. Sobel, D. Gelb, M. Goss, W. Culbertson, and T. Malzbender, “The Coliseum immersive teleconferencing system,” HP Labs, Tech. Rep. HPL-2002-351, Dec. 2002.
[31] R. Yang, M. Pollefeys, H. Yang, and G. Welch, “A unified approach to real-time, multi-resolution, multi-baseline 2D view synthesis and 3D depth estimation using commodity graphics hardware,” Int. J. Image Graph., vol. 4, no. 4, pp. 1–25, 2004.
[32] J. Gluckman and S. K. Nayar, “Rectified catadioptric stereo sensors,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Hilton Head Island, SC, 2000, pp. 380–387.
[33] T. Brodsky, C. Fermuller, and Y. Aloimonos, “Structure from motion: Beyond the epipolar constraint,” Int. J. Comput. Vis., vol. 37, no. 3, pp. 231–258, 2000.

[34] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis., vol. 47, no. 1, pp. 7–42, 2002.
[35] N. Apostoloff and A. Fitzgibbon, “Bayesian video matting using learnt image priors,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Washington, DC, 2004, pp. 407–414.
[36] C. Rother, V. Kolmogorov, and A. Blake, “GrabCut—Interactive foreground extraction using iterated graph cuts,” in Proc. ACM SIGGRAPH, Los Angeles, CA, 2004, pp. 309–314.
[37] M. Uyttendaele, A. Criminisi, S. Kang, S. Winder, R. Szeliski, and R. Hartley, “Image-based interactive exploration of real-world environments,” IEEE Comput. Graph. Appl., vol. 24, no. 3, pp. 52–63, May–Jun. 2003.
[38] R. Urtasun and P. Fua, “3D human body tracking using deterministic temporal motion models,” Computer Vision Lab, Swiss Federal Inst. Tech., Lausanne, Switzerland, Tech. Rep. IC/2004/03, 2004.
[39] R. Plaenkers and P. Fua, “Articulated soft objects for multiview shape and motion capture,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1182–1187, Sep. 2003.
[40] J. Starck, G. Collins, R. Smith, A. Hilton, and J. Illingworth, “Animated statues,” J. Mach. Vis. Appl., vol. 14, no. 4, pp. 248–259, 2003.
[41] W. Sun, A. Hilton, R. Smith, and J. Illingworth, “Layered animation of captured data,” Visual Comput., Int. J. Comput. Graph., vol. 17, no. 8, pp. 457–474, 2001.
[42] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2002, pp. 94–102.
[43] S. McKenna, S. Jabri, Z. Duric, A. Rosenfeld, and H. Wechsler, “Tracking groups of people,” Comput. Vis. Image Understand., vol. 80, no. 1, pp. 42–56, 2002.
[44] O. Schreer, I. Feldmann, U. Goelz, and P. Kauff, “Fast and robust shadow detection in videoconference applications,” in Proc. VIPromCom, 4th EURASIP IEEE Int. Symp. Video Processing and Multimedia Communications, Zadar, Croatia, 2002, pp. 371–375.
[45] M. Woo, J. Neider, and T. Davis, OpenGL Programming Guide, 3rd ed. Reading, MA: Addison-Wesley.
[46] V. Alessandro, Notes on Natural Interaction. [Online].
[47] Intel Corp., Intel Performance Primitive Library. [Online].
[48] C. Wu, C. Liu, H. Y. Shum, Y. Q. Xu, and Z. Zhang, “Automatic eyeglasses removal from face images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 3, pp. 322–336, Mar. 2004.
[49] C. Hand, “A survey of 3D interaction techniques,” Comput. Graph. Forum, vol. 16, no. 5, pp. 269–281, Dec. 1997.
[50] D. Bowman, J. Gabbard, and D. Hix, “A survey of usability evaluation in virtual environments: Classification and comparison of methods,” Presence, Teleoperators Virtual Environ., vol. 11, no. 4, pp. 404–424, 2002.
[51] D. Bowman, E. Kruijff, J. LaViola, and I. Poupyrev, 3D User Interfaces: Theory and Practice. Boston, MA: Addison-Wesley, 2004.
[52] N. Kim, A movie clip about the 3-D interaction using tactile feedback. [Online]. Available: http://home.postech.ac.kr/~ngkim/Studio/vsnatural.wmv

Namgyu Kim received the B.S. degree in computer science from the Korea Advanced Institute of Science and Technology (KAIST), Taejon, Korea, in 1995, and the M.S. degree in computer science and engineering from the Pohang University of Science and Technology (POSTECH), Pohang, Korea, in 1998, where he is currently pursuing the Ph.D. degree.

His research interests include the system and interaction design of virtual/mixed-reality environments.

Woontack Woo (S’94–M’00) received the B.S. degree in electronics engineering from Kyungbuk National University, Daegu, Korea, in 1989, the M.S. degree in electronics and electrical engineering from the Pohang University of Science and Technology (POSTECH), Pohang, Korea, in 1991, and the Ph.D. degree in electrical engineering systems from the University of Southern California (USC), Los Angeles, in 1998.

He joined Advanced Telecommunications Research (ATR), Kyoto, Japan, in 1999, as an Invited Researcher. Since February 2001, he has been with the Gwangju Institute of Science and Technology (GIST), Gwangju, Korea, where he is an Assistant Professor in the Department of Information and Communications. The main thrust of his research has been implementing holistic smart environments, including UbiComp/WearComp, virtual environments, human–computer interaction, three-dimensional (3-D) vision, 3-D visual communications, and intelligent signal processing.

Gerard J. Kim (S’92–M’05) received the B.S. degree in electrical and computer engineering from Carnegie Mellon University, Pittsburgh, PA, in 1987, and the M.S. degree in computer engineering in 1989 and the Ph.D. degree in computer science in 1994, both from the University of Southern California (USC), Los Angeles.

He worked as a Computer Scientist for two years at the National Institute of Standards and Technology (NIST) with support from the U.S. National Research Council Postdoctoral Fellowship. Since 1995, he has been an Associate Professor in Computer Science and Engineering at the Pohang University of Science and Technology (POSTECH), Pohang, Korea. His research interests include various topics in virtual reality (VR), such as 3-D interaction and innovative applications, computer music, and computer-aided design, including intelligent design and design reuse.

Chan-Mo Park received the B.S. degree in chemical engineering from Seoul National University, Seoul, Korea, and the M.S. and Ph.D. degrees from the University of Maryland, College Park. In 2001, he was awarded the Honorary Doctor of Letters degree by the University of Maryland University College, Adelphi.

Since 1969, he has been a Professor of Computer Science at the University of Maryland, College Park, the Korea Advanced Institute of Science and Technology (KAIST), Taejon, Korea, The Catholic University of America, Washington, DC, and the Pohang University of Science and Technology (POSTECH), Pohang, Korea. Currently, he is the President of POSTECH. His main research interests are digital image processing, computer graphics and computer vision, system simulations, and VR.
