
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS

An Insight-based Methodology for Evaluating Bioinformatics Visualizations

Purvi Saraiya, Chris North, and Karen Duca

Abstract—High-throughput experiments such as gene expression microarrays in the life sciences result in very large datasets. In response, a wide variety of visualization tools have been created to facilitate data analysis. A primary purpose of these tools is to provide biologically-relevant insight into the data. Typically, visualizations are evaluated in controlled studies that measure user performance on predetermined tasks. To evaluate and rank bioinformatics visualizations based on real-world data analysis scenarios, we need a more relevant evaluation method that focuses on data insight. This paper presents several characteristics of insight that enable us to recognize and quantify it. Ideally, users want maximum insight from the data in the least possible time. Based on this, we evaluate five popular microarray visualization tools on the amount and types of insight they provide and the time it takes to reach the insight. Though we use this technique to analyze bioinformatics visualizations, it can be applied in other domains.

Index Terms—H.5.2 User Interfaces: Evaluation/methodology, Graphical user interfaces (GUI); I.6.9 Visualization: Information visualization, Visualization systems and software, Visualization techniques and methodologies


1 INTRODUCTION

Biologists use high-throughput experiments such as microarrays to answer complex biological research questions. Experiments such as gene-expression microarrays [1], [2] result in very large datasets, usually containing information about several thousand genes. Scientists use these datasets to infer complex interactions between genes and proteins. But due to its magnitude, microarray data is prohibitively difficult to analyze without the help of computational methods.

The advent of high-throughput experiments is causing a shift in the way biologists do research, a shift away from simple reductionist testing on a few variables towards systems-level exploratory analysis of thousands of variables simultaneously [3]. Hence, they use various data visualizations to derive biologically relevant insights. The main purpose in using these visualizations is to gain insight into the extremely complex and dynamic functioning of living cells. In response to these needs, a large number of visualization tools targeted at this domain have been developed [4], [5], [6].

However, in collaborations with biologists, we received mixed feedback and reviews about these tools. First, with a wide variety of available tools, there is significant confusion among the biologists about which tool to use. Second, because of the open-ended and exploratory nature of the tasks, it is unclear how and if these tools meet their needs in providing insight.

The ultimate goal of the research reported in this paper is to evaluate some of the popular microarray data visualization tools, such as Spotfire® [7]. The key research questions are: How successful are these tools in assisting the biologists in arriving at domain-relevant insights? How do various visualization techniques affect users' perception of data? How does a user's background affect tool usage?

The immediate goal is to devise an evaluation methodology that better reflects the needs of the bioinformatics data analysis scenario. Typically, visualization evaluations have focused on controlled measurements of user performance and accuracy on predetermined tasks [8], [9]. However, answering these research questions requires an evaluation method that more closely matches the exploratory nature of the biologists' goals. The main consideration for any researcher is discovery. Arriving at an insight often sparks the critical breakthrough that leads to discovery: suddenly seeing something that previously passed unnoticed, or seeing something familiar in a new light. Ultimately, the real function of any visualization and analysis tool is to make it easier for an investigator to glean insight, whether from their own data or from external databanks. Thus, our primary focus is on insight. We devise and deploy a novel insight-based approach to visualization evaluation that can be generally applied in other data visualization domains.

2 RELATED WORK

A large number of studies have been conducted to measure effectiveness of visualizations using different evaluation methods.

Controlled experiments: Many studies have evaluated visualizations through rigorous controlled experiments [8], [9]. In these studies, typical independent variables control aspects of the tools, tasks, data, and participant classes. Dependent variables include accuracy and efficiency measures. Accuracy measures include precision, error rates, and numbers of correct and incorrect responses, whereas efficiency includes measures of time to complete predefined benchmark tasks. For example, [10] compares three different visualization systems on different tasks in terms of solution time and accuracy.

Usability testing: Usability tests typically evaluate visualizations to identify and solve user interface problems. Methods involve observing participants as they perform designated tasks using a 'think aloud' protocol, noting usability incidents that may suggest incorrect use of the interface, and comparing results against a predefined usability specification [11]. Refer to [12] for an example of a professional usability study of a visualization.

Metrics, Heuristics, and Models: Different from empirical evaluations are inspections of user interfaces by experts, such as with heuristics [13]. Examples of specific metrics for visualizations include expressiveness and effectiveness criteria [14], data density and data/ink [15], criteria for representation and interaction [16], high-level design guidelines [17], principles based on pre-attentive processing and perceptual independence [18], and rules for effectiveness of various visual properties [19]. Cognitive models, such as CAEVA [20], can be used to simulate visualization usage and thereby examine the low-level effects of various visualization techniques.

Longitudinal and Field Studies: A longitudinal study of information visualization adoption by data analysts is presented in [21]. Their work suggests advantages when visualizations are used as complementary products rather than stand-alone products. [22] examines users' long-term exploratory learning of new user interfaces, with 'eureka reports' to record learning events.

Thus, a range of evaluation methods has been used to measure effectiveness of visualizations [23]. In the literature, controlled experiments are the most prevalent for identifying and validating more effective visualizations. Unfortunately, these studies evaluate visualizations based only on a set of predefined tasks. Test subjects are instructed to use the visualizations to find answers to specific questions that are given by the test administrators. While this approach is useful, it is too narrow to evaluate the benefits of open-ended discovery as needed by biologists.

A primary purpose of visualization is to generate insight [24]. A measure of an effective visualization is its ability to generate unpredictable new insights that might not be the result of a preplanned benchmark task. Visualization can enable biologists to not only find answers but to also find questions, to identify new hypotheses. To evaluate this capability, visualizations could be measured in terms of insight generation. Hence, we developed an evaluation protocol that focuses on recognition and quantification of insights gained from actual exploratory use of visualizations [38]. This paper presents a detailed explanation and discussion of the methodology, as well as detailed results of applying the method to bioinformatics visualizations.

3 PILOT STUDY

The main challenge we faced in designing the experiment was precisely defining insight and how to measure it. The word 'insight' in ordinary usage is vague and can mean different things to different people. However, for the purpose of our study we needed this term to be quantifiable and meaningfully reproducible. To do this, we undertook an initial pilot study to observe how users recognized and categorized information obtained from microarray data using visualization tools. We used both GeneSpring® [25] and Spotfire® [7] to ascertain that these commercial tools were not too difficult to learn and could be used by novice as well as expert users.

As the pilot experiment was exploratory in nature, we presented no strict protocol as to how users ought to proceed. We recruited five subjects at our institute to participate. As our recruits had no prior experience using these particular tools, we reduced their initial learning time by offering a brief introduction to the tool they would use, along with a summary of the different visualization techniques provided by the tool. Users were encouraged to think aloud and report any findings they had about the dataset. Pilot participants were supplied two datasets to work with: a table of fake data containing information about just ten genes, and the Lupus dataset used in the final experiment (Section 4.1). We selected the smaller dataset for training as we believed this would help users become familiar with the visualization techniques faster. Once comfortable with using the visualization tool, users were instructed to move on to the Lupus data.

Due to the volume and rapidity of observations reported, we concluded that we needed to record any future sessions on videotape. We also discovered that the users grew weary analyzing the practice dataset, despite our telling them that it was just a learning aid. They tended to spend too much time on it and, by the time they began looking at actual data, they were already fatigued. We found that our test subjects could learn a visualization technique just as quickly from real data; hence, we decided to use just the real data for the final experiments. From the users' comments we recognized the following quantifiable characteristics of 'insight'.

3.1 Insight Characteristics

To measure insights gained from visualization, a rigorous definition and coding scheme is required. We recognized in the pilot that we could capture and characterize specific individual insights as they occurred in participants' visual data analysis process. This provided more detailed information about the insight capabilities of the tools than subjective measures from post-experiment surveys.

We define an insight as an individual observation about the data by the participant, a unit of discovery. These can be recognized in a think-aloud protocol. The following quantifiable characteristics of each insight can then be encoded for analysis. We applied this scheme in the main experiment. Although we present them here in the context of biological and microarray data, we believe that this scheme can be applied to other data domains as well. The characteristics of each insight are listed below; a sketch of how one coded insight occurrence might be recorded follows the list.

Fact: The actual finding about the data. We counted only distinct facts for each participant.

Time: The amount of time taken to reach the insight. Initial training time is not included.

Domain Value: The value, importance, or significance of the insight. Simple observations such as "Gene A is high in experiment B" are fairly trivial, whereas more global observations of a biological pattern, such as "deletion of the viral NS1 gene causes a major change in genes relating to cytokine expression", are more valuable. The domain value is coded on a scale of 1 to 5 by a biology expert familiar with the results of the data. In general, trivial observations earn 1-2 points, insights about a particular process earn an intermediate value of 3, and insights that confirm, deny, or create a hypothesis earn 4 or 5 points.

Hypotheses: Some insights lead users to identify a new biologically-relevant hypothesis and direction of research. These are most critical because they suggest in-depth data understanding, relationship to biology, and inference. They lead biologists toward 'continuing the feedback loop' of the experimental process, in which data analysis feeds back into the design of the next experimental iteration [26].

Breadth vs. Depth: Breadth insights present an overview of biological processes, but not much detail; e.g., "there is a general trend of increasing variation in the gene expression patterns". Depth insights are more focused and detailed; e.g., "gene A mirrors the up-down pattern of gene B, but is shifted in time". This also is coded by a domain expert.

Directed vs. Unexpected: Directed insights are those that answer a specific question that the user was searching for. Unexpected insights are additional exploratory or serendipitous discoveries that were not specifically being searched for. This distinction is recognized by asking participants to identify specific questions they want to explore about the dataset at the beginning of the trial.

Correctness: Some insights are incorrect observations that result from misinterpreting the visualization. This is coded by an expert biologist and a visualization expert together.

Category: Insights are grouped into four main categories: overview (overall distributions of gene expression), patterns (identification or comparison across data attributes), groups (identification or comparison of groups of genes), and details (focused information about specific genes). These common categories were identified from the pilot experiment results after the insights were collected.
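To make this coding scheme concrete, the following minimal sketch shows how a single coded insight occurrence might be recorded. The record format and field names are illustrative only; they are not part of the published protocol:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    """One coded insight occurrence from a think-aloud session.

    Field names are our own illustration; the paper defines the
    characteristics but not a concrete record format.
    """
    fact: str          # the distinct finding about the data
    time_min: float    # minutes into the session (training excluded)
    domain_value: int  # 1-5, coded by a domain expert
    hypothesis: bool   # did it lead to a new biological hypothesis?
    depth: str         # "breadth" or "depth"
    unexpected: bool   # serendipitous vs. directed at an initial question
    correct: bool      # coded by a biologist and a visualization expert
    category: str      # "overview", "patterns", "groups", or "details"

# Example: a trivial, directed, detail-level observation
i = Insight(
    fact="Gene A is high in experiment B",
    time_min=12.5, domain_value=1, hypothesis=False,
    depth="depth", unexpected=False, correct=True,
    category="details",
)
```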

4 EXPERIMENT DESIGN

The aim of the main study is to evaluate five popular bioinformatics visualization tools in terms of the insight that they provide to users. A 3x5 between-subjects design examines these two independent variables:

1. Microarray dataset, 3 treatments

2. Microarray visualization tool, 5 treatments

4.1 Microarray Datasets

To examine a range of data scenarios, we used data from three common types of microarray experiments. The datasets are all quantitative, multi-dimensional data. Values represent a gene's measured activity level (or gene expression) with respect to a control condition. Hence, higher (lower) values indicate an increased (decreased) gene activity level. Since our study is focused on the interactive visualization portion of data analysis, the datasets were preprocessed, normalized, pre-filtered, and converted to the required formats (as discussed in [27] and [28]) in advance. In general, the biologists' goal is to identify and understand the complex interactions among the genes and conditions, essentially to reverse engineer the genetic code. The following three datasets were used.

1) Time-series dataset: Users were given an unpublished dataset from Karen Duca's lab [29]. HEK293 cells, a human embryonic kidney cell line, were infected with the A/WSN/33 strain of influenza virus in vitro at an MOI of 5. At defined time points across the entire viral replication cycle in vitro, mRNA was extracted from infected and mock-infected cultures. The values in the columns were the log2 of the normalized ratios of experimental signal to control signal. The dataset used for analysis had 1060 rows (genes) over 5 time points. Two additional columns represent the gene name and standard ID.

Table 1: Time-series dataset used in the experiment

GeneName      GenBankId   1.5 Hr   4 Hr    6 Hr   8 Hr    12 Hr
aquaporin 4   AA001003    1.54     -0.21   1.49   -0.12   0.96
…             …           …        …       …      …       …
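The log2-ratio encoding described above can be sketched in a few lines. This is illustrative Python with made-up signal values, not the lab's actual preprocessing pipeline:

```python
import math

def log2_ratio(experimental_signal: float, control_signal: float) -> float:
    """Log2 of the normalized experimental-to-control signal ratio.

    Positive values indicate increased expression relative to control,
    negative values indicate decreased expression, and 0 means no change.
    """
    return math.log2(experimental_signal / control_signal)

# A two-fold increase maps to +1, a two-fold decrease to -1.
assert log2_ratio(200.0, 100.0) == 1.0
assert log2_ratio(50.0, 100.0) == -1.0
```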

2) Viral dataset: Part of a published dataset from Michael Katze's lab [30] was given to users. A549 cells, a human lung epithelial cell line, were infected with one of three influenza viruses in vitro: wild type A/PR/8/34; a recombinant strain of PR8 with NS1 partially deleted, called NS1 (1-126); and a recombinant strain derived from PR8 with the NS1 gene completely deleted, called delNS1. Other than in the NS1 gene, all three viruses are identical. At 8 hours post infection, mRNA was extracted from infected and mock-infected cultures. The dataset used for analysis had 3 columns (representing the 3 viral conditions) and 861 rows (genes). Two additional columns represent the gene name and standard ID.

Table 2: Viral dataset used in the experiment

Name    Description           wt PR8   NS1 (1-126)   delNS1
ADCY9   adenylate-cyclase-9   0.54     0.91          5.8
…       …                     …        …             …

3) Lupus dataset: Participants were presented a subset of published data from Timothy Behrens's lab [31]. In this study, after blood draw, peripheral blood mononuclear cells (PBMCs), comprising monocytes/macrophages, B and T lymphocytes, and NK cells, were isolated from control and Systemic Lupus Erythematosus (SLE) samples. mRNA was harvested for expression profiling using Affymetrix technology [32]. The column values represented expression values (average difference or AD) for each gene. Scaling was performed to allow comparison between chips. The dataset had 90 columns (consisting of gene expression from 48 SLE samples and 42 healthy control samples) and 170 rows (genes). Two additional columns represent the gene name and standard ID.

Table 3: Lupus dataset used in the experiment

Accession #   Gene          Ctrl 1   …   Ctrl 42   SLE 1    …   SLE 48
AB008775      Aquaporin 9   -63.7    …   100.1     4418.…   …   3433.2
…             …             …        …   …         …        …   …


4.2 Microarray Visualization Tools

For practical reasons, we limited this study to five microarray visualization tools. We chose the tools based on their popularity and availability, and attempted to select a set of tools that would span a broad range of analytical and visual capabilities and techniques. Cluster/Treeview (Clusterview) [33], TimeSearcher [34], and Hierarchical Clustering Explorer (HCE) [35] are free tools, while Spotfire® [7] and GeneSpring® [25] are commercial tools.

Clusterview (Figure 1) uses a heat-map visualization for both data overview and details. A compressed heat-map provides an overview of all values in the dataset in row-column format. Users can select a part of the overview to study in more detail. It is standard practice in bioinformatics to visually encode increased gene-expression values with a red brightness scale, decreased gene-expression values with a green brightness scale, and no change as black. As a slight variation, some tools use a continuous red-yellow-green scale with yellow in the no-change region.

Figure 1: Cluster/Treeview (Clusterview) [33]
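The standard red/black/green encoding can be sketched as a simple mapping from log ratio to color. The saturation threshold below is an arbitrary illustrative choice, not a value documented by any of the tools:

```python
def expression_color(log_ratio: float, saturation: float = 3.0):
    """Map a log2 expression ratio to the red/black/green scale.

    Increased expression maps to red brightness, decreased to green
    brightness, and no change to black. `saturation` is the ratio at
    which the color reaches full brightness (an illustrative choice).
    """
    intensity = min(abs(log_ratio) / saturation, 1.0)
    channel = int(round(255 * intensity))
    if log_ratio > 0:
        return (channel, 0, 0)   # shades of red: up-regulated
    elif log_ratio < 0:
        return (0, channel, 0)   # shades of green: down-regulated
    return (0, 0, 0)             # black: no change

print(expression_color(1.54))   # mid-brightness red
print(expression_color(-0.21))  # dim green
```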

TimeSearcher (Figure 2) uses a parallel-coordinate visualization for data overview. Line graphs and detailed information are also provided for each individual data entity. The views are tightly coupled using the concept of 'brushing and linking': selecting a gene in one view highlights it in all views. TimeSearcher provides dynamic query widgets directly in the parallel-coordinate overview to support interactive filtering based on user-specified time-series patterns.

Figure 2: TimeSearcher [34]
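The timebox-style filtering idea behind those dynamic query widgets can be sketched as a simple predicate over a gene's time series. This is an illustration of the concept, not TimeSearcher's actual code:

```python
def in_timebox(series, t_start, t_end, v_low, v_high):
    """Timebox query: True if every value of `series` (a dict mapping
    time point to expression value) within [t_start, t_end] also lies
    in [v_low, v_high]."""
    return all(v_low <= v <= v_high
               for t, v in series.items()
               if t_start <= t <= t_end)

# Find genes that stay near baseline between hours 4 and 8.
genes = {"aquaporin 4": {1.5: 1.54, 4: -0.21, 6: 1.49, 8: -0.12, 12: 0.96}}
hits = [name for name, s in genes.items() if in_timebox(s, 4, 8, -0.5, 0.5)]
print(hits)  # empty: aquaporin 4's 6-hour value (1.49) leaves the box
```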

HCE (Figure 3) provides several different visualizations: scatter plots, histograms, heat maps, and parallel-coordinate displays. HCE's primary display uses dendrogram visualizations to present hierarchical clustering results, placing similar data items near each other in the tree display. In a multidimensional dataset, the number of possible scatterplots is very large; HCE introduces the concept of 'rank by feature' [36] to allow users to quickly find interesting histograms and scatterplots. The visualizations are tightly coupled using the interactive concept of brushing and linking. Users can manipulate various properties of the visualizations and also zoom into areas of interest.

Figure 3: Hierarchical Clustering Explorer (HCE) [35]
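The idea behind 'rank by feature' can be sketched with one example criterion: ranking all scatterplots by absolute Pearson correlation. HCE supports several ranking criteria; this is only an illustration, and it requires Python 3.10+ for statistics.correlation:

```python
from itertools import combinations
from statistics import correlation  # Python 3.10+

def rank_scatterplots(columns):
    """Rank every pair of data columns by |Pearson correlation|,
    highest first, so the most strongly related scatterplots
    surface to the top of the list."""
    scored = []
    for a, b in combinations(columns, 2):
        r = correlation(columns[a], columns[b])
        scored.append((abs(r), a, b))
    return sorted(scored, reverse=True)

cols = {"1.5 Hr": [1.54, 0.30, -0.80],
        "4 Hr":   [-0.21, 0.10, 0.50],
        "6 Hr":   [1.49, 0.20, -0.70]}
for score, a, b in rank_scatterplots(cols):
    print(f"{a} vs {b}: |r| = {score:.2f}")
```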

Spotfire® (Figure 4) offers a wide range of visualizations: scatter plots, bar graphs, histograms, line charts, pie charts, parallel coordinates, heat maps, and spreadsheet views. Spotfire® presents clustering results in multiple views, placing each cluster in a separate parallel coordinate view. The visualizations are linked for brushing. Selecting data items in any view shows feedback in a common detail window. Users can zoom, pan, define data ranges, and customize visualizations. The fundamental interaction technique in Spotfire® is the dynamic query slider, which interactively filters data in all views.

Figure 4: Spotfire® [7]

GeneSpring® (Figure 5) provides the largest variety of visualizations for microarray data analysis: parallel coordinates, heat-maps, scatter plots, histograms, bar charts, block views, physical position on genomes, array layouts, pathways, ontologies, spreadsheet views, and gene-to-gene comparison. We could not use some of the visualizations, such as the physical position and array layout views, for this experiment due to lack of sufficient data. The visualizations are linked for brushing. Users can manipulate the visualizations in several ways, e.g., zooming and customizing visualizations by changing the color, range, etc. GeneSpring® also includes data clustering capabilities.

Figure 5: GeneSpring® [25]

Table 4: Visualization and interaction techniques supported by each tool (O+D = Overview+Detail; DQ = Dynamic Queries)

Tool                  Visual Representations                      Interactions
Cluster/Treeview      Heat-map, clustered heat-map                O+D
TimeSearcher          Parallel coordinates, line graph            Brushing, O+D, DQ
HCE                   Cluster dendrogram, parallel coordinates,   Brushing, Zooming,
                      heat-map, scatterplot, histogram            O+D, DQ
Spotfire® 7.2         Parallel coordinates, heat-map,             Brushing, Zooming,
Functional Genomics   scatterplots (2D/3D), histogram, bar/pie    O+D, DQ
                      chart, tree view, spreadsheet view,
                      clustered parallel coordinates
GeneSpring® 5.0       Parallel coordinates, heat-map,             Brushing, Zooming
                      scatterplots (2D/3D), histogram, bar
                      chart, block view, physical position
                      view, array layout view, pathway view,
                      spreadsheet view, gene-to-gene
                      comparison, clustered parallel
                      coordinates

4.3 Participants

Thirty test subjects volunteered from the university community. We allotted six users per tool, with two per dataset per tool. We required all users to have earned at least a Bachelor's degree in a biological field and to be familiar with microarray concepts. To prevent undue advantage and also to measure learning time, we assigned users to a tool that they had never used before. Based on their profiles, the users fit into one of the three categories summarized in Table 5.

Table 5: Participant background and number for each category

Category              Participant Background                                   N
Domain Expert         Senior researchers with extensive experience in          10
                      microarray experiments and microarray data analysis.
                      Possess a Ph.D. in a biological field.
Domain Novice         Lab technicians or graduate student research             11
                      assistants, having an M.S. or B.S. in a biological
                      field. Some experience with microarray data analysis.
Software Developers   Professionals who implement microarray software           9
                      tools. Have an M.S. in a biological field and also
                      an M.S. in computer science.

4.4 Protocol and Measures

To evaluate these tools in terms of their ability to generate insight, a new protocol and set of measures is used that combines elements of the controlled experiment and usability testing methodologies. This approach seeks to identify individual insight occurrences as well as the overall amount of learning while participants analyze data in an open-ended think-aloud format. Also, we decided to focus on new users of the tools with only minimal tool training. We have found that success in the initial usage period of a tool is critical for tool adoption by biologists.

Each user was assigned one dataset and one tool. Before starting their analysis, users were given a background description of the dataset. To reduce initial learning time, the users were given a 15-minute tutorial about the visualization and interaction techniques of the tool. Users then listed some analysis questions they would typically ask about such a dataset. Then, they were instructed to continue to examine the data with the tool until they felt that they would not gain any additional insight. The entire session was videotaped for later analysis. Users were allowed to ask the administrator about using the tool if they could not understand a feature. The training in this protocol was intended to simulate how biologists often learn to use new tools from their colleagues.

While they were working, users were asked to comment on their observations, inferences, and conclusions. Approximately every 10-15 minutes, users were asked to estimate how much of the total potential insight they felt they had obtained so far about the data, on a scale of 0-100%. When they felt they were finished, users were asked to assess their overall experience with the tool, including any difficulties or benefits.

Later, we analyzed the videotapes to identify and codify all individual occurrences of insights, as described in the next subsection. Table 6 summarizes the dependent variables.

Table 6: Dependent measures

1 User’s initial questions about the dataset

2 Total time spent with the tool

3 Amount learned (as a percentage), periodic and final

4 List of insights and characteristics

5 Visualization techniques used

6 Usability issues

7 Participant demographics

5 RESULTS

Results are presented in terms of users' data questions, insights, visualization usage, and user background.

5.1 Initial Questions

At the start of each session, users were requested to formulate questions about the data that they expected the visualization to answer (Table 7). Almost all the users wanted to know how the gene expression changed with each experimental condition and its statistical significance, to find different expression patterns, and to obtain pathway information and known literature for the genes of interest. More biologically specific questions focused on the location of genes of interest on chromosomes and pathways. They said that it would be valuable to know which pathways show correlations.

The users working with time series data had questions that focused more on time-related changes in gene expression. Most expert users were interested in finding a set of genes that responded earlier to a treatment and was later followed by other genes. Rather than analyzing information for individual patients, the Lupus dataset users were more interested in comparing the overall expression between control and lupus groups. Most novice users wanted to start by taking averages of both groups to see which genes changed the most from one group to another.

Table 7: All data questions asked by the participants, with the number of participants who asked each question.

Information participants wanted from the data                          Num.

Questions for Time series dataset
1   Change in overall expression with time                            10/10
2   Different patterns of expression                                  10/10
3   Genes that responded early to a treatment and were later
    followed by other genes                                            5/10
4   Functional details of genes showing high change                    2/10
5   Genes showing similar expression pattern to a specific
    gene of interest                                                   1/10
6   Relate change in gene expressions to physiological changes
    in the cells                                                       1/10
7   Pathway information for genes having similar expression
    patterns                                                           2/10
8   Relate gene expressions to their position on chromosomes           1/10
9   Retrieve known information for selected genes                     10/10

Questions for Viral dataset
10  Difference in overall expression for three viruses                10/10
11  Genes that show similar/different behavior to the
    experimental hypothesis                                            3/10
12  Expression patterns different from the hypothesis                  3/10
13  Genes having high or low expression for each viral strain         10/10
14  Different patterns of gene expression                             10/10
15  Pathway information for genes showing a particular
    expression pattern                                                 3/10
16  Correlations between different pathways                            3/10
17  Chromosomal location of genes that show similar change             3/10
18  Functional information of selected genes                           1/10
19  Statistical significance in overall changes between
    different viral strains                                            1/10

Questions for Lupus dataset
20  Difference in expression between 2 groups                         10/10
21  Statistical significance of difference between 2 groups            3/10
22  Different patterns of gene expression                             10/10
23  Relate expressions to severity of disease                          1/10
24  The range of gene expression for each group                        1/10
25  Statistical significance of variability of expression for
    genes in each group                                                4/10
26  In case of variability, if this is based on patients' age,
    sex, race, etc.                                                    1/10
27  Analyses such as list all genes that show more than 50%
    increase from control to lupus patients                            1/10
28  A list of housekeeping genes to evaluate experiment results        1/10
29  Patient characteristics such as those who used some drug
    vs. those who did not use any drug, males vs. females, etc.        1/10
30  Behavior of immune pathway genes                                   2/10
31  Calculate average expression for each group                        6/10

There were collectively 31 distinct questions for all the datasets. It was not possible to answer some of the questions during the experiment due to insufficient data. GeneSpring® (31/31) and Spotfire® (27/31) can potentially address most of the questions posed by the participants. Clusterview (11/31), TimeSearcher (14/31), and HCE (15/31) answer more specific questions.

5.2 Evaluation on Insight Characteristics

We list here measures for each characteristic of insight described earlier. Since this evaluation method is more qualitative and subjective than quantitative, and the number of participants is limited, a general comparison of tendencies in the results is most appropriate (Figure 6 and Table 6). However, we include some statistical analysis that provides useful indicators.

Facts: We counted the total number of facts, i.e., distinct insights about the data, for each participant. As shown in Figure 6, the count of insights was highest for Spotfire® and lowest for HCE.

Time: The following two temporal characteristics summarize the time to acquire insights:

Average Time to First Insight: The average time into the session, in minutes, of the first insight occurrence of each participant. Lower times suggest that users are able to get immersed in the data more quickly, and thus may indicate a faster tool learning time. The participants using Clusterview took a very short time to reach first insight. TimeSearcher and Spotfire® were also fairly quick to first insight, while HCE and GeneSpring® took twice as long on average. Clusterview users took significantly less time (p<0.01) to reach the first insight than the other users, while GeneSpring® users took significantly longer (p<0.01).

Average Total Time: The average total time each user spent using the tool until they felt they could gain no more insight. Lower times indicate a more efficient tool, or possibly that users gave up on the tool due to lack of further insight. In general, Clusterview users finished quickly while GeneSpring® users took twice as long.

Total Domain Value: The sum of the domain value of all the insight occurrences. Insight value was highest for Spotfire®. Participants using Spotfire® gained significantly more insight value than with GeneSpring® (p<0.05). Though the numeric value was lowest for HCE, there were no significant differences between HCE and Spotfire® or the other tools, due to high variance in the performance of HCE users, explained later.
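Given coded insight records such as the illustrative ones sketched in Section 3.1, these per-tool measures reduce to simple aggregation. The following is a minimal sketch; the function name and the shape of `sessions` are hypothetical:

```python
def summarize_sessions(sessions):
    """Aggregate per-tool insight measures from coded records.

    `sessions` maps tool name -> list of per-participant insight
    lists, where each insight carries `time_min` and `domain_value`
    fields like the illustrative Insight record sketched earlier.
    """
    summary = {}
    for tool, participants in sessions.items():
        # First-insight time per participant who produced any insight
        firsts = [min(i.time_min for i in p) for p in participants if p]
        summary[tool] = {
            "total_facts": sum(len(p) for p in participants),
            "avg_time_to_first_insight": sum(firsts) / len(firsts),
            "total_domain_value": sum(i.domain_value
                                      for p in participants for i in p),
        }
    return summary
```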

Hypotheses: Only a few insights led users to new biological hypotheses. These insights are most vital because they suggest future areas of research and result in real scientific contributions. For example, one user commented that parts of the time series data showed a regular cyclic behavior. He searched for genes that showed similar behavior at earlier time points, but could not find any. He offered several alternative explanations for this behavior related to immune system regulation, and said that it would compel him to perform follow-up experiments to attempt to isolate this interesting periodicity in the data. For the viral dataset, two users commented that there were two patterns of gene expression that showed negative correlation. They inquired whether this means that the transcription factors of these genes have inhibitory or stimulatory effects on each other. They said that they wanted more information about the functions and pathways these genes belong to, in order to relate all this biology to the data. Spotfire® resulted in one hypothesis for each dataset, thus a total of three. Clusterview also led users to a hypothesis for the Viral and Lupus datasets.

Figure 6: (caption fragment) …indicates significantly better/worse performance differences. Y-axis arrows indicate direction of better performance.

Breadth vs. Depth: Though we had initially thought this to be an interesting criterion, on data analysis we found that most user comments were of the 'breadth' type.

Directed vs. Unexpected Insights: The participants using HCE with the Viral dataset noticed several facts about the data that were completely unrelated to their initial list of questions. Clusterview provided a few unexpected insights from the Lupus dataset. TimeSearcher provided unexpected insights about the time series data, whereas Spotfire® had one each for the time series and Lupus datasets.

Incorrect Insights (Correctness): HCE proved very helpful to users working with the viral dataset. However, users working with the time series or Lupus datasets did not gain much insight from the data; when prompted to report their data findings, they stated some observations about the data that were incorrect. None of the other tools resulted in incorrect findings.

Table 6: Total number of unexpected insights, hypotheses generated, and incorrect insights from the insight occurrences for each tool

Visualization Tool   Unexpected Insights   Hypotheses Generated   Incorrect Insights
Clusterview          3                     2                      0
TimeSearcher         3                     1                      0
HCE                  5                     1                      2
Spotfire®            2                     3                      0
GeneSpring®          0                     0                      0

Together, higher total value and count indicate a more effective tool for providing useful insight. Lower time to first insight indicates a faster learning curve for a tool. Ideally, a visualization tool should provide the maximum amount of information in the shortest possible time.

Overall, Spotfire® resulted in the best general performance, with higher insight levels and a rapid insight pace. Clusterview and TimeSearcher appear to specialize in rapid insight generation, but to a limit. Using GeneSpring®, users could infer the overall behavior of the data and the patterns of gene expression. However, because the users found the tool complicated to use, most of them were overly consumed with learning the tool rather than analyzing the data, and they had difficulty getting beyond simple insights. HCE's strengths will become clear in the next two sections.

5.3 Insight per Dataset

Now we compare the tools within each dataset.

Time series data: In general, Spotfire® and TimeSearcher performed the best of the 5 tools on this dataset. Participants using Spotfire® and TimeSearcher felt they learned significantly more (p<0.05) from the time series data than with the other tools. Participants using Spotfire® felt they learned more from the data (73%) compared to TimeSearcher (53%). Both Spotfire® and TimeSearcher had nearly equivalent performance in terms of value and number of insights. Time to first insight was slightly lower for TimeSearcher (4 min) as compared to Spotfire® (6 min). At the bottom, participants using HCE took significantly longer (p<0.01) to reach the first insight than those using the other tools. Participants using GeneSpring® took significantly longer (p<0.05) than those using TimeSearcher and Clusterview.

Virus data: HCE proved to be the best tool for this dataset. Participants using HCE had better performance in terms of insight value as compared to the other users. However, there were no significant differences among the other users. HCE provided 5 unexpected insights that differed from the initial information users were searching for in this dataset.


Lupus data: Participants using Clusterview and Spotfire® gained more insight value than with the other tools (p<0.05) on this dataset.

5.4 Tools vs. Datasets

This section examines individual tools across the three datasets. TimeSearcher and HCE had interesting differences among the datasets (Figure 7), while the other tools were well rounded.

TimeSearcher: Participants using TimeSearcher performed best with the time series data as compared to the other two datasets. With time series data, they had over double the value and number of insights compared to the participants using the Viral and Lupus datasets.

HCE: In contrast, participants using HCE did best on the Viral dataset. On the Viral dataset, they had significantly better performance on insight value (p<0.01), number of insights (p<0.05), and time to first insight (p<0.05) as compared to the other datasets. They also felt they learned much more from the data. Participants using the Lupus data spent significantly less overall time with the tool (p<0.05), as they felt they could not learn much from the data using HCE.

Figure 7: TimeSearcher and HCE specialize in the Time series and Viral datasets respectively.

5.5 Insight Categories

Though a wide variety of insights was generated, most could be categorized into a few basic groups. Table 7 summarizes the number of each type of insight by tool.

Overall Gene Expression: These insights described and compared overall expression distributions for a particular experimental condition. For example, a user analyzing the time series data reported that "at time points 4 and 8 a lot of genes are up-regulated, but at time point 6 a lot are down-regulated". Several users analyzing the virus dataset commented that more genes showed a higher expression level for the delNS1 virus as compared to the wt virus, and that gene expression seemed to increase with the deletion. Most users working with the Lupus dataset reported that gene expression for SLE patients appeared higher than for the control group.

Expression Patterns: Most users considered the ability to search for patterns of gene expression very valuable. Most started by using the different clustering algorithms (e.g., K-means, SOMs, hierarchical clustering) provided by the tools to extract the primary patterns of expression, and then compared genes showing different patterns. For example, some users noted that while most genes showed a higher expression value for the Lupus group as compared to the Control group, there were other genes that were less expressed for the Lupus group. They thought it would be interesting to obtain more information about these genes in terms of their functions and the pathways they belong to.
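The pattern-extraction step most users started with can be sketched with an off-the-shelf hierarchical clustering routine. This sketch assumes SciPy and toy values; it is not the implementation used by any of the studied tools:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = genes, columns = conditions (toy log2-ratio values).
expr = np.array([[ 1.5, -0.2,  1.4],
                 [ 1.4, -0.1,  1.5],
                 [-1.2,  0.9, -1.1]])

# Average-linkage hierarchical clustering on correlation distance,
# the kind of clustering Clusterview popularized for microarray data.
Z = linkage(expr, method="average", metric="correlation")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 clusters
print(labels)  # the first two genes share a cluster; the third differs
```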

Grouping: Some users, mainly those working with Spotfire® and GeneSpring®, grouped genes based on some criteria. For example, a user working with Spotfire® wanted to know all genes expressed similarly to the gene HSP70. Users working with GeneSpring® used gene ontology categories to group genes. GeneSpring® provides different ways in which users can group their data, and they found this functionality very helpful. Also, most of the users were very pleased to learn that they could link biological information, such as gene functions, with the groups.

Detail Information: A few users wanted detailed information about particular genes that were familiar to them. For the time series data, a user noticed that about 5% of the genes high at 1.5 hr were also high at 12 hr and followed a regular cycle. He looked up the annotations for a few of these genes and tried to obtain more information about them to see if they could be responsible for the cyclic nature of the data.

Table 7: Total number of insights in each category

Tool           Overview   Patterns   Groups   Detail
Clusterview    9          10         0        2
TimeSearcher   10         8          0        3
HCE            6          5          0        1
Spotfire®      13         10         1        1
GeneSpring®    5          8          4        1

5.6 Learning Curves

During the course of the experiment, users were asked every 10-15 minutes after they began data analysis how much they felt they had learned about the data using the tool. The amount learned is a percentage of total potential insight, as perceived by users. In contrast to the other parameters reported earlier, this metric gauges users' belief about insight gained, and about how much the tool is or is not enabling them to discover. Figure 8 presents the average learning curves for all three datasets for each tool.

The findings reported earlier are further strengthened by the graphs in Figure 8. As shown, participants using the Lupus dataset felt they learned more using Clusterview as compared to the other participants, though they spent almost the same amount of time in the study. Participants using the time series dataset felt they learned more with TimeSearcher as compared to the others, and participants using the Viral dataset felt they learned the most using HCE; these users also spent more time in the study analyzing data as compared to the participants who worked with the other datasets. Participants using the Lupus dataset spent less time on average in the study for both HCE and TimeSearcher. On average, participants did not report much learning difference across the datasets for Spotfire®, though participants analyzing the time series dataset spent more time in the study as compared to the others. Participants using the Virus dataset felt they learned the most using GeneSpring®, whereas participants using the Lupus dataset felt they learned the least; also, the participants analyzing time series data spent the least amount of time in the study.

Figure 8: The average learning curves for each dataset for each tool, showing users' estimated insight percentage over time.
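Because the self-reports arrive at irregular 10-15 minute intervals, averaging them across participants requires resampling onto a common time grid. The paper does not state how the Figure 8 curves were produced, so the following is only one plausible sketch:

```python
import numpy as np

def average_learning_curve(reports, grid):
    """Average irregular (minute, percent-learned) self-reports over
    participants by linearly interpolating each participant's reports
    onto a common time grid. Beyond a participant's last report,
    np.interp holds the final value constant."""
    curves = [np.interp(grid, times, percents) for times, percents in reports]
    return np.mean(curves, axis=0)

grid = np.arange(0, 61, 5)                      # every 5 minutes
reports = [([0, 12, 25, 40], [0, 30, 55, 70]),  # participant 1
           ([0, 15, 30, 55], [0, 20, 45, 80])]  # participant 2
print(average_learning_curve(reports, grid))
```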

Average Final Amount Learned: Figure 9 shows the average of the participants' final stated amount learned for all the datasets for each tool.

Figure 9: (caption fragment) …indicate significantly better/worse differences. Y-axis arrow indicates direction of better performance.

5.7 Visual Representations and Interaction

Spotfire® users preferred the heat-map visual representation, whereas GeneSpring® users preferred the parallel coordinate view, despite the fact that both tools offer both representations. Most of these users performed the same analyses, but using different views.

Though there were no particular preferences of visualizations for particular datasets, we noticed that for the Lupus dataset, Spotfire® and Clusterview users preferred the heat-map visualization. The heat-map allowed them to group the Control and Lupus data neatly into two distinct groups, and they could easily infer patterns within and across both groups. Participants using these tools showed higher performance on this dataset using these visualizations. This finding is strengthened by the fact that TimeSearcher and GeneSpring® users, who used parallel coordinate visualizations to analyze the datasets, showed only average performance on this dataset.

We noticed that even though tools like Spotfire® and GeneSpring® provide a wide range of visualizations, only a few of these were used significantly during the study. Most users preferred visualizations showing the outputs of clustering algorithms, such as those provided by Clusterview, Spotfire®, and GeneSpring®. These enabled the users to easily see different patterns in the data. However, many said that it would be more helpful if the interaction capabilities of this representation were increased, e.g., to better enable comparison of the groups, subdividing, etc.

HCE's primary overview presents the data in a dendrogram heat-map that is re-ordered based on the results of hierarchical clustering algorithms: columns and samples with the most similar expression values are placed together. Thus, for both the Time series and Lupus datasets, where a particular column arrangement is useful for recognizing changes across the experimental conditions, HCE showed poorer performance. Users focused primarily on the clustering, and apparently did not consider the potential benefits of turning off that feature.


5.8 Participant Comments on Visualization Tools

At the end of each experiment, users were requested to summarize their experience with the tool they used. The following sections summarize users’ comments.

Clusterview: Users felt that the tool was extremely simple to use. Some users (3/6) required a brief explanation of the heat-map view of the data. The users felt that the information provided by Clusterview is very basic, and that they would need to perform additional analysis with other methods to get further information from the data. The users who worked with the time series data commented that the heat map was not a very efficient way to represent the data, and that they preferred visualizations similar to parallel coordinates.

TimeSearcher: Feedback on TimeSearcher varied across datasets. The users found the parallel-coordinate visualization provided by TimeSearcher easy to understand. Users working with the time series data found the tool very helpful; they were able to easily identify trends and patterns in the data. Users working with the Lupus dataset said that it was very difficult for them to see all 90 data points clearly. Some participants found a few features of TimeSearcher, such as 'Angular Queries' and 'Variable Time-Boxes', difficult to interpret. As TimeSearcher does not provide any clustering capabilities, users have to manually search for every pattern in the data using 'timeboxes', as shown in Figure 10, which can prove tedious on a large dataset.

Figure 10: In TimeSearcher, users search for individual patterns of expression using time-boxes.

HCE: Most users were impressed with HCE. The tool provides a wide variety of features for data analysis. HCE was most helpful to participants working with the viral dataset. The users working with the Lupus dataset gave up data analysis within 20 minutes, complaining that it was very difficult for them to analyze the data using HCE.

Spotfire®: Users working with Spotfire® were impressed with it. They did not require any special assistance to understand the tool, and said that most visualizations were easy to understand. Most users preferred the heat-map visualization of Spotfire® over its parallel coordinate or profile chart display (Figure 11). Though the users found the visualization displaying the different clusters in the data helpful, they said that it should be easier to interact with; they found it annoying that they could not select and focus on a particular cluster of interest.

Figure 11: The heat-map and parallel coordinate visualizations in Spotfire® [7].

GeneSpring®: Users felt that they would have to spend a long time learning GeneSpring®. A few users (2/6) spent an initial 45 minutes just trying to get familiar with GeneSpring®, after which they gave up the data analysis, saying that it would take them too long to comprehend what the tool does. A few users commented that it would be great to have some sort of automation that would show them with which visualization to begin the data analysis and how to change the visualization properties. One user said that the basic things should be easy, and that visualizing an already normalized dataset should not be so difficult. None of the users could change properties of the visualizations, such as color, scale, or the amount of data to be visualized, without help. Users were pleased to learn that GeneSpring® provided features to make lists of genes based on different criteria; they commented that such features could prove to be very helpful. Also, features that allow users to add pathway information to gene lists were considered very useful.

5.9 Participants’ Background

One might conjecture that users with more domain experience or software development experience would gain more insight from the data. Yet, we found that the insight value and total number of insights did not appear to depend on participant background. Averages were similar, and no significant difference between user categories was detected. However, software developers on average felt that they learned less from the data as compared to the others, whereas domain novices felt they learned more from the data. Novices also spent comparatively more time in the study. A noticeable difference was in the users' behavior during the experiment: novice users needed more prompting to make comments about the datasets, and they were less confident in reporting their findings.

6 DISCUSSION OF RESULTS

Commercial vs. Free: Both Spotfire® and Clusterview users gained equivalent insight from the Lupus dataset. However, participants using Spotfire® felt they learned much more from the data as compared to Clusterview. Analyzing data in multiple visual representations gave Spotfire® users more confidence that they did not miss any information, whereas Clusterview users were more skeptical about their progress, believing that they must be missing something. A simple visualization tool used on an appropriate dataset can have performance comparable to more comprehensive software containing many different visualizations and features.

Free research software like TimeSearcher and HCE tends to address a smaller set of closely related tasks. Hence, these tools provide excellent insight on certain datasets. Also, since they are focused on specific tasks, they have simpler user interfaces that emphasize a certain interaction model. This reduces the learning time and enables users to generate insights quickly. Spotfire®, despite having a large feature set, has a learning time almost equivalent to the simple tools, which is commendable. This is likely due to Spotfire®'s unified interaction model: the brushing and dynamic query concepts were quickly learned by users, and resulted in early rapid insight generation.

Domain Relevance: A serious shortcoming of the tools is that they do not adequately link the data to biological meaning. The fact that domain experts performed on par with domain novices, and the small number of hypotheses generated, indicate that the tools did not leverage domain expertise well. Before we conducted the study, we believed that users with more expertise in biology would gain more from visualizations than a beginner. We were also curious about whether software development experience would lead to better usage of the tools. However, these background differences did not reveal themselves in the actual insights generated. The difference was only in the users' believed insight, in which novices were overconfident and developers were skeptical.

If the tools could provide a more information-rich environment, such as linking data directly to public gene databases or literature sources, expert biologists could better exploit their domain knowledge to construct higher-level, biologically relevant hypotheses. In this experiment, the tools helped users identify patterns in the data, but did not enable them to connect these numerical patterns to the underlying biological phenomena. A critical need is for highly integrated visualization environments that excel at domain relevance and inference. In this case, understanding gene expression patterns must lead to inference of the underlying pathways that model the interactions of the genes (Figure 12). Visualization must support this level of inference.

Figure 12: Visualizations must support domain-relevant inference, from microarray dataset to pathway models describing interactions within a cell [37].

Interaction Design: The design of interaction mechanisms in visualization is critically important; usability can outweigh the choice of visual representation. Spotfire® users mainly focused on the heat-map representation, while GeneSpring® users focused on the parallel coordinates, even though both tools support both representations. The primary reason for this, based on comments from users, was that users preferred parallel coordinates but Spotfire®'s parallel coordinates view employs a poorly designed selection mechanism: selecting lines in its parallel coordinates view results in unusual, occluding visual highlight feedback that made it very difficult for users to determine which genes were selected and which other genes were nearby.

The ability to select and group genes was the most common interaction that users performed. The grouping of genes into semantic groups is a fundamental need in bioinformatics visualization tools. GeneSpring® provided useful grouping features that enabled more insights in the 'groups' category. More tools need better support for grouping items, based on interactive selections as well as computational clustering, and for managing groups.

GeneSpring® is the most feature-rich tool of the five, and therefore perhaps the most difficult to learn. However, even though users tended to focus on a small number of basic visualization features, usability issues (such as the higher number of clicks required to accomplish tasks) reduced their overall insight performance.

User Preferences: Certain visualizations, such as the clustering visualizations of both Spotfire® and GeneSpring®, were the most widely used in the study. The users commented that it would be very helpful if the interaction techniques for these were improved, so that they were better integrated into the overall interaction model.

Figure 13: Clustering visualizations were the most widely used in the study. GeneSpring® left, Spotfire® right.

Clustering was a very useful feature throughout, but care should be taken to provide non-clustered overviews first. As in HCE, clustering can potentially bias users into a particular line of thought too quickly. Comparing Spotfire® and Clusterview, users were also more confident when they could confirm their findings between the clustered and non-clustered views of Spotfire®.

User Motivation: We noticed that an important factor in gaining insight is user motivation. Clearly, participants in our study did not analyze the data with as much care as they would if the data were from their own experiments. They mainly focused on discovering the overall effects in the data, but were not sufficiently motivated to pursue extreme details. Most of the insights generated were classified as breadth rather than depth. However, the visualizations were able to provide a sizeable number of breadth insights in spite of the low motivation levels.

7 DISCUSSION OF METHODOLOGY

This study takes as its major premise the belief that insight can be measured. We defined insight based on our observations of scientists doing visual data analysis. We recognized, in their comments and actions, characteristics that revealed insight. This measurement process also enables recognition of qualitative aspects of user behavior.

The main purpose of visualization is to provide insight, which can be difficult to measure. Although our definition of insight is not comprehensive, it does provide an approximation of users' learning. This, in turn, enabled us as evaluators to gain insight into the effectiveness of these visualization tools. The definition of insight and the methodology presented are domain independent and can be applied to similar data analysis scenarios in other domains. The technique evaluates users' findings from the data: more numerous, more valuable, faster, and deeper data findings correspond to more effective visualizations, since they suggest that users can gain more insight from the data.

The methodology succeeded in measuring open-ended insight generation by not restricting users to a set of pre-planned benchmark tasks. This approach closely matches the purpose of visualization: to discover unforeseen insights rather than to perform routine tasks. This provided a good analysis of the insight capabilities of these visualization tools. However, the method does not replace the need for controlled experimentation, which is still useful for detailed testing of specific targeted tasks.

This new methodology has shown promise, but some difficulties remain to be overcome:

• Labor intensive: it is time consuming for the experimenters to capture and code insights.
• Requires a domain expert: the available population of experts capable of coding the value of insights in the bioinformatics domain is not large, and this coder must also be removed from the subject pool.
• Requires motivated subjects: since benchmark tasks are not given, subjects must motivate themselves to accomplish anything.
• Training and trial time: longer time periods would better reflect more realistic visualization usage.

The study reported here measures insight from short-term usage, typically under 2 hours per user. In real-world scenarios, users spend days, weeks, and even months analyzing data. Moreover, the participants in the study were unfamiliar with the data; the only background knowledge they had was what we provided during the course of the study. With so little context it is very difficult to appreciate the biological relevance of the microarray data being analyzed, so the hypotheses they reported were more speculative. Yet they were not trivial. This suggests that the visualizations provoke users to think deeply about the data and to apply the insight in their domain. Once a user is familiarized with a visualization, the way it is used may change. Furthermore, long-term insight may be very different from short-term insight: it could be the broader understanding that guides biologists through multiple cycles of microarray experiments.
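Building on the hypothetical Insight records sketched earlier, a longitudinal extension could compare tools by their cumulative insight-value curves over a session (or over weeks) rather than by a single end-of-session total. The helper below is a minimal illustration of that metric.

```python
def cumulative_curve(insights, tool):
    """Return (elapsed_minutes, accumulated_value) points for one tool."""
    obs = sorted((i for i in insights if i.tool == tool), key=lambda i: i.minute)
    total, curve = 0, []
    for i in obs:
        total += i.value
        curve.append((i.minute, total))
    return curve
```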

We now recognize that it would be very valuable to conduct a longitudinal study that records each and every finding of the users over a longer period of time, to see how the visualization tools influence and adapt to their knowledge acquisition. These studies should be conducted with researchers analyzing their own experimental results for the first time, and preferably through multiple experimental cycles. This could be done using long-term ethnographic methods or subjects' self-reporting; [21] and [22] present such longitudinal studies, which included frequent user interviews, diary studies, and 'Eureka' reports. Such studies can help to identify broader information needs and to develop more meaningful tools that leverage users' domain knowledge and expertise.

8 CONCLUSIONS

This study suggests the following major conclusions for life scientists, visualization designers, and evaluators.

Biologists: A visualization tool clearly influences the interpretation of the data and the insight gained. Hence, it is imperative that the appropriate tool be chosen for a given dataset. We sought to answer the question of which tool is best to use. Some tools work more effectively with certain types of data: TimeSearcher and HCE performed better with the time-series and viral datasets, respectively, but provided below-average results on the others. Thus, the dataset dictates which tool is best to use. Additionally, larger software packages like Spotfire® and GeneSpring® work consistently across different datasets. If a researcher needs to work with multiple kinds of data, software like Spotfire® and GeneSpring® would be better; but if a researcher needs to work with just one kind of data, more focused tools can provide better results in a much faster time frame. Spotfire® proved to be an excellent all-around tool for rapid insight generation.

Visualization Designers: Interaction techniques play a key role in determining visualization effectiveness. Designers should emphasize consistent, usable interaction design models with clear visual feedback. Grouping and clustering support is a must. It would be helpful to identify which visualization technique in a given software package is used the most and improve it. It is imperative that users are able to access and link biological information to their data. Visualizations should strive to support higher-level, domain-relevant inference.

Evaluators: The main purpose of visualization is to provide insight, which can be difficult to measure with controlled experiments or other methods. Our insight definition allowed us to quantify insight generation using a variety of insight characteristics, which enabled us to gauge the open-ended insight capability of bioinformatics visualization tools. This methodology can prove helpful for future studies analyzing the effectiveness of visualizations in many domains.


ACKNOWLEDGEMENTS

We would like to thank all participants without whom the study would not have been possible. We also thank Hideo Goto, Yoshi Kawaoka, and John Yin for assistance related to mRNA extraction, as well as Jeff Furlong and Emily Baechler for assistance with data sets. We also extend our thanks to Lee LeCuyer and Edward Fulton for their help with the study.

REFERENCES

[1] D. Duggan, B. Bittner, Y. Chen, P. Meltzer, and J. Trent, "Expression profiling using cDNA microarrays", Nature Genetics, vol. 21, pp. 11-19, Jan. 1999.

[2] L. Shi, "DNA Microarray – Genome Chip", http://www.gene-chips.com/GeneChips.html#What, 2002.
[3] D. Bassett, M. Eisen, and M. Boguski, "Gene expression informatics – it's all in your mine", Nature Genetics Supplement, vol. 21, Jan. 1999.

[4] N. Bolshakova, "Microarray Software Catalogue", http://www.cs.tcd.ie/Nadia.Bolshakova/softwaretotal.html, 2004.

[5] Y. Leung, "Functional Genomics", http://ihome.cuhk.edu.hk/%7Eb400559/, 2004.
[6] A. Robinson, "Bioinformatics Visualization", http://industry.ebi.ac.uk/~alan/, 2002.
[7] Spotfire® DecisionSite® for Functional Genomics, http://www.spotfire.com.

[8] C. Chen and M. Czerwinski, "Empirical evaluation of information visualizations: an introduction", Int. J. Human-Computer Studies, vol. 53, pp. 631-635, 2000.
[9] C. Chen and Y. Yu, "Empirical studies of information visualization: a meta-analysis", Int. J. Human-Computer Studies, vol. 53, pp. 851-866, 2000.

[10] A. Kobsa, "An Empirical Comparison of Three Commercial Information Visualization Systems", Proc. InfoVis 2001, pp. 123-130, 2001.
[11] H. Hartson and D. Hix, Developing User Interfaces: Ensuring Usability Through Product and Process, John Wiley, 1993.

[12] G. Rao and D. Mingay, "Report on usability testing of Census Bureau's Dynamaps CD-ROM product", http://infovis.cs.vt.edu/cs5764/papers/dynamapsUsability.pdf, 2001.

[13] J. Nielsen, "Finding usability problems through heuristic evaluation", Proc. ACM CHI'92, pp. 373-380, 1992.
[14] J. D. Mackinlay, "Automating the design of graphical presentations of relational information", ACM Transactions on Graphics, vol. 5, pp. 110-141, 1986.
[15] E. Tufte, The Visual Display of Quantitative Information, 1983.
[16] C. Freitas, P. Luzzardi, R. Cava, M. Pimenta, A. Winckler, and L. Nedel, "Evaluating Usability of Information Visualization Techniques", Proc. Advanced Visual Interfaces (AVI'02), poster, pp. 373-374, 2002.
[17] B. Shneiderman, "The eyes have it: a task by data type taxonomy", Proc. IEEE Symp. on Visual Languages '96, pp. 336-343, 1996.
[18] C. Ware, Information Visualization: Perception for Design, Morgan Kaufmann, 2004.
[19] W. S. Cleveland, The Elements of Graphing Data, Wadsworth Advanced Books and Software, Monterey, California, 1980.
[20] O. Juarez, "CAEVA: Cognitive Architecture to Evaluate Visualization Applications", Proc. Intl. Conf. on Information Visualization (IV'03), pp. 589-595, 2003.

[21] V. Gonzales and A. Kobsa, "A workplace study of the adoption of information visualization systems", Proc. I-KNOW'03: 3rd International Conference on Knowledge Management, Graz, Austria, pp. 92-102, 2003.
[22] J. Rieman, "A field study of exploratory learning strategies", ACM Transactions on Computer-Human Interaction, vol. 3, pp. 189-218, 1996.

[23] C. Plaisant, "The Challenge of Information Visualization Evaluation", Proc. Advanced Visual Interfaces (AVI'04), 2004.
[24] R. Spence, Information Visualization, Addison-Wesley, 2001.

[25] GeneSpring®, Cutting-edge tools for expression analysis, http://www.silicongenetics.com.
[26] L. Heath and N. Ramakrishnan, "The Emerging Landscape of Bioinformatics Software Systems", IEEE Computer, vol. 35, no. 7, pp. 41-45, 2002.

[27] G. Churchill, "Fundamentals of experimental design for cDNA microarrays", Nature Genetics, vol. 32, pp. 490-495, 2002.
[28] J. Quackenbush, "Microarray data normalization and transformation", Nature Genetics, vol. 32, pp. 496-501, 2002.

[29] K. A. Duca, H. Goto, Y. Kawaoka, and J. Yin, "Time-Resolved mRNA Profiling During Influenza Infection: Extracting Information from a Challenging Experimental System", American Society for Virology, 20th Annual Meeting, Madison, WI. Data website: http://infovis.cs.vt.edu/cs5764/fall2003/ideas/influenza.doc, 2001.

[30] G. Geiss, M. Salvatore, T. Tumpey, V. Carter, X. Wang, C. Basler, J. Taubenberger, R. Bumgarner, P. Palese, M. Katze, and A. Garcia-Sastre, "Cellular transcriptional profiling in influenza A virus-infected lung epithelial cells: The role of the nonstructural NS1 protein in the evasion of the host innate defense and its potential contribution to pandemic influenza", PNAS, vol. 99, no. 16, pp. 10736-10741, 2002.

[31] E. Baechler, F. Batliwalla, G. Karypis, P. Gaffney, W. Ortmann, K. Espe, K. Shark, W. Grande, K. Hughes, K. Kapur, P. Gregersen, and T. Behrens, "Interferon-inducible gene expression signature in peripheral blood cells of patients with severe SLE", PNAS, vol. 100, no. 5, pp. 2610-2615, 2003.

[32] Affymetrix Technologies, http://www.affymetrix.com.

[33] M. Eisen, P. Spellman, P. Brown, and D. Botstein, "Cluster analysis and display of genome-wide expression patterns", PNAS, vol. 95, no. 25, pp. 14863-14868, 1998.

[34] H. Hochheiser, E. H. Baehrecke, S. M. Mount, and B. Shneiderman, "Dynamic Querying for Pattern Identification in Microarray and Genomic Data", Proc. IEEE International Conference on Multimedia and Expo, 2003.

[35] J. Seo and B. Shneiderman, "Interactively Exploring Hierarchical Clustering Results", IEEE Computer, vol. 35, pp. 80-86, 2002.

[36] J. Seo and B. Shneiderman, "A Rank-by-Feature Framework for Unsupervised Multidimensional Data Exploration Using Low Dimensional Projections", Proc. InfoVis 2004.

[37] BioCarta®, Charting Pathways of Life, http://www.biocarta.com/genes/index.asp.

[38] P. Saraiya, C. North, and K. Duca, "An Evaluation of Microarray Visualization Tools for Biological Insight", Proc. InfoVis 2004.

Purvi Saraiya received the B.E. degree in Computer Engineering from L.D. College of Engineering, Ahmedabad, India in 2000. She received the MS degree in Computer Science from Virginia Tech in 2002. She is a Ph.D. candidate in Computer Science at Virginia Tech. Her research interests are design and evaluation of information visualization software, human-computer interaction, and user interface software.

Chris North

Karen Duca
