

Modeling and Storing Context-Aware Preferences

Kostas Stefanidis, Evaggelia Pitoura, and Panos Vassiliadis

Department of Computer Science, University of Ioannina, Greece

{kstef,pitoura,pvassil}@cs.uoi.gr

Abstract. Today, the overwhelming volume of information that is available to an increasingly wider spectrum of users creates the need for personalization. In this paper, we consider a database system that supports context-aware preference queries, that is, preference queries whose result depends on the context at the time of their submission. We use data cubes to store the associations between context-dependent preferences and database relations, and OLAP techniques for processing context-aware queries, thus allowing the manipulation of the captured context data at different levels of abstraction. To improve query performance, we use an auxiliary data structure, called context tree, which indexes the results of previously computed preference-aware queries based on their associated context. We show how these cached results can be used to process both exact and approximate context-aware preference queries.

1 Introduction

The increased amount of available information creates the need for personalized information processing [1]. Instead of overwhelming the user with all available data, a personalized query returns only the information relevant to the user. In general, to achieve personalization, users express their preferences on specific pieces of data either explicitly or implicitly. The results of their queries are then ranked based on these preferences. However, users may often have different preferences under different circumstances. For instance, a user visiting Athens may prefer to visit Acropolis on a nice sunny summer day and the archaeological museum on a cold and rainy winter afternoon. In other words, the results of a preference query may depend on context.

Context is a general term used to capture any information that can be used to characterize the situation of an entity [2]. Common types of context include the computing context (e.g., network connectivity, nearby resources), the user context (e.g., profile, location), the physical context (e.g., noise levels, temperature), and time [3]. A context-aware system is a system that uses context to provide relevant information and/or services to its users. In this paper, we consider a context-aware preference database system that supports preference queries whose results depend on context. In particular, users express their preferences on specific attributes of a relation. Such preferences depend on context, that is, they may have different values depending on context.

We model context as a finite set of special-purpose attributes, called context parameters. Users express their preferences on specific database instances

based on a single context parameter. Such basic preferences, i.e., preferences associating database relations with a single context parameter, are combined to compute aggregate preferences that involve more than one context parameter. Context parameters may take values from hierarchical domains; thus, different levels of abstraction for the captured context data are introduced. For instance, this allows us to represent preferences along the location context parameter at different levels of detail, for example, by grouping together preferences for all cities of a specific country. Basic preferences are stored in data cubes, following the OLAP paradigm.

Although aggregate preferences are not explicitly stored, we cache the results of previously computed preference queries using a data structure called the context tree. The context tree indexes the results of queries based on their associated context. The cached results are re-used to speed up the processing of queries that refer to the exact context of a previously computed query, as well as of queries whose context is similar enough to those of some previously computed ones. We provide initial experimental results that characterize the quality of the approximation attained by using preferences computed at similar context states.

In summary, in this paper, we make the following contributions:

– We provide a logical model for context-aware preferences that is based on a multidimensional model of context.

– We propose storing the results of previously computed preference queries using a data structure, the context tree, that indexes these results based on the values of the context parameters.

– We show how cached results can be used to compute both exact and approximate context-aware preference queries.

2 A Logical Model for Context and Preferences

2.1 Reference Example

Consider a database schema with information about points of interest and users (Fig. 1). The points of interest may, for example, be museums, monuments, archaeological places, or zoos. We consider three context parameters as relevant to this application: location, temperature, and accompanying_people, where the accompanying people might be friends, family, or none. For example, a zoo may be a better place to visit than a brewery in the context of family.

Points_of_Interest(pid, ..., cost)

User(uid, ...)

2.2 Context Parameters

In our running example, the set of context parameters is {location, temperature, accompanying_people}. As usual, a domain is an infinitely countable set of values. A context state corresponds to assigning to each context parameter a value from its domain. For instance, a context state may be: CS(current) = {Plaka, warm, friends}. The result of a context-aware preference query depends on the context state of its execution. It is possible for some context parameters to participate in an associated hierarchy of levels of aggregated data, i.e., they can be viewed at different levels of detail. Formally, an attribute hierarchy is a lattice of attributes, called levels for the purposes of the hierarchy, L = (L1, ..., Ln, ALL). We require that the upper bound of the lattice is always the level ALL, so that we can group all values into the single value 'all'. The lower bound of the lattice is called the detailed level of the parameter. In our running example, we consider location to be such an attribute, as shown in Fig. 2 (left). Levels of location are Region, City, Country, and ALL. Region is the most detailed level, while level ALL is the most coarse level.

2.3 Contextual Preferences

In this section, we define how context affects the results of a query. Each user expresses her preference for an item in a specific context by providing a numeric score between 0 and 1. This score expresses a degree of interest: value 1 indicates extreme interest, while value 0 indicates no interest. We distinguish preferences into basic ones (involving a single context parameter) and aggregate ones (involving a combination of context parameters).

Basic Preferences. Each basic preference is described by (a) a value of a context parameter c_i ∈ dom(C_i), 1 ≤ i ≤ n, (b) a set of values of non-context parameters a_i ∈ dom(A_i), and (c) a degree of interest, i.e., a real number between 0 and 1. So, for a context parameter c_i, we have:

preference_basic_i(c_i, a_{k+1}, ..., a_m) = interest_score

In our running example, where the context parameters are location, temperature, and accompanying_people, the set of non-context parameters are attributes about points of interest and users that are stored in the database. For example, assume user Mary and the point-of-interest Acropolis. When Mary is in the Plaka area, she likes to visit Acropolis and gives it score 0.8. Similarly, she prefers to visit Acropolis when the weather is warm and gives Acropolis score 0.9. Finally, if she is with friends, Mary gives Acropolis score 0.6. So, the basic preferences for Acropolis and Mary are:

preference_basic_1(Plaka, Acropolis, Mary) = 0.8,
preference_basic_2(warm, Acropolis, Mary) = 0.9,
preference_basic_3(friends, Acropolis, Mary) = 0.6.

For context values not appearing explicitly in a basic preference, we consider a default interest score of 0.5.
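The basic preferences above, together with the default score of 0.5 for unspecified context values, can be sketched as a simple lookup structure. This is a minimal illustration, not the paper's implementation; the class and method names are ours.

```python
# A minimal sketch of a basic-preference store with a default score of
# 0.5 for context values that do not appear explicitly.

DEFAULT_SCORE = 0.5  # default interest for unspecified context values

class BasicPreferences:
    def __init__(self):
        # (context_value, item, user) -> interest score in [0, 1]
        self._scores = {}

    def set(self, context_value, item, user, score):
        assert 0.0 <= score <= 1.0
        self._scores[(context_value, item, user)] = score

    def score(self, context_value, item, user):
        # Fall back to the default 0.5 for unspecified context values.
        return self._scores.get((context_value, item, user), DEFAULT_SCORE)

# Mary's basic preferences for Acropolis from the running example:
prefs = BasicPreferences()
prefs.set("Plaka", "Acropolis", "Mary", 0.8)
prefs.set("warm", "Acropolis", "Mary", 0.9)
prefs.set("friends", "Acropolis", "Mary", 0.6)

print(prefs.score("Plaka", "Acropolis", "Mary"))   # 0.8
print(prefs.score("cold", "Acropolis", "Mary"))    # 0.5 (default)
```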

Aggregate Preferences. Each aggregate preference is derived from a combination of basic ones. An aggregate preference involves (a) a set of n values x_i, one for each context parameter C_i, where either x_i = c_i for some value c_i ∈ dom(C_i) or x_i = *, which means that the value of the context parameter C_i is irrelevant, i.e., the corresponding context parameter should not affect the aggregate preference, and (b) a set of values of non-context parameters a_i ∈ dom(A_i), and has a degree of interest:

preference(x_1, ..., x_n, a_{k+1}, ..., a_m) = interest_score

For instance, if the weights Mary assigns to location, temperature, and accompanying_people are 0.6, 0.3, and 0.1 respectively, preference(Plaka, warm, friends, Acropolis, Mary) gets score 0.6 × 0.8 + 0.3 × 0.9 + 0.1 × 0.6 = 0.81.

We describe next two approaches for computing the aggregate scores when the value of some parameters in the preference is '*'. The first one assumes a score of 0.5 for those context parameters whose value in the preference is '*'. Then, the interest score is:

score = Σ_{i=1}^{n} w_i × y_i

where y_i = preference_basic_i(x_i, a_{k+1}, ..., a_m) if x_i = c_i, and y_i = 0.5 if x_i = *.

The other approach includes in the computation only the interest scores of those context parameters whose values are specified in the preference and ignores those specified as irrelevant. In this case, the interest score is:

score = Σ_{i=1}^{k} w'_i × y_i

where each w'_i is the weight w_i normalized over the k context parameters whose values are specified, i.e., are not irrelevant. In the following, we assume that the second approach is used.
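The two aggregation approaches can be sketched as follows. This is an illustrative sketch under the assumption that the weights sum to 1; the function names are ours, and '*' is represented by None.

```python
# Two approaches for aggregating basic scores when some parameters are '*'.

def aggregate_fill_default(weights, scores):
    """Approach 1: '*' parameters (scores[i] is None) count as 0.5."""
    return sum(w * (0.5 if s is None else s) for w, s in zip(weights, scores))

def aggregate_ignore_missing(weights, scores):
    """Approach 2: ignore '*' parameters and renormalize the weights."""
    pairs = [(w, s) for w, s in zip(weights, scores) if s is not None]
    total_w = sum(w for w, _ in pairs)
    return sum((w / total_w) * s for w, s in pairs)

# Mary's example: weights 0.6 (location), 0.3 (temperature), 0.1 (people)
weights = [0.6, 0.3, 0.1]
scores = [0.8, 0.9, 0.6]                # Plaka, warm, friends
print(aggregate_ignore_missing(weights, scores))   # 0.81

# With temperature left as '*':
partial = [0.8, None, 0.6]
print(aggregate_fill_default(weights, partial))    # 0.6*0.8 + 0.3*0.5 + 0.1*0.6 = 0.69
print(aggregate_ignore_missing(weights, partial))  # (0.6*0.8 + 0.1*0.6) / 0.7 ≈ 0.771
```

Note that with no '*' values the two approaches coincide, since the weights are already normalized.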

To facilitate the procedure of expressing interests, the system may provide sets of pre-specified profiles with specific context-dependent preference values for the non-context parameters, as well as default weights for computing the aggregate scores. In this case, instead of explicitly specifying basic and aggregate preferences for the non-context parameters, users may just select the profile that best matches their interests from the set of available ones. By doing so, the user adopts the preferences specified by the selected profile.

2.4 Preferences for Hierarchical Context Parameters

When the context parameter of a basic preference participates in different levels of a hierarchy, users may express their preference at any level, as well as at more than one level. For example, Mary can denote that the monument of Acropolis has interest score 0.8 when she is at Plaka and 0.6 when she is in Athens.

For a parameter L, let L1, L2, ..., Ln, ALL be the different levels of the hierarchy. There is a hierarchy tree for each combination of non-context parameters. In our reference example, there is a hierarchy tree for each user profile and for a specific point of interest, which represents the interest scores of the user for the point of interest according to the location parameter hierarchy. In Fig. 2 (right), the root of the tree corresponds to level ALL with the single value all. The values of a certain dimension level L are found in the same level of the tree. Each node is characterized by a score for the preference concerning the combination of the non-context attributes with the context value of the node.

Fig. 2. Hierarchies on location (left) and the hierarchy tree of location (right).

If the context in a query refers to a level of the tree in which there is no explicit score given by the user, there are three ways to compute the appropriate score for a preference. In the first approach, we traverse the tree upwards until we find the first predecessor for which a score is specified. In this case, we assume that a user who defines a score for a specific level implicitly defines the same score for all the levels below. In the second approach, we compute the average score of all the successors of the immediately following level, if such scores are available; else, we follow the first approach. Finally, we can combine both approaches by computing a weighted average of the scores from both the predecessor and the successors. In any case, we assume a default score of 0.5 at level all, if no score is given.
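The first two resolution strategies can be sketched on a small location hierarchy. This is an illustrative sketch only; the node class and function names are ours, and the sample scores follow the running example.

```python
# Resolving a score on the location hierarchy tree when the queried
# node has no explicit score: (1) walk up to the nearest scored
# ancestor; (2) average the children's scores, falling back to (1).

class HNode:
    def __init__(self, value, score=None, parent=None):
        self.value, self.score, self.parent = value, score, parent
        self.children = []
        if parent:
            parent.children.append(self)

def score_from_ancestors(node):
    """Approach 1: traverse upwards to the first predecessor with a score."""
    while node is not None:
        if node.score is not None:
            return node.score
        node = node.parent
    return 0.5  # default score at level 'all'

def score_from_children(node):
    """Approach 2: average the immediate successors' scores, else fall back."""
    child_scores = [c.score for c in node.children if c.score is not None]
    if child_scores:
        return sum(child_scores) / len(child_scores)
    return score_from_ancestors(node)

# Mary scores 0.6 for Athens and 0.8 for Plaka; Kifisia has no score.
all_ = HNode("all")
greece = HNode("Greece", parent=all_)
athens = HNode("Athens", score=0.6, parent=greece)
plaka = HNode("Plaka", score=0.8, parent=athens)
kifisia = HNode("Kifisia", parent=athens)

print(score_from_ancestors(kifisia))  # 0.6, inherited from Athens
print(score_from_children(greece))    # 0.6, averaged over its scored children
```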

3 Storing Basic Preferences

We store basic user preferences in hypercubes, or simply cubes. The number of data cubes is equal to the number of context parameters, i.e., we have one cube for each context parameter. Formally, a cube schema is defined as a finite set of attributes Cube = (C_i, A_1, ..., A_n, M), where C_i is a context parameter, A_1, ..., A_n are non-context attributes, and M is the interest score. The cubes for our running example are depicted in Fig. 3. In each cube, there is one dimension for the points of interest, one dimension for the users, and one dimension for the context parameter. In each cell of the cube, we store the degree of interest for a specific preference.

Fig. 3. Data cubes for each context parameter.

A relational table implements such a cube in a straightforward fashion. The primary key of the table is C_i, A_1, ..., A_n. If dimension tables representing hierarchies exist (see next), we employ foreign keys for the attributes corresponding to these dimensions. The schema for our running example, which is based on the classical star schema, is depicted in Fig. 4. There are three fact tables: Temperature, Location, and Accompanying_people.

Regarding hierarchical context attributes, the typical way to store them is shown in Fig. 5 (left). In this modeling, we assign an attribute for each level in the hierarchy. We also assign an artificial key to efficiently implement references to the dimension table. Denormalized tables of this kind suffer from the fact that there exists exactly one row for each value of the lowest level of the hierarchy, but no rows explicitly representing values of higher levels of the hierarchy. Therefore, if we want to express preferences at a higher level of the hierarchy, we need to extend this modeling (assume, for example, that we wish to express the preferences of Mary when she is in the city of Thessaloniki, independently of the specific region of Thessaloniki she is found at). To this end, we use an extension of this approach, as shown in the right of Fig. 5. In this kind of dimension table, we introduce a dedicated tuple for each value at any level of the hierarchy. We populate attributes of lower levels with NULLs. To indicate the particular level that a value participates at, we also introduce a level indicator attribute. Dimension levels are assigned attribute numbers through a topological sort of the lattice.

To compute aggregate preferences from basic ones, we also need to store the weights used in this computation. Weights are stored in a special-purpose table AggScores(w_C1, ..., w_Ck, A_{k+1}). The value for each context parameter w_Ci is the weight for the respective interest score, and the value A_{k+1} specifies the user who gives these weights. For instance, in our running example, the table AggScores has the attributes Location_weight, Temperature_weight, Accompanying_weight, and User. A record in this table can be (0.6, 0.3, 0.1, Mary).

Fig. 5. A typical (left) and an extended dimension table (right).

Aggregate preferences are not explicitly stored. The main reason is space and time efficiency, since this would require maintaining a context cube for each context state and for each combination of non-context attributes. Assume that the context environment CE_X has n context parameters {C1, C2, ..., Cn} and that the cardinality of the domain dom(C_i) of each parameter C_i is (for simplicity) m. This means that there are m^n potential context states, leading to a very large number of context cubes and prohibitively high costs for their maintenance. Instead, we store only previously computed aggregate scores, using an auxiliary data structure (described in Section 4).

An advantage of using cubes to store user preferences is that they provide the capability of using hierarchies to introduce different levels of abstraction of the captured context data through the drill-down and roll-up operators [5]. The roll-up operation provides an aggregation on one dimension. Assume, for example, that the user has executed a query about Mary's most preferable points-of-interest in Plaka. However, this query has returned an unsatisfactorily small number of answers. Then, Mary may decide that it is worth broadening the scope of the search and investigate the broader Athens area for interesting places to visit. In this case, a roll-up operation on location can generate a cube that uses cities instead of regions. Similarly, drill-down is the reverse of roll-up and allows the de-aggregation of information, moving from higher to lower levels of granularity.

4 Caching Context-Aware Queries

In this section, we present a scheme for storing results of previous queries executed in a specific context, so that these results can be re-used by subsequent queries.

4.1 The Context Tree

Assume that the context environment CE_X has n context parameters {C1, C2, ..., Cn}. A way to store aggregate preferences uses the context tree, as shown in Fig. 6. There is one context tree per user. The maximum height of the context tree is equal to the number of context parameters plus one. Each context parameter is mapped to one of the levels of the tree and there is one additional level for the leaves. For simplicity, assume that context parameter C_i is mapped to level i. A path in the context tree denotes a context state, i.e., an assignment of values to context parameters. At the leaf nodes, we store a list of ids, e.g., ids of points of interest, along with their aggregate scores for the associated context state, that is, for the path from the root leading to them. Instead of storing aggregate score values for all the ids, to be storage-efficient, we store just the top-k ids (keys), that is, the ids of the items having the k highest aggregate scores for the path leading to them. The motivation is that this allows us to provide users with a fast answer with the data items that best match their query. Only if more than k results are needed will additional computation be initiated. The list of ids is sorted in decreasing order according to their scores.

The context tree is used to store aggregate preferences that were computed as results of previous queries, so that these results can be re-used by subsequent queries. Thus, it is constructed incrementally, each time a context-aware query is computed. Each non-leaf node at level k contains cells of the form [key, pointer], where key is equal to c_kj ∈ dom(C_k) for a value of the context parameter C_k that appeared in some previously computed context query. The pointer of each cell points to the node at the next lower level (level k+1) containing all the distinct values of the next context parameter (parameter C_{k+1}) that appeared in the same context query with c_kj. In addition, key may take the special value any, which corresponds to the lack of a specification of the associated context parameter in the query (i.e., to the use of the special symbol '*').

In summary, a context tree for n context parameters satisfies the following properties:

Fig. 6. A context tree (left) and a set of aggregate preferences and the corresponding context tree (right). The previously submitted queries are: query1: friends/warm/Plaka; query2: family/warm/Plaka; query3: friends/cold/Plaka; query4: friends/warm/Kifisia; query5: friends/warm/*.

– It is a directed acyclic graph with a single root node.

– There are at most n+1 levels, each one of the first n of them corresponding to a context parameter and the last one to the level of the leaf nodes.

– Each non-leaf node at level k maintains cells of the form [key, pointer], where key ∈ dom(C_k) for some value of C_k that appeared in a query, or key = any. No two cells within the same node contain the same key value. The pointer points to a node at level k+1 having cells with key values which appeared in the same query with the key.

– Each leaf node stores a set of pointers to data, sorted by their score.
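The structure above can be sketched as nested dictionaries, one nesting level per context parameter, with leaves holding the cached top-k lists. This is an illustrative sketch only; the class and method names are ours, and '*' is represented by the string "any".

```python
# A sketch of the context tree: one level per context parameter, leaves
# hold the cached top-k id list for the path's context state.

class ContextTree:
    def __init__(self, n_params):
        self.n = n_params
        self.root = {}  # nested dicts: key -> child node; leaves -> top-k list

    def insert(self, state, topk):
        """state: tuple of n values (use 'any' for '*'); topk: ids sorted by score."""
        assert len(state) == self.n
        node = self.root
        for value in state[:-1]:
            node = node.setdefault(value, {})
        node[state[-1]] = topk  # leaf cell for the last context parameter

    def lookup(self, state):
        """Return the cached top-k list for this context state, or None."""
        assert len(state) == self.n
        node = self.root
        for value in state:
            if value not in node:
                return None  # no cached result for this context state
            node = node[value]
        return node

tree = ContextTree(3)
tree.insert(("friends", "warm", "Plaka"), ["Acropolis", "Museum"])
tree.insert(("friends", "warm", "any"), ["Acropolis"])
print(tree.lookup(("friends", "warm", "Plaka")))  # ['Acropolis', 'Museum']
print(tree.lookup(("family", "cold", "Plaka")))   # None (cache miss)
```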

For example, Fig. 6 (right) shows a set of context states expressed in five previously submitted queries and the corresponding context tree. Assume that the three context parameters are assigned to levels as follows: accompanying_people to the first level, temperature to the second, and location to the third. The total number of cells in the tree is as small as possible when m_1 ≤ m_2 ≤ ... ≤ m_k, where m_i is the cardinality of the domain of the parameter at level i; thus, it is better to place context parameters with higher-cardinality domains lower in the context tree.
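A back-of-the-envelope calculation illustrates why larger domains belong lower in the tree. The worst-case cell-count model below (level 1 holds up to m_1 cells, level 2 up to m_1·m_2, and so on) is our simplifying assumption, not a formula from the paper.

```python
# Worst-case cell count for each ordering of the domain cardinalities:
# level i can hold up to m_1 * m_2 * ... * m_i cells in total.

from itertools import permutations

def worst_case_cells(cardinalities):
    total, prod = 0, 1
    for m in cardinalities:
        prod *= m
        total += prod
    return total

# Two small domains (10 values) and one large domain (50 values):
for order in sorted(set(permutations([10, 10, 50]))):
    print(order, worst_case_cells(order))
# (10, 10, 50) -> 5110   (smallest: large domain at the bottom)
# (10, 50, 10) -> 5510
# (50, 10, 10) -> 5550
```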

Finally, there are two additional issues related to managing the context tree: replacement and update. To bound the space occupied by the tree, standard cache replacement policies, such as LRU or LFU, may be employed to replace the entry, that is, the path in the tree, that is least frequently or least recently used. Regarding cache updates, stored results may become obsolete, either because there is an update in the contextual preferences or because entries (points-of-interest, in our running example) are deleted, inserted, or updated. In the case of a change in the contextual preferences, we update the context tree by deleting the entries that are associated with the paths, that is, context states, that are involved in the update. In the case of updates in the database instance, we do not update the context tree, since this would induce high maintenance costs. Consequently, some of the scores of the entries cached in the tree may be invalid. Again, standard techniques, such as periodic cache refreshment or associating a time-out with each cache entry, may be used to control the deviation between the cached and the actual scores.

4.2 Querying with Approximate Results

We consider ways of extending the use of the context tree to not only provide answers to queries posed in exactly the same context state, but also to provide approximate answers to queries whose context state is "similar" to a stored one. One such case involves the '*' operator. If the value of some context parameter is "any" (i.e., '*'), we check whether results for enough values of this parameter are already stored in the tree. In particular, if the number of the existing values of this parameter in the same node of the context tree is larger than a threshold value, we do not compute the query from scratch, but instead merge the stored results for the existing values of the parameter. We call this threshold the coverage approximation threshold (ct). Its value may be either system-defined or given as input by the user.
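The coverage-based merge can be sketched as follows. The merge rule used here (averaging the cached scores of an item across the stored lists) is our assumption for illustration; the paper only states that the stored top-k lists are merged.

```python
# Answering a '*' query from cached results: if the fraction of the
# parameter's values already present in the node reaches the coverage
# threshold ct, merge their stored top-k lists instead of recomputing.

def merge_topk(cached_lists, k, domain_size, ct):
    if len(cached_lists) / domain_size < ct:
        return None  # not enough coverage: compute the query from scratch
    merged = {}
    for lst in cached_lists:
        for item, score in lst:
            merged.setdefault(item, []).append(score)
    # Averaging per-item cached scores is an illustrative choice.
    avg = {item: sum(s) / len(s) for item, s in merged.items()}
    return sorted(avg.items(), key=lambda kv: kv[1], reverse=True)[:k]

cached = [
    [("Acropolis", 0.81), ("Museum", 0.70)],   # e.g., friends/warm/Plaka
    [("Acropolis", 0.75), ("Zoo", 0.65)],      # e.g., friends/warm/Kifisia
]
# 2 of 3 location values cached (coverage 2/3 >= 0.6): merge is allowed.
print(merge_topk(cached, k=2, domain_size=3, ct=0.6))
# top-2: Acropolis (~0.78), then Museum (0.70)
```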

Another case in which we can avoid recomputing the results of a query is when the values of its context parameters are "similar" to those of some stored context state. For example, if one considers the nearby locations Thisio and Plaka as similar, then the query friends/warm/Thisio can use the results associated with the stored query friends/warm/Plaka (Fig. 6 (right)).

To express when two values of a context parameter are similar, we introduce a neighborhood approximation threshold (nt). In particular, two values c_i and c'_i of a context parameter C_i are considered similar up to nt_i, if and only if for any tuple t with score d_i for C_i = c_i and d'_i for C_i = c'_i, it holds:

|d_i − d'_i| ≤ nt_i    (1)

for a small constant nt_i, 0 ≤ nt_i ≤ 1.

The threshold nt_i may take different values for each context parameter C_i depending, for instance, on the type of its domain. As before, the threshold may be either determined by the user or the system. To estimate the quality of an approximation, we are interested in how much the results for two queries in two similar context states differ, that is, how different the rating of the results in the two states is, thus leading to a different set of top-k answers. We have proved [4] the following intuitive property, which states that for any two tuples, the difference between their aggregate scores in two states s and s' that differ only in the value of one context parameter C_i is bounded, if the two values of C_i are similar. This indicates that the relative order of the results in states s and s' is rather similar.

Property 1. Let t1, t2 be two tuples that have aggregate scores d1, d2 in a context state s and d'1, d'2 in a context state s', respectively. If s, s' differ only in the value of one context parameter C_i, and these two values of C_i are similar up to nt_i, then if |d1 − d2| ≤ ε, |d'1 − d'2| ≤ ε + 2 · w_i · nt_i, where w_i is the weight of the context parameter C_i.

Property 1 is easily generalized to the case in which two states differ in more than one context parameter, each similar up to nt_i. In particular:

Property 2. Let t1, t2 be two tuples that have aggregate scores d1, d2 in a context state s and d'1, d'2 in a context state s', respectively. If s, s' differ only in the values of m context parameters C_{j_k}, 1 ≤ k ≤ m, and these values of C_{j_k} are similar up to nt_{j_k}, then if |d1 − d2| ≤ ε, |d'1 − d'2| ≤ ε + 2 · (w_{j_1} · nt_{j_1} + w_{j_2} · nt_{j_2} + ... + w_{j_m} · nt_{j_m}), where w_{j_k} is the weight of the context parameter C_{j_k}.
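Property 2 can be sanity-checked numerically. The sketch below is an empirical illustration, not a proof: it perturbs the basic scores of the changed parameters by at most nt each (our simulation of "similar values") and checks that the gap between two tuples' aggregate scores grows by at most 2 · Σ w_j · nt_j.

```python
# Empirical sanity check of Property 2 with the experiment's weights
# 0.5, 0.3, 0.2; parameters 2 and 3 change to similar values.

import random

random.seed(0)
weights = [0.5, 0.3, 0.2]
nts = [0.0, 0.08, 0.12]   # per-parameter similarity thresholds

for _ in range(1000):
    y1 = [random.random() for _ in weights]   # basic scores of tuple t1 in state s
    y2 = [random.random() for _ in weights]   # basic scores of tuple t2 in state s
    d1 = sum(w * y for w, y in zip(weights, y1))
    d2 = sum(w * y for w, y in zip(weights, y2))
    # In the similar state s', each basic score moves by at most nt_i:
    y1p = [y + random.uniform(-n, n) for y, n in zip(y1, nts)]
    y2p = [y + random.uniform(-n, n) for y, n in zip(y2, nts)]
    d1p = sum(w * y for w, y in zip(weights, y1p))
    d2p = sum(w * y for w, y in zip(weights, y2p))
    bound = abs(d1 - d2) + 2 * sum(w * n for w, n in zip(weights, nts))
    assert abs(d1p - d2p) <= bound + 1e-9

print("bound held in all trials")
```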

5 Performance Evaluation

In this section, we evaluate the expected size of the context tree as well as the accuracy of the two approximation methods. We divide the input parameters into three categories: context parameters, query workload parameters, and query approximation parameters. In particular, we use three context parameters and thus the context tree has three levels (plus one for the top-k lists). There are two different types regarding the cardinalities of the domains of the context parameters: the small domain with 10 values and the large one with 50 values.

We performed our experiments with various numbers of queries stored in the context tree, varying from 50 to 200, while the number of tuples is 10000. 10% of the values in the queries are '*'. The other 90% are either selected uniformly from the domain of the corresponding context parameter, or follow a zipf data distribution. The coverage approximation threshold ct refers to the percentage of values that need to be stored for a context parameter in order to compute the top-k list by combining their corresponding top-k lists when there is a '*' value at the corresponding level in a new query. The neighborhood approximation threshold nt refers to how similar the scores for two "similar" values of a context parameter are. Our input parameters are summarized in Table 1.

5.1 Size of the Context Tree

In the first set of experiments, we study how the mapping of the context parameters to the levels of the context tree affects its size. In particular, we count the total number of cells in the tree as a function of the number of stored queries, taking into consideration the different orderings of the parameters. For a context tree with three parameters, we call ordering 1 the ordering of the context parameters in which a parameter whose domain has 10 values is assigned to the first level, the other parameter with 10 values to the second one, and the parameter with 50 values to the last one. Ordering 2 is the ordering in which the domains have 10, 50, and 10 values respectively, and for ordering 3 the domains have 50, 10, and 10 values. As discussed in Section 3, the mapping of the context parameters to levels that is expected to result in a smaller tree is the one that places the context parameters with the large domains lower in the tree.

Table 1. Input Parameters

Number of context parameters: 3
Cardinality of the context parameter domains: small 10, large 50
Number of tuples: 10000
Number of stored queries: 50-200
Percentage of '*' values: 10%
Data distributions: uniform; zipf (a = 1.5, varied over 0.0-3.5)
Top-k results: 10
Coverage approximation threshold (ct): ≥ 40%, 60%, 80%
Neighborhood approximation threshold (nt_i): 0.08 (varied over 0.04, 0.08, 0.12)
Weights: 0.5, 0.3, 0.2

We measured the size of the tree under a uniform distribution and under a zipf distribution with a = 1.5 (Fig. 7), as well as under a combination of the two (Fig. 8). We performed the latter experiment 50 times with 200 queries. The values of the context parameters with small domains are selected using a uniform data distribution, and the values of the context parameter with the large domain are selected using a zipf data distribution with various values for the parameter a, varying from 0 (corresponding to the uniform distribution) to 3.5 (corresponding to a very high skew).

Fig. 7. Number of cells for the three orderings: uniform (left) and zipf data distribution with a = 1.5 (right).

Fig. 8. Combined uniform and zipf data distributions.

5.2 Accuracy of the Approximations

In this set of experiments, we evaluate the accuracy of the approximation when using the coverage and the neighborhood approximation thresholds. In both cases, we report how many of the top-k tuples computed using the results stored in the tree actually belong to the top-k results.

Using the Coverage Approximation Threshold. A coverage approximation threshold of ct% means that at least ct% of the required values are available, i.e., are already computed and stored in the context tree. We use three values for ct, namely 40%, 60%, and 80%. All weights take the value 0.33. In Fig. 9, we present the percentage of different results in the top-k list for each ct value, when a '*' value is given for a parameter with a small domain or a large domain, respectively. To compute the actual aggregate preference scores, we use the second of the two approaches presented in Section 2.3. The approximation is better when '*' refers to the context parameter with the large domain. This happens because, in this case, more related paths of the context tree are available, and so more top-k lists of results are merged to produce the new top-k list.

Using the Neighborhood Approximation Threshold. We consider first that a query is similar to another one when they have the same values for all the context parameters except one, and the values of this parameter are similar up to nt. We use three values for the parameter nt: 0.04, 0.08, and 0.12. The weights have the values 0.5, 0.3, and 0.2. We count first the number of different results between two similar queries that differ in the value of one context parameter, as a function of the weight of this parameter, taking into consideration the different values of nt. The results are depicted in Fig. 10 (left). Then, we examine the case in which the values of two context parameters are different (Fig. 10 (right)). In this case, the accuracy of the results depends on both of the weights that correspond to the context parameters whose values are similar. As expected, the smaller the value of the parameter nt, the smaller the difference between the results in the top-k lists of two similar queries. Note further that the value of the weight that corresponds to the similar context parameter also affects the number of different results: the smaller the value of the weight, the smaller the number of different results.

Fig. 9. Different results when ct has values 40%, 60%, 80%.

Fig. 10. Different results between two similar queries when nt = 0.04, 0.08, 0.12.

6 Related Work

Although there has been a lot of work on developing a variety of context infrastructures and context-aware middleware and applications (such as the Context Toolkit [6] and the Dartmouth Solar System [7]), there has been only little work on the integration of context into databases. Next, we discuss work related to context-aware queries and preference queries. A preliminary version of the model, without the context tree and the performance evaluation part, appears in [8].

Context and Queries. Although there is much research on location-aware query processing in the area of spatio-temporal databases, integrating other forms of context in query processing is less explored. In the context-aware query processing framework of [9], there is no notion of preferences; instead, context attributes are treated as normal attributes of relations. Storing context data using data cubes, called context cubes, is proposed in [5] for developing context-aware applications that archive sensor data. In that work, data cubes are used to store historical context data and to extract interesting knowledge from large collections of context data. In our work, we use data cubes for storing context-dependent preferences and answering queries. The Context Relational Model (CR) [10] is an extended relational model that allows attributes to exist under some contexts or to have different values under different contexts. CR treats context as a first-class citizen at the level of data models, whereas in our approach, we use the traditional relational model to capture context as well as context-dependent preferences. Context as a set of dimensions (e.g., context parameters) is also considered in [11], where the problem of representing context-dependent semistructured data is studied. A similar context model is also deployed in [12] for enhancing web service discovery with contextual parameters.
Preferences in Databases. In this paper, we use context to confine database querying by selecting as results the best matching tuples based on the user preferences. The research literature on preferences is extensive. In particular, in the context of database queries, there are two different approaches for expressing preferences: a quantitative and a qualitative one. With the quantitative approach, preferences are expressed indirectly by using scoring functions that associate a numeric score with every tuple of the query answer. In our work, we have adapted the general quantitative framework of [13], since it is easier for users to employ. In the quantitative framework of [1], user preferences are stored as degrees of interest in atomic query elements (such as individual selection or join conditions) instead of interests in specific attribute values. Our approach can be generalized to this framework as well, either by including contextual parameters in the atomic query elements or by making the degree of interest for each atomic query element depend on context. In the qualitative approach (for example, [14]), the preferences between the tuples in the answer to a query are specified directly, typically using binary preference relations. This framework can also be readily extended to include context.
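As an illustration of the quantitative approach, the following sketch scores tuples with a context-dependent scoring function and returns them ranked. The relation, context values, and degrees of interest are hypothetical examples, not taken from the paper's experiments.

```python
# Minimal sketch of the quantitative approach: a scoring function maps each
# tuple to a numeric degree of interest that depends on the current context.
# All names and data here are illustrative assumptions.

def score(tuple_, context, preferences):
    """Return the stored degree of interest for the tuple's attribute
    value under the given context (0.0 if no preference is recorded)."""
    return preferences.get((context, tuple_["place"]), 0.0)

def preference_query(relation, context, preferences, k=2):
    """Rank tuples by their context-dependent score and keep the top k."""
    ranked = sorted(relation, key=lambda t: score(t, context, preferences),
                    reverse=True)
    return [t["place"] for t in ranked[:k]]

relation = [{"place": "Acropolis"}, {"place": "Museum"}, {"place": "Beach"}]
preferences = {
    ("sunny", "Acropolis"): 0.9, ("sunny", "Beach"): 0.7, ("sunny", "Museum"): 0.2,
    ("rainy", "Museum"): 0.9, ("rainy", "Acropolis"): 0.3,
}

print(preference_query(relation, "sunny", preferences))  # ['Acropolis', 'Beach']
print(preference_query(relation, "rainy", preferences))  # ['Museum', 'Acropolis']
```

The same relation yields different top-k answers under different contexts, which is exactly the behavior a context-aware preference query is meant to capture.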

7 Summary

The use of context allows users to receive only relevant information. In this paper, we consider integrating context in expressing preferences, so that when a user poses a preference query to a database, the result also depends on context. In particular, each user indicates preferences on specific attribute values of a relation. Such preferences depend on context and are stored in data cubes. To allow re-using the results of previously computed preference queries, we introduce a hierarchical data structure, called the context tree. This tree can further be used to produce approximate results from similar stored results. Our future work includes exploiting context information in answering additional queries, not just preference ones.
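The context-tree caching idea summarized above can be sketched as follows. The flat cache keyed by context paths and the prefix-agreement similarity heuristic are simplifying assumptions for illustration, not the paper's exact structure or algorithm.

```python
# Sketch of context-tree caching: each level of the tree corresponds to one
# context parameter, and a leaf caches the result of a previously computed
# preference query. Here the tree is flattened into a dict keyed by the full
# context path; the similarity heuristic is an illustrative assumption.

class ContextTree:
    def __init__(self):
        self.cache = {}  # context path (tuple of parameter values) -> result

    def store(self, context_path, result):
        self.cache[tuple(context_path)] = result

    def lookup(self, context_path):
        """Exact match on the full context path."""
        return self.cache.get(tuple(context_path))

    def lookup_similar(self, context_path):
        """Approximate match: return the cached result whose context
        agrees with the query context on the most parameters."""
        best, best_agree = None, -1
        for path, result in self.cache.items():
            agree = sum(1 for a, b in zip(path, context_path) if a == b)
            if agree > best_agree:
                best, best_agree = result, agree
        return best

tree = ContextTree()
tree.store(("summer", "sunny", "noon"), ["Acropolis"])
tree.store(("winter", "rainy", "afternoon"), ["Museum"])

print(tree.lookup(("summer", "sunny", "noon")))             # ['Acropolis']
print(tree.lookup_similar(("summer", "sunny", "morning")))  # ['Acropolis']
```

When an exact lookup misses, the approximate lookup reuses the result cached under the most similar context instead of recomputing the query from scratch.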

References

1. Koutrika, G., Ioannidis, Y.: Personalization of Queries in Database Systems. In: Proc. of ICDE. (2004)
2. Dey, A.K.: Understanding and Using Context. Personal and Ubiquitous Computing 5 (2001)
3. Chen, G., Kotz, D.: A Survey of Context-Aware Mobile Computing Research. Dartmouth Computer Science Technical Report TR2000-381 (2000)
4. Stefanidis, K., Pitoura, E., Vassiliadis, P.: Modeling and Storing Context-Aware Preferences (extended version). University of Ioannina, Computer Science Department, TR2006-06 (2006)
5. Harvel, L., Liu, L., Abowd, G.D., Lim, Y.X., Scheibe, C., Chatham, C.: Flexible and Effective Manipulation of Sensed Context. In: Proc. of the 2nd Intl. Conf. on Pervasive Computing. (2004)
6. Salber, D., Dey, A.K., Abowd, G.D.: The Context Toolkit: Aiding the Development of Context-Enabled Applications. In: Proc. of the CHI Conference on Human Factors in Computing Systems. (1999)
7. Chen, G., Li, M., Kotz, D.: Design and Implementation of a Large-Scale Context Fusion Network. In: Proc. of the Intl. Conf. on Mobile and Ubiquitous Systems: Networking and Services. (2004)
8. Stefanidis, K., Pitoura, E., Vassiliadis, P.: On Supporting Context-Aware Preferences in Relational Database Systems. In: Proc. of the Intl. Workshop on Managing Context Information in Mobile and Pervasive Environments. (2005) (extended version to appear in JPCC)
9. Feng, L., Apers, P., Jonker, W.: Towards Context-Aware Data Management for Ambient Intelligence. In: Proc. of the 15th Intl. Conf. on Database and Expert Systems Applications (DEXA). (2004)
10. Roussos, Y., Stavrakas, Y., Pavlaki, V.: Towards a Context-Aware Relational Model. In: Proc. of the Intl. Workshop on Context Representation and Reasoning (CRR'05). (2005)
11. Stavrakas, Y., Gergatsoulis, M.: Multidimensional Semistructured Data: Representing Context-Dependent Information on the Web. In: Proc. of the Intl. Conf. on Advanced Information Systems Engineering (CAiSE 2002). (2002)
12. Doulkeridis, C., Vazirgiannis, M.: Querying and Updating a Context-Aware Service Directory in Mobile Environments. In: Proc. of Web Intelligence. (2004) 562-565
13. Agrawal, R., Wimmers, E.L.: A Framework for Expressing and Combining Preferences. In: Proc. of SIGMOD. (2000)
14. Chomicki, J.: Preference Formulas in Relational Queries. TODS 28 (2003)
