Network Working Group                                          W. Lai
Request for Comments: 4128                                  AT&T Labs
Category: Informational                                     June 2005

        Bandwidth Constraints Models for Differentiated Services
             (Diffserv)-aware MPLS Traffic Engineering:
                        Performance Evaluation

Status of This Memo

This memo provides information for the Internet community. It does

not specify an Internet standard of any kind. Distribution of this

memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2005).

IESG Note

The content of this RFC has been considered by the IETF (specifically in the TE-WG working group, which has no problem with publication as an Informational RFC), and therefore it may resemble a current IETF

work in progress or a published IETF work. However, this document is an individual submission and not a candidate for any level of

Internet Standard. The IETF disclaims any knowledge of the fitness

of this RFC for any purpose, and in particular notes that it has not had complete IETF review for such things as security, congestion

control or inappropriate interaction with deployed protocols. The

RFC Editor has chosen to publish this document at its discretion.

Readers of this RFC should exercise caution in evaluating its value

for implementation and deployment. See RFC 3932 for more

information.

Abstract

"Differentiated Services (Diffserv)-aware MPLS Traffic Engineering

Requirements", RFC 3564, specifies the requirements and selection

criteria for Bandwidth Constraints Models. Two such models, the

Maximum Allocation and the Russian Dolls, are described therein.

This document complements RFC 3564 by presenting the results of a

performance evaluation of these two models under various operational conditions: normal load, overload, preemption fully or partially

enabled, pure blocking, or complete sharing.


Table of Contents

   1. Introduction
      1.1. Conventions used in this document
   2. Bandwidth Constraints Models
   3. Performance Model
      3.1. LSP Blocking and Preemption
      3.2. Example Link Traffic Model
      3.3. Performance under Normal Load
   4. Performance under Overload
      4.1. Bandwidth Sharing versus Isolation
      4.2. Improving Class 2 Performance at the Expense of Class 3
      4.3. Comparing Bandwidth Constraints of Different Models
   5. Performance under Partial Preemption
      5.1. Russian Dolls Model
      5.2. Maximum Allocation Model
   6. Performance under Pure Blocking
      6.1. Russian Dolls Model
      6.2. Maximum Allocation Model
   7. Performance under Complete Sharing
   8. Implications on Performance Criteria
   9. Conclusions
   10. Security Considerations
   11. Acknowledgements
   12. References
       12.1. Normative References
       12.2. Informative References


1. Introduction

Differentiated Services (Diffserv)-aware MPLS Traffic Engineering

(DS-TE) mechanisms operate on the basis of different Diffserv classes of traffic to improve network performance. Requirements for DS-TE

and the associated protocol extensions are specified in references

[1] and [2] respectively.

To achieve per-class traffic engineering, rather than on an aggregate basis across all classes, DS-TE enforces different Bandwidth

Constraints (BCs) on different classes. Reference [1] specifies the requirements and selection criteria for Bandwidth Constraints Models (BCMs) for the purpose of allocating bandwidth to individual classes. This document presents a performance analysis for the two BCMs

described in [1]:

(1) Maximum Allocation Model (MAM) - the maximum allowable bandwidth usage of each class, together with the aggregate usage across all classes, are explicitly specified.

(2) Russian Dolls Model (RDM) - specification of maximum allowable

usage is done cumulatively by grouping successive priority

classes recursively.

The following criteria are also listed in [1] for investigating the

performance and trade-offs of different operational aspects of BCMs:

(1) addresses the scenarios in Section 2 of [1]

(2) works well under both normal and overload conditions

(3) applies equally when preemption is either enabled or disabled

(4) minimizes signaling load processing requirements

(5) maximizes efficient use of the network

(6) minimizes implementation and deployment complexity

The use of any given BCM has significant impacts on the capability of a network to provide protection for different classes of traffic,

particularly under high load, so that performance objectives can be

met [3]. This document complements [1] by presenting the results of a performance evaluation of the above two BCMs under various

operational conditions: normal load, overload, preemption fully or

partially enabled, pure blocking, or complete sharing. Thus, our

focus is only on the performance-oriented criteria and their


implications for a network implementation. In other words, we are

only concerned with criteria (2), (3), and (5); we will not address

criteria (1), (4), or (6).

Related documents in this area include [4], [5], [6], [7], and [8].

In the rest of this document, the following DS-TE acronyms are used:

   BC    Bandwidth Constraint
   BCM   Bandwidth Constraints Model
   MAM   Maximum Allocation Model
   RDM   Russian Dolls Model

There may be differences between the quality of service expressed and obtained with Diffserv without DS-TE and with DS-TE. Because DS-TE

uses Constraint Based Routing, and because of the type of admission

control capabilities it adds to Diffserv, DS-TE has capabilities for traffic that Diffserv does not. Diffserv does not indicate

preemption, by intent, whereas DS-TE describes multiple levels of

preemption for its Class-Types. Also, Diffserv does not support any means of explicitly controlling overbooking, while DS-TE allows this. When considering a complete quality of service environment, with

Diffserv routers and DS-TE, it is important to consider these

differences carefully.

1.1. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",

"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

2. Bandwidth Constraints Models

To simplify our presentation, we use the informal name "class of

traffic" for the terms Class-Type and TE-Class, defined in [1]. We

assume that (1) there are only three classes of traffic, and that (2) all label-switched paths (LSPs), regardless of class, require the

same amount of bandwidth. Furthermore, the focus is on the bandwidth usage of an individual link with a given capacity; routing aspects of LSP setup are not considered.

The concept of reserved bandwidth is also defined in [1] to account

for the possible use of overbooking. Rather than get into these

details, we assume that each LSP is allocated 1 unit of bandwidth on a given link after establishment. This allows us to express link

bandwidth usage simply in terms of the number of simultaneously

established LSPs. Link capacity can then be used as the aggregate

constraint on bandwidth usage across all classes.


Suppose that the three classes of traffic assumed above for the

purposes of this document are denoted by class 1 (highest priority), class 2, and class 3 (lowest priority). When preemption is enabled, these are the preemption priorities. To define a generic class of

BCMs for the purpose of our analysis in accordance with the above

assumptions, let

Nmax = link capacity; i.e., the maximum number of simultaneously

established LSPs for all classes together

Nc = the number of simultaneously established class c LSPs,

for c = 1, 2, and 3, respectively.

For MAM, let

Bc = maximum number of simultaneously established class c LSPs.

Then, Bc is the Bandwidth Constraint for class c, and we have

Nc <= Bc <= Nmax, for c = 1, 2, and 3

N1 + N2 + N3 <= Nmax

B1 + B2 + B3 >= Nmax

For RDM, the BCs are specified as:

B1 = maximum number of simultaneously established class 1 LSPs

B2 = maximum number of simultaneously established LSPs for classes 1 and 2 together

B3 = maximum number of simultaneously established LSPs for classes 1, 2, and 3 together

Then, we have the following relationships:

N1 <= B1

N1 + N2 <= B2

N1 + N2 + N3 <= B3

B1 < B2 < B3 = Nmax
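The constraints above translate directly into admission checks applied at LSP setup time. The following sketch (preemption aside) illustrates one way such checks could be coded for the simplified setting of this document, where every LSP consumes 1 unit of bandwidth; the function names, the 0-based class indexing, and the example state are illustrative assumptions, not part of [1] or [2].

   # Illustrative admission checks for the simplified 3-class setting of
   # this document (1 bandwidth unit per LSP, preemption not considered).
   # n[c] is the number of established LSPs of class c, with c = 0, 1, 2
   # standing for classes 1, 2, 3.

   def mam_admit(n, b, nmax, c):
       # MAM: stay within the per-class BC and the aggregate capacity Nmax.
       return n[c] < b[c] and sum(n) < nmax

   def rdm_admit(n, b, c):
       # RDM: class c and all higher-priority classes together must remain
       # within every cumulative constraint B(j) for j >= c.
       return all(sum(n[:j + 1]) < b[j] for j in range(c, len(b)))

   # Example with the Section 3.2 values: Nmax = 15,
   # MAM BCs (6, 7, 15) and RDM BCs (6, 11, 15).
   n = [6, 5, 3]                            # currently established LSPs
   print(mam_admit(n, [6, 7, 15], 15, 1))   # True: class 2 is below its BC of 7
   print(rdm_admit(n, [6, 11, 15], 1))      # False: classes 1+2 already use 11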

3. Performance Model

Reference [8] presents a 3-class Markov-chain performance model to

analyze a general class of BCMs. The BCMs that can be analyzed

include, besides MAM and RDM, BCMs with privately reserved bandwidth that cannot be preempted by other classes.


The Markov-chain performance model in [8] assumes Poisson arrivals

for LSP requests with exponentially distributed lifetime. The

Poisson assumption for LSP requests is relevant since we are not

dealing with the arrivals of individual packets within an LSP. Also, LSP lifetime may exhibit heavy-tail characteristics. This effect

should be accounted for when the performance of a particular BCM by

itself is evaluated. As the effect would be common for all BCMs, we ignore it for simplicity in the comparative analysis of the relative performance of different BCMs. In principle, a suitably chosen

hyperexponential distribution may be used to capture some aspects of heavy tail. However, this will significantly increase the complexity of the non-product-form preemption model in [8].

The model in [8] assumes the use of admission control to allocate

link bandwidth to LSPs of different classes in accordance with their respective BCs. Thus, the model accepts as input the link capacity

and offered load from different classes. The blocking and preemption probabilities for different classes under different BCs are generated as output. Thus, from a service provider’s perspective, given the

desired level of blocking and preemption performance, the model can

be used iteratively to determine the corresponding set of BCs.

To understand the implications of using criteria (2), (3), and (5) in the Introduction Section to select a BCM, we present some numerical

results of the analysis in [8]. This is intended to facilitate

discussion of the issues that can arise. The major performance

objective is to achieve a balance between the need for bandwidth

sharing (for increasing bandwidth efficiency) and the need for

bandwidth isolation (for protecting bandwidth access by different

classes).

3.1. LSP Blocking and Preemption

As described in Section 2, the three classes of traffic used as an

example are class 1 (highest priority), class 2, and class 3 (lowest priority). Preemption may or may not be used, and we will examine

the performance of each scenario. When preemption is used, the

priorities are the preemption priorities. We consider cross-class

preemption only, with no within-class preemption. In other words,

preemption is enabled so that, when necessary, class 1 can preempt

class 3 or class 2 (in that order), and class 2 can preempt class 3. Each class offers a load of traffic to the network that is expressed in terms of the arrival rate of its LSP requests and the average

lifetime of an LSP. A unit of such a load is an erlang. (In

packet-based networks, traffic volume is usually measured by counting the number of bytes and/or packets that are sent or received over an interface during a measurement period. Here we are only concerned


with bandwidth allocation and usage at the LSP level. Therefore, as a measure of resource utilization in a link-speed independent manner, the erlang is an appropriate unit for our purpose [9].)
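For concreteness, the offered load of a class in erlangs is its LSP arrival rate multiplied by the mean LSP lifetime; the small illustration below uses the class 1 figures of the example in Section 3.2 and is added here only as a worked example.

   # Offered load in erlangs = LSP arrival rate x mean LSP lifetime.
   arrival_rate = 2.7      # class 1 LSP setup requests per time unit (Section 3.2)
   mean_lifetime = 1.0     # time units
   offered_load = arrival_rate * mean_lifetime   # 2.7 erlangs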

To prevent Diffserv QoS degradation at the packet level, the expected number of established LSPs for a given class should be kept in line

with the average service rate that the Diffserv scheduler can provide to that class. Because of the use of overbooking, the actual traffic carried by a link may be higher than expected, and hence QoS

degradation may not be totally avoidable.

However, the use of admission control at the LSP level helps minimize QoS degradation by enforcing the BCs established for the different

classes, according to the rules of the BCM adopted. That is, the BCs are used to determine the number of LSPs that can be simultaneously

established for different classes under various operational

conditions. By controlling the number of LSPs admitted from

different classes, this in turn ensures that the amount of traffic

submitted to the Diffserv scheduler is compatible with the targeted

packet-level QoS objectives.

The performance of a BCM can therefore be measured by how well the

given BCM handles the offered traffic, under normal or overload

conditions, while maintaining packet-level service objectives. Thus, assuming that the enforcement of Diffserv QoS objectives by admission control is a given, the performance of a BCM can be expressed in

terms of LSP blocking and preemption probabilities.

Different BCMs have different strengths and weaknesses. Depending on the BCs chosen for a given load, a BCM may perform well in one

operating region and poorly in another. Service providers are mainly concerned with the utility of a BCM to meet their operational needs. Regardless of which BCM is deployed, the foremost consideration is

that the BCM works well under the engineered load, such as the

ability to deliver service-level objectives for LSP blocking

probabilities. It is also expected that the BCM handles overload

"reasonably" well. Thus, for comparison, the common operating point we choose for BCMs is that they meet specified performance objectives in terms of blocking/preemption under given normal load. We then

observe how their performance varies under overload. More will be

said about this aspect later in Section 4.2.


3.2. Example Link Traffic Model

For example, consider a link with a capacity that allows a maximum of 15 LSPs from different classes to be established simultaneously. All LSPs are assumed to have an average lifetime of 1 time unit. Suppose that this link is being offered a load of

2.7 erlangs from class 1,

3.5 erlangs from class 2, and

3.5 erlangs from class 3.

We now consider a scenario wherein the blocking/preemption

performance objectives for the three classes are desired to be

comparable under normal conditions (other scenarios are covered in

later sections). To meet this service requirement under the above

given load, the BCs are selected as follows:

For MAM:

up to 6 simultaneous LSPs for class 1,

up to 7 simultaneous LSPs for class 2, and

up to 15 simultaneous LSPs for class 3.

For RDM:

up to 6 simultaneous LSPs for class 1 by itself,

up to 11 simultaneous LSPs for classes 1 and 2 together, and

up to 15 simultaneous LSPs for all three classes together.

Note that the driver is service requirement, independent of BCM. The above BCs are not picked arbitrarily; they are chosen to meet

specific performance objectives in terms of blocking/preemption

(detailed in the next section).

An intuitive "explanation" for the above set of BCs may be as

follows. Class 1 BC is the same (6) for both models, as class 1 is

treated the same way under either model with preemption. However,

MAM and RDM operate in fundamentally different ways and give

different treatments to classes with lower preemption priorities. It can be seen from Section 2 that although RDM imposes a strict

ordering of the different BCs (B1 < B2 < B3) and a hard boundary

(B3 = Nmax), MAM uses a soft boundary (B1+B2+B3 >= Nmax) with no

specific ordering. As will be explained in Section 4.3, this allows RDM to have a higher degree of sharing among different classes. Such a higher degree of coupling means that the numerical values of the

BCs can be relatively smaller than those for MAM, to meet given

performance requirements under normal load.


Thus, in the above example, the RDM BCs of (6, 11, 15) may be thought of as roughly corresponding to the MAM BCs of (6, 6+7, 6+7+15). (The intent here is just to point out that the design parameters for the

two BCMs need to be different, as they operate differently; strictly speaking, the numerical correspondence is incorrect.) Of course,

both BCMs are bounded by the same aggregate constraint of the link

capacity (15).

The BCs chosen in the above example are not intended to be regarded

as typical values used by any service provider. They are used here

mainly for illustrative purposes. The method we used for analysis

can easily accommodate another set of parameter values as input.

3.3. Performance under Normal Load

In the example above, based on the BCs chosen, the blocking and

preemption probabilities for LSP setup requests under normal

conditions for the two BCMs are given in Table 1. Remember that the BCs have been selected for this scenario to address the service

requirement to offer comparable blocking/preemption objectives for

the three classes.

Table 1. Blocking and preemption probabilities

   BCM    PB1      PB2      PB3      PP2      PP3      PB2+PP2  PB3+PP3
   MAM    0.03692  0.03961  0.02384  0        0.02275  0.03961  0.04659
   RDM    0.03692  0.02296  0.02402  0.01578  0.01611  0.03874  0.04013

In the above table, the following apply:

PB1 = blocking probability of class 1

PB2 = blocking probability of class 2

PB3 = blocking probability of class 3

PP2 = preemption probability of class 2

PP3 = preemption probability of class 3

PB2+PP2 = combined blocking/preemption probability of class 2

PB3+PP3 = combined blocking/preemption probability of class 3

First, we observe that, indeed, the values for (PB1, PB2+PP2,

PB3+PP3) are very similar one to another. This confirms that the

service requirement (of comparable blocking/preemption objectives for the three classes) has been met for both BCMs.


Then, we observe that the (PB1, PB2+PP2, PB3+PP3) values for MAM are very similar to the (PB1, PB2+PP2, PB3+PP3) values for RDM. This

indicates that, in this scenario, both BCMs offer very similar

performance under normal load.

From column 2 of Table 1, it can be seen that class 1 sees exactly

the same blocking under both BCMs. This should be obvious since both allocate up to 6 simultaneous LSPs for use by class 1 only. Slightly better results are obtained from RDM, as shown by the last two

columns in Table 1. This comes about because the cascaded bandwidth separation in RDM effectively gives class 3 some form of protection

from being preempted by higher-priority classes.

Also, note that PP2 is zero in this particular case, simply because

the BCs for MAM happen to have been chosen in such a way that class 1 never has to preempt class 2 for any of the bandwidth that class 1

needs. (This is because class 1 can, in the worst case, get all the bandwidth it needs simply by preempting class 3 alone.) In general, this will not be the case.

It is interesting to compare these results with those for the case of a single class. Based on the Erlang loss formula, a capacity of 15

servers can support an offered load of 10 erlangs with a blocking

probability of 0.0364969. Whereas the total load for the 3-class BCM is less with 2.7 + 3.5 + 3.5 = 9.7 erlangs, the probabilities of

blocking/preemption are higher. Thus, there is some loss of

efficiency due to the link bandwidth being partitioned to accommodate the different traffic classes, thereby resulting in less sharing.

This aspect will be examined in more detail later, in Section 7 on

Complete Sharing.
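The single-class comparison above can be reproduced with a few lines of code evaluating the Erlang loss (Erlang B) formula through its standard recurrence; the sketch below is an illustration and is not taken from [8] or [9].

   # Erlang B (Erlang loss) blocking probability, computed with the usual
   # recurrence B(E, 0) = 1 and B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)).
   def erlang_b(erlangs, servers):
       b = 1.0
       for m in range(1, servers + 1):
           b = erlangs * b / (m + erlangs * b)
       return b

   print(erlang_b(10.0, 15))   # ~0.0365, the single-class figure quoted above
   print(erlang_b(9.7, 15))    # single-class blocking at the combined 3-class load
   print(erlang_b(2.7, 6))     # ~0.0369, reused in Section 4.1
   print(erlang_b(3.5, 7))     # ~0.0396, reused in Section 4.1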

4. Performance under Overload

Overload occurs when the traffic on a system is greater than the

traffic capacity of the system. To investigate the performance under overload conditions, the load of each class is varied separately.

Blocking and preemption probabilities are not shown separately for

each case; they are added together to yield a combined

blocking/preemption probability.

4.1. Bandwidth Sharing versus Isolation

Figures 1 and 2 show the relative performance when the load of each

class in the example of Section 3.2 is varied separately. The three series of data in each of these figures are, respectively,


class 1 blocking probability ("Class 1 B"),

class 2 blocking/preemption probability ("Class 2 B+P"), and

class 3 blocking/preemption probability ("Class 3 B+P").

For each of these series, the first set of four points is for the

performance when class 1 load is increased from half of its normal

load to twice its normal. Similarly, the next and the last sets of

four points are when class 2 and class 3 loads are increased

correspondingly.

The following observations apply to both BCMs:

1. The performance of any class generally degrades as its load

increases.

2. The performance of class 1 is not affected by any changes

(increases or decreases) in either class 2 or class 3 traffic,

because class 1 can always preempt others.

3. Similarly, the performance of class 2 is not affected by any

changes in class 3 traffic.

4. Class 3 sees better (worse) than normal performance when either

class 1 or class 2 traffic is below (above) normal.

In contrast, the impact of the changes in class 1 traffic on class 2 performance is different for the two BCMs: It is negligible in MAM

and significant in RDM.

1. Although class 2 sees little improvement (no improvement in this

particular example) in performance when class 1 traffic is below

normal when MAM is used, it sees better than normal performance

under RDM.

2. Class 2 sees no degradation in performance when class 1 traffic is above normal when MAM is used. In this example, with BCs 6 + 7 < 15, class 1 and class 2 traffic is effectively being served by

separate pools. Therefore, class 2 sees no preemption, and only

class 3 is being preempted whenever necessary. This fact is

confirmed by the Erlang loss formula: a load of 2.7 erlangs

offered to 6 servers sees a 0.03692 blocking, and a load of 3.5

erlangs offered to 7 servers sees a 0.03961 blocking. These

blocking probabilities are exactly the same as the corresponding

entries in Table 1: PB1 and PB2 for MAM.

3. This is not the case in RDM. Here, the probability for class 2 to be preempted by class 1 is nonzero because of two effects. (1)

Through the cascaded bandwidth arrangement, class 3 is protected

somewhat from preemption. (2) Class 2 traffic is sharing a BC

with class 1. Consequently, class 2 suffers when class 1 traffic increases.

Thus, it appears that although the cascaded bandwidth arrangement and the resulting bandwidth sharing make RDM work better under normal conditions, such interaction makes RDM less effective at providing class isolation under overload conditions.

4.2. Improving Class 2 Performance at the Expense of Class 3

We now consider a scenario in which the service requirement is to

give better blocking/preemption performance to class 2 than to class 3, while maintaining class 1 performance at the same level as in the previous scenario. (The use of minimum deterministic guarantee for

class 3 is to be considered in the next section.) So that the

specified class 2 performance objective can be met, class 2 BC is

increased appropriately. As an example, BCs (6, 9, 15) are now used for MAM, and (6, 13, 15) for RDM. For both BCMs, as shown in Figures 1bis and 2bis, although class 1 performance remains unchanged, class 2 now receives better performance, at the expense of class 3. This is of course due to the increased access of bandwidth by class 2 over

class 3. Under normal conditions, the performance of the two BCMs is similar in terms of their blocking and preemption probabilities for

LSP setup requests, as shown in Table 2.

Table 2. Blocking and preemption probabilities

   BCM    PB1      PB2      PB3      PP2      PP3      PB2+PP2  PB3+PP3
   MAM    0.03692  0.00658  0.02733  0        0.02709  0.00658  0.05441
   RDM    0.03692  0.00449  0.02759  0.00272  0.02436  0.00721  0.05195

Under overload, the observations in Section 4.1 regarding the difference in the general behavior between the two BCMs still apply, as shown in Figures 1bis and 2bis.

The following are two frequently asked questions about the operation of BCMs.

(1) For a link capacity of 15, would a class 1 BC of 6 and a class 2 BC of 9 in MAM result in the possibility of a total lockout for

class 3?

This will certainly be the case when there are 6 class 1 and 9 class 2 LSPs being established simultaneously. Such an offered load (with 6 class 1 and 9 class 2 LSP requests) will not cause a lockout of

class 3 with RDM having a BC of 13 for classes 1 and 2 combined, but

will result in class 2 LSPs being rejected. If class 2 traffic were considered relatively more important than class 3 traffic, then RDM

would perform very poorly compared to MAM with BCs of (6, 9, 15).

(2) Should MAM with BCs of (6, 7, 15) be used instead so as to make

the performance of RDM look comparable?

The answer is that the above scenario is not very realistic when the offered load is assumed to be (2.7, 3.5, 3.5) for the three classes, as stated in Section 3.2. Treating an overload of (6, 9, x) as a

normal operating condition is incompatible with the engineering of

BCs according to needed bandwidth from different classes. It would

be rare for a given class to need so much more than its engineered

bandwidth level. But if the class did, the expectation based on

design and normal traffic fluctuations is that this class would

quickly release unneeded bandwidth toward its engineered level,

freeing up bandwidth for other classes.

Service providers engineer their networks based on traffic

projections to determine network configurations and needed capacity. All BCMs should be designed to operate under realistic network

conditions. For any BCM to work properly, the selection of values

for different BCs must therefore be based on the projected bandwidth needs of each class, as well as on the bandwidth allocation rules of the BCM itself. This is to ensure that the BCM works as expected

under the intended design conditions. In operation, the actual load may well turn out to be different from that of the design. Thus, an assessment of the performance of a BCM under overload is essential to see how well the BCM can cope with traffic surges or network

failures. Reflecting this view, the basis for comparison of two BCMs is that they meet the same or similar performance requirements under normal conditions, and how they withstand overload.

In operational practice, load measurement and forecast would be

useful to calibrate and fine-tune the BCs so that traffic from

different classes could be redistributed accordingly. Dynamic

adjustment of the Diffserv scheduler could also be used to minimize

QoS degradation.

4.3. Comparing Bandwidth Constraints of Different Models

As is pointed out in Section 3.2, the higher degree of sharing among the different classes in RDM means that the numerical values of the

BCs could be relatively smaller than those for MAM. We now examine

this aspect in more detail by considering the following scenario. We set the BCs so that (1) for both BCMs, the same value is used for

class 1, (2) the same minimum deterministic guarantee of bandwidth

for class 3 is offered by both BCMs, and (3) the blocking/preemption

probability is minimized for class 2. We want to emphasize that this may not be the way service providers select BCs. It is done here to investigate the statistical behavior of such a deterministic

mechanism.

For illustration, we use BCs (6, 7, 15) for MAM, and (6, 13, 15) for RDM. In this case, both BCMs have 13 units of bandwidth for classes 1 and 2 together, and dedicate 2 units of bandwidth for use by class 3 only. The performance of the two BCMs under normal conditions is

shown in Table 3. It is clear that MAM with (6, 7, 15) gives fairly comparable performance objectives across the three classes, whereas

RDM with (6, 13, 15) strongly favors class 2 at the expense of class 3. They therefore cater to different service requirements.

Table 3. Blocking and preemption probabilities

   BCM    PB1      PB2      PB3      PP2      PP3      PB2+PP2  PB3+PP3
   MAM    0.03692  0.03961  0.02384  0        0.02275  0.03961  0.04659
   RDM    0.03692  0.00449  0.02759  0.00272  0.02436  0.00721  0.05195

By comparing Figures 1 and 2bis, it can be seen that, when being

subjected to the same set of BCs, RDM gives class 2 much better

performance than MAM, with class 3 being only slightly worse.

This confirms the observation in Section 3.2 that, when the same

service requirements under normal conditions are to be met, the

numerical values of the BCs for RDM can be relatively smaller than

those for MAM. This should not be surprising in view of the hard

boundary (B3 = Nmax) in RDM versus the soft boundary (B1+B2+B3 >=

Nmax) in MAM. The strict ordering of BCs (B1 < B2 < B3) gives RDM

the advantage of a higher degree of sharing among the different

classes; i.e., the ability to reallocate the unused bandwidth of

higher-priority classes to lower-priority ones, if needed.

Consequently, this leads to better performance when an identical set of BCs is used as exemplified above. Such a higher degree of sharing may necessitate the use of minimum deterministic bandwidth guarantee to offer some protection for lower-priority traffic from preemption. The explicit lack of ordering of BCs in MAM and its soft boundary

imply that the use of minimum deterministic guarantees for lower-

priority classes may not need to be enforced when there is a lesser

degree of sharing. This is demonstrated by the example in Section

4.2 with BCs (6, 9, 15) for MAM.

For illustration, Table 4 shows the performance under normal

conditions of RDM with BCs (6, 15, 15).


Table 4. Blocking and preemption probabilities

   BCM    PB1      PB2      PB3      PP2      PP3      PB2+PP2  PB3+PP3
   RDM    0.03692  0.00060  0.02800  0.00032  0.02740  0.00092  0.05540

Regardless of whether deterministic guarantees are used, both BCMs

are bounded by the same aggregate constraint of the link capacity.

Also, in both BCMs, bandwidth access guarantees are necessarily

achieved statistically because of traffic fluctuations, as explained in Section 4.2. (As a result, service-level objectives are typically specified as monthly averages, under the use of statistical

guarantees rather than deterministic guarantees.) Thus, given the

fundamentally different operating principles of the two BCMs

(ordering, hard versus soft boundary), the dimensions of one BCM

should not be adopted to design for the other. Rather, it is the

service requirements, and perhaps also the operational needs, of a

service provider that should be used to drive how the BCs of a BCM

are selected.

5. Performance under Partial Preemption

In the previous two sections, preemption is fully enabled in the

sense that class 1 can preempt class 3 or class 2 (in that order),

and class 2 can preempt class 3. That is, both classes 1 and 2 are

preemptor-enabled, whereas classes 2 and 3 are preemptable. A class that is preemptor-enabled can preempt lower-priority classes

designated as preemptable. A class not designated as preemptable

cannot be preempted by any other classes, regardless of relative

priorities.

We now consider the three cases shown in Table 5, in which preemption is only partially enabled.

Table 5. Partial preemption modes

   preemption modes         preemptor-enabled   preemptable
   "1+2 on 3" (Fig. 3, 6)   class 1, class 2    class 3
   "1 on 3"   (Fig. 4, 7)   class 1             class 3
   "1 on 2+3" (Fig. 5, 8)   class 1             class 3, class 2

In this section, we evaluate how these preemption modes affect the

performance of a particular BCM. Thus, we are comparing how a given BCM performs when preemption is fully enabled versus how the same BCM performs when preemption is partially enabled. The performance of

these preemption modes is shown in Figures 3 to 5 for RDM, and in

Figures 6 through 8 for MAM, respectively. In all of these figures,

the BCs of Section 3.2 are used for illustration; i.e., (6, 7, 15)

for MAM and (6, 11, 15) for RDM. However, the general behavior is

similar when the BCs are changed to those in Sections 4.2 and 4.3;

i.e., (6, 9, 15) and (6, 13, 15), respectively.

5.1. Russian Dolls Model

Let us first examine the performance under RDM. There are two sets

of results, depending on whether class 2 is preemptable: (1) Figures 3 and 4 for the two modes when only class 3 is preemptable, and (2)

Figure 2 in the previous section and Figure 5 for the two modes when both classes 2 and 3 are preemptable. By comparing these two sets of results, the following impacts can be observed. Specifically, when

class 2 is non-preemptable, the behavior of each class is as follows:

1. Class 1 generally sees a higher blocking probability. As the

class 1 space allocated by the class 1 BC is shared with class 2, which is now non-preemptable, class 1 cannot reclaim any such

space occupied by class 2 when needed. Also, class 1 has less

opportunity to preempt, as it is able to preempt class 3 only.

2. Class 3 also sees higher blocking/preemption when its own load is increased, as it is being preempted more frequently by class 1,

when class 1 cannot preempt class 2. (See the last set of four

points in the series for class 3 shown in Figures 3 and 4, when

comparing with Figures 2 and 5.)

3. Class 2 blocking/preemption is reduced even when its own load is

increased, since it is not being preempted by class 1. (See the

middle set of four points in the series for class 2 shown in

Figures 3 and 4, when comparing with Figures 2 and 5.)

Another two sets of results are related to whether class 2 is

preemptor-enabled. In this case, when class 2 is not preemptor-

enabled, class 2 blocking/preemption is increased when class 3 load

is increased. (See the last set of four points in the series for

class 2 shown in Figures 4 and 5, when comparing with Figures 2 and

3.) This is because both classes 2 and 3 are now competing

independently with each other for resources.

5.2. Maximum Allocation Model

Turning now to MAM, the significant impact appears to be only on

class 2, when it cannot preempt class 3, thereby causing its

blocking/preemption to increase in two situations.


1. When class 1 load is increased. (See the first set of four points in the series for class 2 shown in Figures 7 and 8, when comparing with Figures 1 and 6.)

2. When class 3 load is increased. (See the last set of four points in the series for class 2 shown in Figures 7 and 8, when comparing with Figures 1 and 6.) This is similar to RDM; i.e., class 2 and class 3 are now competing with each other.

When Figure 1 (for the case of fully enabled preemption) is compared to Figures 6 through 8 (for partially enabled preemption), it can be seen that the performance of MAM is relatively insensitive to the

different preemption modes. This is because when each class has its own bandwidth access limits, the degree of interference among the

different classes is reduced.

This is in contrast with RDM, whose behavior is more dependent on the preemption mode in use.

6. Performance under Pure Blocking

This section covers the case in which preemption is completely

disabled. We continue with the numerical example used in the

previous sections, with the same link capacity and offered load.
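The results in this section come from the analytical model of [8]. As a rough cross-check, the pure-blocking case can also be approximated by a short Monte-Carlo simulation; the sketch below handles the MAM rules with preemption disabled, assumes Poisson arrivals and exponential lifetimes as in [8], and uses illustrative function and parameter names that are not part of that reference.

   # Illustrative Monte-Carlo estimate of per-class LSP blocking on a single
   # link under MAM with preemption disabled (pure blocking).
   import random

   def simulate_mam_blocking(bcs, nmax, loads, mean_lifetime=1.0,
                             num_arrivals=200000, seed=1):
       random.seed(seed)
       classes = len(loads)
       total_rate = sum(loads) / mean_lifetime   # aggregate LSP arrival rate
       active = []                               # (departure time, class) pairs
       counts = [0] * classes                    # established LSPs per class
       offered = [0] * classes
       blocked = [0] * classes
       t = 0.0
       for _ in range(num_arrivals):
           t += random.expovariate(total_rate)   # next Poisson arrival epoch
           # Release LSPs whose lifetime has expired by time t.
           remaining = []
           for dep, c in active:
               if dep > t:
                   remaining.append((dep, c))
               else:
                   counts[c] -= 1
           active = remaining
           # The arriving class is drawn in proportion to its offered load.
           c = random.choices(range(classes), weights=loads)[0]
           offered[c] += 1
           # MAM admission test: per-class BC plus aggregate link capacity.
           if counts[c] < bcs[c] and sum(counts) < nmax:
               counts[c] += 1
               active.append((t + random.expovariate(1.0 / mean_lifetime), c))
           else:
               blocked[c] += 1
       return [b / o if o else 0.0 for b, o in zip(blocked, offered)]

   # "Exp. Max. Alloc. (1)" BCs of Section 6.2 with the Section 3.2 load:
   print(simulate_mam_blocking([7, 8, 8], 15, [2.7, 3.5, 3.5]))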

6.1. Russian Dolls Model

For RDM, we consider two different settings:

"Russian Dolls (1)" BCs:

up to 6 simultaneous LSPs for class 1 by itself,

up to 11 simultaneous LSPs for classes 1 and 2 together, and

up to 15 simultaneous LSPs for all three classes together.

"Russian Dolls (2)" BCs:

up to 9 simultaneous LSPs for class 3 by itself,

up to 14 simultaneous LSPs for classes 3 and 2 together, and

up to 15 simultaneous LSPs for all three classes together.

Note that the "Russian Dolls (1)" set of BCs is the same as

previously with preemption enabled, whereas the "Russian Dolls (2)"

has the cascade of bandwidth arranged in reverse order of the

classes.


As observed in Section 4, the cascaded bandwidth arrangement is

intended to offer lower-priority traffic some protection from

preemption by higher-priority traffic. This is to avoid starvation. In a pure blocking environment, such protection is no longer

necessary. As depicted in Figure 9, it actually produces the

opposite, undesirable effect: higher-priority traffic sees higher

blocking than lower-priority traffic. With no preemption, higher-

priority traffic should be protected instead to ensure that it could get through when under high load. Indeed, when the reverse cascade

is used in "Russian Dolls (2)", the required performance of lower

blocking for higher-priority traffic is achieved, as shown in Figure 10. In this specific example, there is very little difference among the performance of the three classes in the first eight data points

for each of the three series. However, the BCs can be tuned to get a bigger differentiation.

6.2. Maximum Allocation Model

For MAM, we also consider two different settings:

"Exp. Max. Alloc. (1)" BCs:

up to 7 simultaneous LSPs for class 1,

up to 8 simultaneous LSPs for class 2, and

up to 8 simultaneous LSPs for class 3.

"Exp. Max. Alloc. (2)" BCs:

up to 7 simultaneous LSPs for class 1, with additional bandwidth for 1 LSP privately reserved

up to 8 simultaneous LSPs for class 2, and

up to 8 simultaneous LSPs for class 3.

These BCs are chosen so that, under normal conditions, the blocking

performance is similar to all the previous scenarios. The only

difference between these two sets of values is that the "Exp. Max.

Alloc. (2)" algorithm gives class 1 a private pool of 1 server for

class protection. As a result, class 1 has a relatively lower

blocking especially when its traffic is above normal, as can be seen by comparing Figures 11 and 12. This comes, of course, with a slight increase in the blocking of classes 2 and 3 traffic.

When comparing the "Russian Dolls (2)" in Figure 10 with MAM in

Figures 11 or 12, the difference between their behavior and the

associated explanation are again similar to the case when preemption is used. The higher degree of sharing in the cascaded bandwidth

arrangement of RDM leads to a tighter coupling between the different classes of traffic when under overload. Their performance therefore

tends to degrade together when the load of any one class is

increased. By imposing explicit maximum bandwidth usage on each

class individually, better class isolation is achieved. The trade-

off is that, generally, blocking performance in MAM is somewhat

higher than in RDM, because of reduced sharing.

The difference in the behavior of RDM with or without preemption has already been discussed at the beginning of this section. For MAM,

some notable differences can also be observed from a comparison of

Figures 1 and 11. If preemption is used, higher-priority traffic

tends to be able to maintain its performance despite the overloading of other classes. This is not so if preemption is not allowed. The trade-off is that, generally, the overloaded class sees a relatively higher blocking/preemption when preemption is enabled than there

would be if preemption is disabled.

7. Performance under Complete Sharing

As observed towards the end of Section 3, the partitioning of

bandwidth capacity for access by different traffic classes tends to

reduce the maximum link efficiency achievable. We now consider the

case where there is no such partitioning, thereby resulting in full

sharing of the total bandwidth among all the classes. This is

referred to as the Complete Sharing Model.

For MAM, this means that the BCs are such that up to 15 simultaneous LSPs are allowed for any class.

Similarly, for RDM, the BCs are

up to 15 simultaneous LSPs for class 1 by itself,

up to 15 simultaneous LSPs for classes 1 and 2 together, and

up to 15 simultaneous LSPs for all three classes together.

Effectively, there is now no distinction between MAM and RDM. Figure 13 shows the performance when all classes have equal access to link

bandwidth under Complete Sharing.

With preemption being fully enabled, class 1 sees virtually no

blocking, regardless of the loading conditions of the link. Since

class 2 can only preempt class 3, class 2 sees some blocking and/or

preemption when either class 1 load or its own load is above normal; otherwise, class 2 is unaffected by increases of class 3 load. As

higher priority classes always preempt class 3 when the link is full, class 3 suffers the most, with high blocking/preemption when there is any load increase from any class. A comparison of Figures 1, 2, and 13 shows that, although the performance of both classes 1 and 2 is

far superior under Complete Sharing, class 3 performance is much


better off under either MAM or RDM. In a sense, class 3 is starved

under overload as no protection of its traffic is being provided

under Complete Sharing.

8. Implications on Performance Criteria

Based on the previous results, a general theme is shown to be the

trade-off between bandwidth sharing and class protection/isolation.

To show this more concretely, let us compare the different BCMs in

terms of the overall loss probability. This quantity is defined as

the long-term proportion of LSP requests from all classes combined

that are lost as a result of either blocking or preemption, for a

given level of offered load.
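For a given load point, this quantity is simply the per-class blocking/preemption probabilities averaged with the offered loads as weights; the small illustration below (not taken from the RFC) uses the Section 3.2 loads and the Table 1 MAM figures.

   # Overall loss probability: the proportion of all LSP requests lost,
   # i.e., per-class blocking/preemption probabilities weighted by load.
   def overall_loss(loads, per_class_loss):
       return sum(a * p for a, p in zip(loads, per_class_loss)) / sum(loads)

   # Section 3.2 loads with the Table 1 MAM values (PB1, PB2+PP2, PB3+PP3):
   print(overall_loss([2.7, 3.5, 3.5], [0.03692, 0.03961, 0.04659]))   # ~0.041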

As noted in the previous sections, although RDM has a higher degree

of sharing than MAM, both ultimately converge to the Complete Sharing Model as the degree of sharing in each of them is increased. Figure 14 shows that, for a single link, the overall loss probability is the smallest under Complete Sharing and the largest under MAM, with that under RDM being intermediate. Expressed differently, Complete

Sharing yields the highest link efficiency and MAM the lowest. As a matter of fact, the overall loss probability of Complete Sharing is

identical to the loss probability of a single class as computed by

the Erlang loss formula. Yet Complete Sharing has the poorest class protection capability. (Note that, in a network with many links and multiple-link routing paths, analysis in [6] showed that Complete

Sharing does not necessarily lead to maximum network-wide bandwidth

efficiency.)

Increasing the degree of bandwidth sharing among the different

traffic classes helps increase link efficiency. Such increase,

however, will lead to a tighter coupling between different classes.

Under normal loading conditions, proper dimensioning of the link so

that there is adequate capacity for each class can minimize the

effect of such coupling. Under overload conditions, when there is a scarcity of capacity, such coupling will be unavoidable and can cause severe degradation of service to the lower-priority classes. Thus,

the objective of maximizing link usage as stated in criterion (5) of Section 1 must be exercised with care, with due consideration to the effect of interactions among the different classes. Otherwise, use

of this criterion alone will lead to the selection of the Complete

Sharing Model, as shown in Figure 14.

The intention of criterion (2) in judging the effectiveness of

different BCMs is to evaluate how they help the network achieve the

expected performance. This can be expressed in terms of the blocking and/or preemption behavior as seen by different classes under various loading conditions. For example, the relative strength of a BCM can
