
PHAD: Packet Header Anomaly Detection

for Identifying Hostile Network Traffic

Matthew V. Mahoney and Philip K. Chan

Department of Computer Sciences

Florida Institute of Technology

Melbourne, FL 32901

{mmahoney,pkc}@cs.fit.edu

Florida Institute of Technology Technical Report CS-2001-04

Abstract

We describe an experimental packet header anomaly detector (PHAD) that learns the normal range of values for 33 fields of the Ethernet, IP, TCP, UDP, and ICMP protocols. On the 1999 DARPA off-line intrusion detection evaluation data set (Lippmann et al. 2000), PHAD detects 72 of 201 instances (29 of 59 types) of attacks, including all but 3 types that exploit the protocols examined, at a rate of 10 false alarms per day after training on 7 days of attack-free internal network traffic. In contrast to most other network intrusion detectors and firewalls, only 8 attacks (6 types) are detected based on anomalous IP addresses, and none by their port numbers. A number of variations of PHAD were studied, and the best results were obtained by examining packets and fields in isolation, and by using simple nonstationary models that estimate probabilities based on the time since the last event rather than the average rate of events.

1. Introduction

Most network intrusion detection systems (IDS) that use anomaly detection look for anomalous or unusual port numbers and IP addresses, where "unusual" means any value not observed in training on normal (presumably nonhostile) traffic. They use the firewall paradigm: a packet addressed to a nonexistent host or service must be hostile, so we reject it. The problem with the firewall model is that attacks addressed to legitimate services will still get through, even though the packets may differ from normal traffic in ways that we could detect.

Sasha and Beetle (2000) argue for a strict anomaly model to detect attacks on the TCP/IP stack. Attacks such as queso, teardrop, and land (Kendall 1999) exploit vulnerabilities in stacks that do not know how to handle unusual data. Queso is a fingerprinting probe that determines the operating system using characteristic responses to unusual packets, such as packets with the TCP reserved flags set. Teardrop crashes stacks that cannot cope with overlapping IP fragments. Land crashes stacks that cannot cope with a spoofed IP source address equal to the destination. Attacks are necessarily different from normal traffic because they exploit bugs, and bugs are most likely to be found in the parts of the software that were tested the least during normal use.

Horizon (1998) and Ptacek and Newsham (1998) describe techniques for attacking or evading an application layer IDS that would produce anomalies at the layers below. Techniques include the deliberate use of bad checksums, unusual TCP flags or IP options, invalid sequence numbers, spoofed addresses, duplicate TCP packets with differing payloads, packets with short TTLs that expire between the target and IDS, and so on.

A common theme of attacks is that they exploit software bugs, either in the target or in the IDS. Bugs are inevitable in all software, so we should not be surprised to find them in attacking software as well. For example, in the DARPA intrusion detection data set (Lippmann et al. 2000), we found that smurf and udpstorm do not compute correct ICMP and UDP checksums. Smurf is a distributed ICMP echo reply flood, initiated by sending an echo request (ping) to a broadcast address with the spoofed source IP address of the target. UDPstorm sets up a flood between the echo and chargen servers on two victims by sending a request to one with the spoofed source address of the other.

A fourth source of anomalies might be the response from the victim of a successful attack. Forrest et al. (1996) has shown that victimized programs make unusual sequences of system calls, so we might expect them to produce unusual output as well. In any case, there are a wide variety of anomalies across many different fields of various network protocols, so an IDS ought to check them all.

The rest of the paper is organized as follows. In Section 2, we discuss related work in anomaly detection. In Section 3, we describe a packet header anomaly detector (PHAD) that looks at all fields except the application payload. In Section 4, we train PHAD on attack-free traffic from the DARPA data set and test it with 201 simulated attacks. In Section 5, we simulate PHAD in an on-line environment where the training data is not guaranteed to be attack-free. In Section 6, we compare various PHAD models. In Section 7, we compare PHAD with the intrusion detection systems that were submitted to the DARPA evaluation in 1999. We summarize the results in Section 8.

2. Related Work

Network intrusion detection systems like snort (2001) or Bro (Paxson, 1998) typically use signature detection, matching patterns in network traffic to the patterns of known attacks. This works well, but has the obvious disadvantage of being vulnerable to novel attacks. An alternative approach is anomaly detection, which models normal traffic and signals any deviation from this model as suspicious. The idea is based on work by Forrest et al. (1996), who found that most UNIX processes make (mostly) highly predictable sequences of system calls in normal use. When a server or suid root program is compromised (by a buffer overflow, for example), it executes code supplied by the attacker, and deviates from its normal calling sequence.

It is not possible to observe every possible legitimate pattern in training, so an anomaly detector requires some type of machine learning algorithm in order to generalize from the training set. Forrest uses an n-gram model, allowing any sequence as long as all subsequences of length n (about 3 to 6) have been previously observed. Sekar et al. (2001) use a state machine model, where a state is defined as the value of the program counter when the system call is made, and allow any sequence as long as all of the state transitions have been observed in training. Ghosh et al. (1999) use a neural network trained to accept observed n-grams and reject randomly generated training sequences.

Network anomaly detectors look for unusual traffic rather than unusual system calls. ADAM (Audit Data and Mining) (Barbará, Wu, and Jajodia, 2001) is an anomaly detector trained on both attack-free traffic and traffic with labeled attacks. It monitors port numbers, IP addresses and subnets, and TCP state. The system learns rules such as "if the first 3 bytes of the source IP address is X, then the destination port is Y with probability p". It also aggregates packets over a time window. ADAM uses a naive Bayes classifier, which means that the probability that a packet belongs to some class (normal, known attack, or unknown) depends on the a-priori probability of the class, and the combined probabilities of a large collection of rules under the assumption that they are independent. ADAM has separate training and detection modes.

NIDES (Anderson et al. 1995), like ADAM, monitors ports and addresses. Instead of using explicit training data, it builds a model of long term behavior over a period of hours or days, which is assumed to contain few or no attacks. If short term behavior (seconds, or a single packet) differs significantly, then an alarm is raised. NIDES does not model known attacks; instead it is used as a component of EMERALD (Neumann and Porras, 1999), which includes host and network based signature detection for known attacks.

Spade is a snort (2001) plug-in that detects anomalies in network traffic. Like NIDES and ADAM, it is based on port numbers and IP addresses. It uses several user selectable statistical models, including a Bayes classifier, and no explicit training period. It is supplemented by snort rules that use signature detection for known attacks. Snort rules are more powerful, in that they can test any part of the packet including string matching in the application payload. To allow examination of the application layer, snort includes plug-ins that reassemble IP fragments and TCP streams.

3. Packet Header Anomaly Detection (PHAD)

We developed an anomaly detection algorithm (PHAD) that learns the normal ranges of values for each packet header field at the data link (Ethernet), network (IP), and transport/control layers (TCP, UDP, ICMP). PHAD does not currently examine application layer protocols like DNS, HTTP or SMTP, so it would not detect attacks on servers, although it might detect attempts to hide them from an application layer monitor like snort by manipulating the TCP/IP protocols.

An important shortcoming of all anomaly detection systems is that they cannot discern intent; they can only detect when an event is unusual, which may or may not indicate an attack. Thus, a system should have a means of ranking alarms by how unusual or unexpected they are, on the assumption that the rarer the event, the more likely it is to be hostile. If this assumption holds, the user can adjust a threshold to trade off a higher detection rate against a lower false alarm rate. PHAD is based on the assumption that events that occur with probability p should receive a score of 1/p.

PHAD uses the rate of anomalies during training to estimate the probability of an anomaly while in detection mode. If a packet field is observed n times with r distinct values, there must have been r "anomalies" during the training period. If this rate continues, the probability that the next observation will be anomalous is approximated by r/n. This method is probably an overestimate, since most anomalies probably occur early during training, but it is easy to compute, and it is consistent with the PPMC method of estimating the probability of novel events used by data compression programs (Bell, Witten, and Cleary, 1989).

To consider the dynamic behavior of real-time traffic, PHAD uses a nonstationary model while in detection mode. In this model, if an event last occurred t seconds ago, then the probability that it will occur in the next one second is approximated by 1/t. Often, when an event occurs for the first time, it is because of some change of state in the network, for example, installing new software or starting a process that produces a particular type of traffic. Thus, we assume that events tend to occur in bursts. During training, the first anomalous event of a burst is added to the model, so only one anomaly is counted. This does not happen in detection mode, so we discount the subsequent events by the factor t, the time since the previous anomaly in the current field. Thus, each packet header field containing an anomalous value is assigned a score inversely proportional to the probability:

score_field = tn/r

Finally, we add up the field scores to score the packet. Since each score is an inverse probability, we could assume that the fields are independent and multiply them to get the inverse probability of the packet. But they are not independent. Instead, we use a crude extension of the nonstationary model in which we treat the fields as occurring sequentially. If all the tn/r are equal, then the probability of observing k consecutive anomalies in a nonstationary model is (r/tn)(1/2)(2/3)(3/4)...((k-1)/k) = (1/k)(r/tn). This is consistent with the score ktn/r that we would obtain by summation. Thus, we assign a packet score of

score_packet = Σ_{i ∈ anomalous fields} t_i n_i / r_i

where the anomalous fields are the fields with values not found in the training model. In PHAD, the packet header fields range from 1 to 4 bytes, allowing 2^8 to 2^32 possible values, depending on the field. It is not practical to store a set of 2^32 values for two reasons. First, the memory cost is prohibitive, and second, we want to allow generalization to reasonable values not observed in the limited training data. The approach we have used is to store a list of ranges or clusters, up to some limit C = 32. If C is exceeded during training, then we find the two closest ranges and merge them. For instance, if we have the set {1, 3-5, 8}, then merging the two closest clusters yields {1-5, 8}. There are other possibilities, which we discuss in Section 6, but clustering is appropriate for fields representing continuous values, and this method (PHAD-C32) gives the best results on the test data that we used.
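For illustration, here is a minimal sketch in Python (hypothetical names, not the original implementation) of the per-field PHAD-C model described above: training stores up to C = 32 ranges and merges the two closest on overflow; in detection mode, a value outside every range scores tn/r, with t measured from the last anomaly in that field.

```python
import bisect

class FieldModel:
    """PHAD-C model for one packet header field (sketch, C = 32)."""
    def __init__(self, C=32):
        self.C = C
        self.ranges = []      # sorted, non-overlapping [lo, hi] clusters
        self.n = 0            # number of training observations
        self.r = 0            # number of training anomalies (list additions)
        self.last_anom = 0.0  # time of the last anomaly in this field

    def _contains(self, v):
        return any(lo <= v <= hi for lo, hi in self.ranges)

    def train(self, v, t):
        self.n += 1
        if not self._contains(v):
            self.r += 1
            self.last_anom = t
            bisect.insort(self.ranges, [v, v])
            if len(self.ranges) > self.C:
                # merge the two closest ranges, e.g. {1, 3-5, 8} -> {1-5, 8}
                gaps = [self.ranges[i + 1][0] - self.ranges[i][1]
                        for i in range(len(self.ranges) - 1)]
                i = gaps.index(min(gaps))
                self.ranges[i][1] = self.ranges[i + 1][1]
                del self.ranges[i + 1]

    def score(self, v, t):
        """0 if v was seen in training; otherwise t*n/r, nonstationary in t."""
        if self._contains(v):
            return 0.0
        dt, self.last_anom = t - self.last_anom, t
        return dt * self.n / self.r

def packet_score(models, fields, t):
    # sum t_i * n_i / r_i over the anomalous fields of one packet
    return sum(m.score(fields[name], t)
               for name, m in models.items() if name in fields)
```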

PHAD examines 33 packet header fields, mostly as defined in the protocol specifications. Fields smaller than 8 bits (such as the TCP flags) are grouped into a single byte field. Fields larger than 4 bytes (such as the 6 byte Ethernet addresses) are split in half. The philosophy behind PHAD is to build as little protocol-specific knowledge as possible into the algorithm, but we felt it was necessary to compute the checksum fields (IP, TCP, UDP, ICMP), because it would be unreasonable for a machine learning algorithm to figure out how to do this on its own. Thus, we replace the checksum fields with their computed values (normally FFFF hex) prior to processing.
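As a concrete example, a minimal sketch of this checksum canonicalization (a hypothetical helper, assuming the standard Internet checksum rule): the one's-complement sum of the header, taken over all 16-bit words including the stored checksum, is FFFF hex exactly when the checksum is correct, so this sum can stand in for the raw field value.

```python
def checksum_field(header: bytes) -> int:
    """One's-complement sum of 16-bit words; 0xFFFF iff the checksum is valid."""
    s = 0
    for i in range(0, len(header) - 1, 2):
        s += (header[i] << 8) | header[i + 1]
    if len(header) % 2:          # odd length: pad with a trailing zero byte
        s += header[-1] << 8
    while s >> 16:               # fold carries (end-around carry)
        s = (s & 0xFFFF) + (s >> 16)
    return s                     # feed this, not the raw checksum, to the model
```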

4. Experimental Results on the 1999 DARPA IDS Evaluation Data Set

We evaluated PHAD on the 1999 DARPA off-line IDS evaluation data set (Lippmann et al. 2000). The set consists of network traffic (tcpdump files) collected at two points, BSM logs, audit logs, and file system dumps from a simulated local network of an imaginary air force base over a 5 week period. The simulated system consists of four real "victim" machines running SunOS, Solaris, Linux, and Windows NT, a Cisco router, and a simulation of a local network with hundreds of other hosts and thousands of users and an Internet connection without a firewall. We trained PHAD on 7 days of attack-free network traffic (week 3) collected from a sniffer between the router and victim machines (inside.tcpdump), and tested it on 9 days of traffic from the same point (weeks 4 and 5, except week 4 day 2, which is missing), during which time there were 201 attacks (183 in the available data). The attacks were mostly published exploits from rootshell.com and the bugtraq mailing list, and are described by Kendall (1999), with a few developed in-house. The test set includes a list of all attacks, labeled with the victim IP address, time, type, and a brief description. Table 4.1 shows the PHAD-C32 model after training.

The model lists all values observed in training, clustered if necessary. The values r/n show the number of anomalies that occurred in training (resulting in an addition to the list) out of the total number of packets in which the field is present. It can be observed that port numbers and IP addresses are not among the fields with a small value of r/n, and so should not generate large anomaly scores.

Field name r/n Values

Ether Size 508/12814738 42 60-1181 1182...

Ether Dest Hi 9/12814738 x0000C0 x00105A x00107B...

Ether Dest Lo 12/12814738 x000009 x09B949 x13E981...

Ether Src Hi 6/12814738 x0000C0 x00105A x00107B...

Ether Src Lo 9/12814738 x09B949 x13E981 x17795A...

Ether Protocol 4/12814738 x0136 x0800 x0806 x9000

IP Header Len 1/12715589 x45

IP TOS 4/12715589 x00 x08 x10 xC0

IP Length 527/12715589 38-1500

IP Frag ID 4117/12715589 0-65461 65462 65463...

IP Frag Ptr 2/12715589 x0000 x4000

IP TTL 10/12715589 2 32 60 62-64 127-128 254-255

IP Protocol 3/12715589 1 6 17

IP Checksum 1/12715589 xFFFF

IP Src 293/12715589 12.2.169.104-12.20.180.101...

IP Dest 287/12715589 0.67.97.110 12.2.169.104-12.20.180.101...

TCP Src Port 3546/10617293 20-135 139 515...

TCP Dest Port 3545/10617293 20-135 139 515...

TCP Seq 5455/10617293 0-395954185 395969583-396150583...

TCP Ack 4235/10617293 0-395954185 395969584-396150584...

TCP Header Len 2/10617293 x50 x60

TCP Flg UAPRSF 9/10617293 x02 x04 x10...

TCP Window Sz 1016/10617293 0-5374 5406-10028 10069-10101...

TCP Checksum 1/10617293 xFFFF

TCP URG Ptr 2/10617293 0 1

TCP Option 2/611126 x02040218 x020405B4

UDP Src Port 6052/2091127 53 123 137-138...

UDP Dest Port 6050/2091127 53 123 137-138...

UDP Len 128/2091127 25 27 29...

UDP Checksum 2/2091127 x0000 xFFFF

ICMP Type 3/7169 0 3 8

ICMP Code 3/7169 0 1 3

ICMP Checksum 1/7169 xFFFF

Table 4.1. The PHAD-C32 model after training on week 3.

4.1. Top 20 alarms generated by PHAD-C32

The top 20 scoring alarms are shown in Table 4.2. Eight of these correctly identify an attack according to DARPA criteria, which require the destination IP address of a packet involved in the attack (either the victim or a response to the attacker), and the time of the attack within 60 seconds. Another three alarms detect arppoison from anomalous ARP packets, but since these do not have an IP address, they do not meet the criteria for detection. The other 9 alarms are false positives (FP). The most anomalous field column shows which field contributes the most to the anomaly score, the value of that field, and the percentage contribution to the total score. The score is displayed on a logarithmic scale (using 0.1 log10(Σtn/r) - 0.6) to fit the range 0 to 1. The machines under attack are on the net 172.16.112-118.xxx. The router is 192.168.1.1, but since the sniffer is between the router and the local network, only inside attacks on it can be detected.

Date Time Dest. IP Addr. Score Det Attack Most anomalous field

04/06 08:59:16 172.016.112.194 0.748198 FP TCP Checksum=x6F4A 67%

04/01 08:26:16 172.016.114.050 0.697083 TP teardrop IP Frag Ptr=x2000 100%

04/01 11:00:01 172.016.112.100 0.689305 TP dosnuke TCP URG Ptr=49 100%

03/31 08:00:28 192.168.001.030 0.664309 FP IP TOS=x20 100%

03/31 11:35:13 000.000.000.000 0.664225 FP (arppoison) Ether Src Hi=xC66973 68%

03/31 11:35:18 172.016.114.050 0.653956 FP Ether Dest Hi=xE78D76 57%

04/08 08:01:20 172.016.113.050 0.644237 FP IP Frag Ptr=x2000 35%

04/05 08:39:50 172.016.112.050 0.639134 TP pod IP Frag Ptr=x2000 89%

04/05 20:00:27 172.016.113.050 0.628749 TP udpstorm UDP Checksum=x90EF 100%

04/09 08:01:26 172.016.113.050 0.626455 FP TCP Checksum=x77F7 48%

04/05 11:45:27 172.016.112.100 0.626234 TP dosnuke TCP URG Ptr=49 100%

03/29 11:15:08 192.168.001.001 0.615842 TP portsweep TCP Flg UAPRSF=x01 99%

03/29 09:15:01 172.016.113.050 0.612167 TP portsweep IP TTL=44 100%

04/08 23:10:00 000.000.000.000 0.601878 FP (arppoison) Ether Src Hi=xC66973 60%

04/06 11:57:55 206.048.044.050 0.601356 FP IP TOS=xC8 100%

04/05 11:17:50 000.000.000.000 0.597209 FP (arppoison) Ether Src Hi=xC66973 60%

04/07 08:39:42 172.016.114.050 0.592460 FP TCP Checksum=x148C 94%

04/08 23:11:04 172.016.112.010 0.586338 TP mscan Ether Dest Hi=x29CDBA 57%

04/08 08:01:21 172.016.113.050 0.583140 FP TCP URG Ptr=34757 98%

04/05 11:18:45 172.016.118.020 0.581669 FP Ether Dest Hi=xE78D76 57%

Table 4.2. Top 20 scoring alarms on weeks 4 (3/29-4/2) and 5 (4/5-4/9).

4.2. Attacks detected

By DARPA criteria, PHAD-C32 detects 67 of the 201 attack instances at a rate of 10 false alarms per day (100 total). These are listed in Table 4.3 (plus arppoison, which does not identify the IP address). The det column shows the number of detections out of the total number of instances of the attack described. DARPA classifies attacks as probes, denial of service (DOS), remote to local (R2L), user to root (U2R), and other violations of a security policy (Data). The how detected column describes the anomaly that led to detection, based on the field or fields that contribute the most to the anomaly score. The numbers in parentheses indicate how many instances are detected with each field when there is a difference in the way that the instances are detected. For example, the dosnuke attacks generate two alarms each (URG pointer and TCP flags), but the ipsweep attacks generate only one alarm each. Two ipsweep attacks are detected by an anomalous value of 253 in the TTL field of the attacking packet, two by an anomalous packet size in the response from the victim, and three are not detected.

Attack Description Det How detected

apache2 DOS, HTTP overflow in apache web server 2/3 outgoing TCP window, TCP options, incoming TTL=253
(arppoison) DOS, spoofed ARP with bad IP/Ethernet map 0/5 Ethernet source address
casesen U2R, NT bug exploit 1/3 TTL=253
crashiis DOS, HTTP bug in IIS web server 1/8 TTL=253
dosnuke DOS, URG data to netbios port crashes NT 4/4 URG ptr, TCP flags
guessftp R2L, FTP password guessing 1/2 TTL=253
guesstelnet R2L, telnet password guessing 2/2 TTL=253
insidesniffer Probe, detected by reverse DNS lookups 1/2 bad TCP checksum
ipsweep Probe, tests for valid IP addresses 4/7 outgoing packet size (2), incoming TTL=253 (2)
mailbomb DOS, mail server flood 2/4 TTL=253
mscan Probe, tests many R2L vulnerabilities 1/1 Ethernet dest. addr.
named R2L, DNS buffer overflow 1/3 TTL=253
neptune DOS, SYN flood 1/4 TTL=253
netbus R2L, backdoor 3/3 TTL=126
netcat_breakin R2L, backdoor install 1/2 TTL=126
ntinfoscan Probe, test for NT vulnerabilities 1/3 TTL=126
pod DOS, oversize IP packet (ping of death) 4/4 fragmented IP
portsweep Probe, test for listening ports 13/15 FIN w/o ACK (3), IP src/dest (1), packet size (1), TTL=36...52 (8)
ppmacro R2L, Powerpoint macro trojan 1/3 TTL=253
processtable DOS, server request flood 1/4 IP source address
queso Probe, stack fingerprint for operating system 3/4 FIN without ACK
satan Probe, test for many R2L vulnerabilities 2/2 packet size (1), TTL (1)
sechole U2R, NT bug exploit 1/3 TTL=253
sendmail R2L, SMTP buffer overflow 1/2 outgoing IP dest addr, incoming IP src addr
smurf DOS, distributed ICMP echo reply flood 5/5 ICMP checksum (2), IP src addr (1), TTL=253 (2)
teardrop DOS, overlapping IP fragments 3/3 fragmented IP
udpstorm DOS, echo/chargen loop using spoofed request 2/2 UDP checksum
warez Data, unauthorized files on FTP server 1/4 outgoing IP dest. address
xlock R2L, fake screensaver captures password 1/2 IP source address

Table 4.3. Attacks detected by PHAD.

4.3. Contributions of fields to detection

Table 4.4 lists the attacks detected and false alarms generated by each field. TP is the number of detections, dup is the number of duplicate detections of an attack already detected by a higher scoring alarm, and FP is the number of false alarms generated by the field. Outgoing indicates that the attack was detected on a packet responding to the attack.

Field TP Dup FP Detected attacks

Ethernet Size 1 2 1 ipsweep (outgoing)

Ethernet Dest Hi 1 0 6 mscan

Ethernet Src Hi 0 0 7 (arppoison)

IP TOS 0 0 7

IP Packet Length 2 2 1 teardrop, portsweep, satan

IP TTL 33 8 20 netcat_breakin, netbus, ntinfoscan, dosnuke, queso, casesen, satan, apache2, mscan, mailbomb, ipsweep, ppmacro, guesstelnet, neptune, sechole, crashiis, named, smurf, portsweep

IP Frag Ptr 7 0 2 teardrop, pod

IP Source Address 4 1 0 smurf, xlock, portsweep, processtable, sendmail

IP Dest Address 2 1 7 portsweep, warez (outgoing), sendmail (outgoing)

TCP Acknowledgment 0 0 1

TCP Flags UAPRSF 7 3 2 queso, portsweep, dosnuke

TCP Window Size 0 1 2 apache2 (outgoing)

TCP Checksum 1 0 29 insidesniffer

TCP URG Ptr 3 0 5 dosnuke

TCP Options 2 0 4 apache2 (outgoing)

UDP Length 0 0 5

UDP Checksum 2 0 0 udpstorm

ICMP Checksum 2 0 0 smurf

Table 4.4. Packet header fields that generate alarms in PHAD.

Fifteen of the 33 fields did not generate any anomalies. These are the lower 3 bytes of the Ethernet source and destination addresses, IP header length, IP fragment ID, IP protocol, IP checksum, TCP source and destination ports, TCP sequence number, TCP header length, UDP source and destination ports and length, and the ICMP type and code fields. Of the 67 attacks, only 8 were detectable by their IP addresses, and none by their port numbers.

The 30 TCP checksum errors are mostly due to fragmented IP packets that fragment the TCP header. This is one of the techniques described by Horizon (1998) to thwart an IDS. There is no legitimate reason for using such small fragments, but no attack is listed by DARPA. We believe that PHAD is justified in reporting this anomaly. The error (accounting for 36% of the total score in most cases) occurs because PHAD makes no attempt to reassemble IP fragments and mistakenly computes a TCP checksum on an incomplete packet. The detection of insidesniffer is a coincidence.

The TTL field generates 33 detections of 19 types of attacks, and 20 false alarms. TTL is an 8-bit counter (0-255) which is decremented with each router hop until it reaches zero, in order to prevent infinite routing loops. Most of the detections and all of the false alarms due to TTL result from the anomalous values 126 or 253, which are absent in the training data. We believe that this is not realistic. The 12 million packets in the DARPA training set contain only 8 distinct TTL values (2, 32, 60, 62-64, 127-128, 254-255), but in real life, we observed 80 distinct TTL values in one million packets collected on a department web server over a 9 hour period. It is possible that an attacker might manipulate the TTL field to thwart an IDS using methods described by Horizon (1998) or Ptacek and Newsham (1998), but these techniques involve using small values in order to expire packets between the target and the IDS.

A more likely explanation is that the attacks were launched from a real machine that was 2 hops away from the sniffer in the simulation, but all of the other machines were at most one hop away. It is extremely difficult to simulate Internet traffic correctly (Floyd and Paxson, 2001), so such artifacts are to be expected. When we run PHAD-C32 without the TTL field, it detects 48, instead of 67, attack instances.

4.4. Attacks not detected

Table 4.5 lists all attacks not detected by PHAD. All of the undetected attacks except land, resetscan, and syslogd either exploit bugs at the application layer, which PHAD does not monitor, or are U2R or data attacks where the attacker already has a shell. In the latter case, the attacks are difficult to detect because they require interpretation of user commands, and could be hidden by running them from a local script or through an encrypted SSH session. A better way to detect such attacks might be to use anomaly detection at the system call level, for example, as described by Forrest et al. (1996).

Attack Description Det

anypw U2R, NT backdoor 0/1
back DOS, HTTP overflow 0/4
dict R2L, password guessing using dictionary 0/1
eject U2R, Solaris buffer overflow in suid root prog. 0/2
fdformat U2R, Solaris buffer overflow in suid root prog. 0/3
ffbconfig U2R, Solaris buffer overflow in suid root prog. 0/2
framespoofer R2L, hostile web page hidden in invisible frame 0/1
guesspop R2L, password guessing to POP3 mail server 0/1
httptunnel R2L, backdoor (uses HTTP to evade firewalls) 0/3
imap R2L (root), buffer overflow to IMAP server 0/2
land DOS, IP with source = destination crashes SunOS 0/2
loadmodule U2R, SunOS, set IFS to call trojan suid program 0/3
ls Probe, download DNS map 0/3
ncftp R2L, FTP exploit 0/5
netcat R2L, backdoor 0/2
ntfsdos Data, copy NT disk at console 0/3
perl U2R, Linux exploit 0/4
phf R2L, Apache default CGI exploit 0/4
ps U2R, Solaris exploit using race conditions 0/4
resetscan Probe, test for listening ports 0/1
secret Data, unauthorized copying 0/5
selfping DOS, Solaris, ping localhost from shell crashes 0/3
snmpget R2L, Cisco router password guessing 0/4
sqlattack U2R, escape from SQL database shell 0/3
sshtrojan U2R, fake SSH login screen captures password 0/3
syslogd DOS, Solaris, crash audit logger w/ spoofed IP 0/4
tcpreset DOS, sniff connections, spoof RST to close 0/3
xsnoop R2L, intercept keystrokes on open X servers 0/3
xterm U2R, Linux buffer overflow in suid root prog. 0/3
yaga U2R, NT 0/4

Table 4.5. Attacks not detected by PHAD.

Land crashes SunOS by sending a spoofed IP packet with identical source and destination addresses, but PHAD misses it because it only looks at one field at a time. Resetscan probes for ports by sending a RST packet on an unopened TCP port, but PHAD misses it because it doesn't track TCP connection states, and doesn't know that the connection is not open. Syslogd crashes the syslogd server on Solaris by sending a packet with a spoofed source IP that cannot be resolved to a hostname. If the threshold is increased from 10 false alarms per day to 30, PHAD does indeed detect all four instances of this attack, two by their source IP address, and two by an anomalous IP packet length (of 32).

4.5. Run-time overhead in time and space

Our implementation of PHAD-C32 processes 2.9 gigabytes of training data and 4.0 gigabytes of test data in 364 seconds (310 user + 54 system), or 95,900 packets per second on a Sparc Ultra 60 with a 450 MHz 64-bit processor, 512 MB memory and 4 MB cache. The overhead is 23 seconds of CPU time per simulated day, or 0.026% at the simulation rate. The wall time in our test was 465 seconds (78% usage), consisting of 165 seconds of training (77,665 packets per second) and 300 seconds of testing (73,560 packets per second). The PHAD model uses negligible memory: 34 fields times 32 pairs of 4-byte integers to represent the bounds of each cluster, or 8 kilobytes total.

5. Attacks in the Training Data

In a practical system, known attack-free training data will not always be available as it is in the DARPA set. If the training data contains attacks, then the anomalies that it generates will be added to the model of "normal" traffic, so that future attacks of the same type will be missed. We could use a shorter training period to reduce the probability of training on an attack, but this could also reduce the set of allowed values, resulting in more false alarms. We performed experiments to answer two questions. First, how much is performance degraded by using a shorter training period? Second, how much is performance degraded due to the masking effect, where training on one attack type masks the detection of other attacks?

To answer the first question, we reduced the training period from 7 days (week 3, days 1-7), to 1 or 5 days. To answer the second question, we ran PHAD in on-line mode. For each test day, we used the previous day's traffic for training. On all but the first test day, the training data therefore contained attacks. The results are shown in Table 5.1 (for 10 FP/day).

Training set Detections

Days 1-7 (no attacks) 67
Days 1-5 66
Day 1 55
Day 2 51
Day 3 55
Day 4 51
Day 5 53
Day 6 64
Day 7 34
Average of days 1-7 52.3
Previous day (on-line with attacks) 35

Table 5.1. Attacks detected by PHAD using various training sets

First, we observe almost no loss in going from 7 training days to 5, but a significant 22% loss on average in going to one day. Second, we observe a loss of 17 detections, or 33% when we use the previous day's data as training for the next in on-line mode, compared to using one day of attack free training data. Out of 201 total attacks, there are 59 types (tables 4.3 and 4.5), so each attack type occurs on average 3.4 times during the 10 day test period, or 0.34 times per day. If we assume that each attack has a 34% chance of having occurred on the previous day, then the observed loss (33%) suggests that there is little or no masking effect. It is possible that there is a masking effect, but it is compensated by using more recent (better) training data in on-line mode than in the other 1-day tests. But if this is so, we should observe improved performance as we go from day 1 to day 7 of the training data, and no such trend is apparent.

6. Tuning PHAD

We investigated a number of ideas and performed experiments with PHAD to improve its performance on the DARPA evaluation data set, obtaining the best result with the variation we denote PHAD-C32 described in the previous sections. The results for the other versions are shown in Table 6.1. We describe the idea behind each variation briefly.

For each variation, we also applied a postprocessing step where we remove duplicate alarms (within 60 seconds of each other) that identify the same IP address. This step almost always improves the results because it reduces the number of false alarms (which would be counted twice) without reducing the number of detections (which would be counted only once). Also, in the rare case of tie scores, we rank them in order of the time since the previous alarm indicating the same IP address, with the longest intervals first. For instance, if there are alarms at times 5, 6, and 8 minutes, then we rank them 5, 8, 6 (intervals of 5, 2, and 1 minute) and discard 6 because it occurs within one minute of 5. Time-ranking has the same effect as the factor t in the PHAD-C32 score tn/r. A sketch of this step appears below.
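A minimal sketch of the duplicate-removal step, assuming the alarm list has already been ranked by score with ties broken by the time-ranking rule above:

```python
def postprocess(alarms, window=60.0):
    """alarms: list of (time_sec, ip, score), best-ranked first.

    Drop an alarm if an already-kept alarm names the same IP address
    within `window` seconds of it; detections are counted only once,
    so this mostly removes duplicate false alarms.
    """
    kept = []
    for t, ip, score in alarms:
        if any(ip == k_ip and abs(t - k_t) < window for k_t, k_ip, _ in kept):
            continue
        kept.append((t, ip, score))
    return kept
```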

For PHAD-C32, postprocessing increases the number of detections from 67 to 72, the best result we could obtain from any system we tried.

PHAD variation Detections Postprocessed

PHAD-C, values stored in C = 32, 1000 clusters, score = tn/r: 67, 64 / 72, 70
PHAD-H, values hashed modulo H = 1000: 64 / 67
PHAD-B, one byte fields, all values stored (H = 256): 19 / not tested
PHAD-S, stationary model, score = 1/p based on counts, H = 1000: 8 / 27
PHAD-KL, stationary KL divergence of 60 sec. windows, H = 1000: 16 / 16
PHAD-KL0, KL without fields where all H = 1000 values > 0: 22 / 38
PHAD-Kt, score = time t since last anomaly in nearest of K = 16, 32, 64 clusters in 34-dimensional packet space: 28, 38, 14 / 28, 38, 14
PHAD-Kt/r, score = t/r of nearest of K = 16, 32 clusters, containing r elements: 38, 46 / 39, 47
PHAD-Kt/a, score = t/a of nearest of K = 16, 32, 64 clusters expanded a times during training: 35, 53, 12 / 37, 53, 12
PHAD-1H, PHAD-H1000 but score = t/r (not tn/r): 58 / 62
PHAD-2H, H = 1000 over all pairs of fields, score = t/r: 56 / 62
PHAD-C32-TTL, PHAD-C32 without TTL field: 48 / 54

Table 6.1. Number of detections for various PHAD versions.

6.1. Increasing number of clusters

PHAD-C32 stores the observed values for each field in a list of 32 ranges or clusters. When C is increased to 1000, we have a more accurate representation of the training data, but lose some generalization when the training set is small. The models differ only when r > 32, which means that the change affects only low scoring fields. In our experiments, the postprocessed detection rate decreased slightly from 72 to 70.

6.2. Storing hashed values

PHAD-H stores a set of H = 1000 hashes (mod H) of the observed values. This is faster to look up (O(1) vs. O(log C)), but loses the ability to generalize over continuous fields (such as packet size). There is a slight decrease in detection rate from 72 to 67. In our implementation, we did not see a noticeable increase in speed.

6.3. One byte fields

A simplified version of PHAD is to parse the header into 1 byte fields and simply store the set of 256 possible values. However, the detection rate was severely degraded from 64 to 19 (not postprocessed), suggesting that it is worthwhile to find "meaningful" boundaries in the data.

6.4. Stationary models

PHAD makes no distinction between a training value that occurs once and a value that occurs millions of times. It is a nonstationary model: the probability of an event depends only on the time since it last occurred. We built a stationary model, PHAD-S, in which the probability of an event depends on its average rate during training and is independent of recent events. Specifically, if a value is observed r times in n trials, then we assign a score of 1/p, where p = (r + 1)/(n + H), and H is the number of possible (hashed) values. This is the Laplace approximation of the probability p, allowing nonzero probabilities for novel values. The result (8 detections) suggests that a stationary model is a poor one. This model generates a large number of tie scoring alarms, so postprocessing (which makes the model nonstationary) increases the number of detections to 27.
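The PHAD-S score is a short computation; a minimal sketch under the definitions above:

```python
def phad_s_score(r, n, H=1000):
    # Laplace-smoothed probability of the observed (hashed) value;
    # novel values (r = 0) get p = 1/(n + H) and hence the highest score
    p = (r + 1) / (n + H)
    return 1.0 / p
```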

All of the PHAD models so far rate individual packets independently, but we know that many attacks can be recognized by a short term change in the rate of some event over many packets (for example, a flooding attack). PHAD-KL assigns a score to a field over the set of packets in a 60 second time window, based on how much the distribution of values differs from the distribution in training. The normal way to measure the divergence of a probability distribution q (the 1 minute window) from a reference distribution p (the training data) is the Kullback-Leibler divergence, D(q || p) = Σ_i q_i log(q_i/p_i). Each p_i is measured using a Laplacian approximation as before, but q_i = r_i/n, where the i'th value is observed r_i times out of n during the one minute window. We do not use Laplacian estimation for q because otherwise a window with no packets would generate a uniform distribution and a large KL score, rather than a score of 0. PHAD-KL is a stationary model over the window, but the detections increase from 8 to 16 compared to considering single packets.
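A minimal sketch of the window score, with hypothetical argument names; q is deliberately left unsmoothed so that an empty window scores 0:

```python
import math

def kl_score(window_counts, train_counts, n_train, H=1000):
    """D(q || p) for one field: q from a 60 s window, p Laplace-smoothed."""
    n = sum(window_counts.values())
    score = 0.0
    for value, r in window_counts.items():
        q = r / n                                   # empirical window frequency
        p = (train_counts.get(value, 0) + 1) / (n_train + H)
        score += q * math.log(q / p)
    return score                                    # 0.0 if the window is empty
```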

A variant, PHAD-KL0, is to remove those fields which do not contribute to the other models, specifically those fields in PHAD-H1000 where all H possible values are observed. PHAD-KL should be able to collect a distribution (and therefore, useful information) from these fields, so we might expect the number of detections to drop in PHAD-KL0. Instead, the detections increase from 16 to 22 (16 to 38 postprocessed), suggesting that these fields probably added more noise than useful data.

6.5. Examining combinations of fields

Every version of PHAD so far looks at each field in isolation, but we know that some fields may or may not be anomalous depending on the values of other fields. For example, a TCP destination port of 21 (FTP) is only anomalous when the destination IP address is that of a host not running an FTP server. We would like a model that allows port 21 only in some range of IP addresses, and possibly restricts other fields at the same time. PHAD-K clusters all training packets into K clusters, each defined by a lower and upper bound on each field, or equivalently, a 34-dimensional volume (33 fields plus time) in packet-space, where each field is a dimension. The clustering algorithm is to add each packet to the packet space as a cluster of size 1^34 (1 by 1 by 1...) and, if the number of clusters exceeds K, to combine the two closest ones, where the distance is defined as the increase in volume that results from enclosing them both in a new cluster. The goal of this algorithm is to keep the model specific by minimizing the total cluster volume. Missing field values (such as the UDP checksum in a TCP packet) are assumed to be 0. There are three variants of PHAD-K, depending on how the score is computed for packets that fall outside of all clusters. In the PHAD-Kt variant, the score is t, where t is the time since the last anomaly in the nearest cluster (defining distance as before). This is a nonstationary model, and it assumes that consecutive anomalies are likely to be close together, and therefore closest to the same cluster. We tested K = 16, 32, and 64 clusters, with the best result of 38 detections at K = 32.
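A minimal sketch of the merge step, under one reading of the distance above (a cluster is represented as a list of (lo, hi) bounds, one per dimension; the cost of a merge is the volume added by the enclosing box):

```python
def volume(c):
    v = 1
    for lo, hi in c:
        v *= hi - lo + 1
    return v

def enclose(a, b):
    # smallest box containing both clusters
    return [(min(al, bl), max(ah, bh)) for (al, ah), (bl, bh) in zip(a, b)]

def merge_closest(clusters):
    """Replace the two clusters whose enclosing box adds the least volume."""
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            box = enclose(clusters[i], clusters[j])
            cost = volume(box) - volume(clusters[i]) - volume(clusters[j])
            if best is None or cost < best[0]:
                best = (cost, i, j, box)
    _, i, j, box = best
    del clusters[j]        # j > i, so delete j first
    clusters[i] = box
    return clusters
```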

PHAD-Kt is based on the assumption that the expected rates of anomalies near each cluster are initially equal. We can do better than that, by measuring the anomaly rate during training. We first estimate this rate by counting the number of packets, r, in each cluster, and assigning a packet score of t/r. This was tested with K = 16 and 32, and detects 47 attacks with K = 32. A better way to measure the anomaly rate is to count the number of times a cluster was expanded (due to an anomalous training event). A new packet gets a count of 1. If two clusters have counts of c1 and c2, then when merged the count is c1 + c2 + 1. This method, PHAD-Kt/a, was tested with K = 16, 32, and 64, with a best result of 53 detections at K = 32.

PHAD-K variants are computationally expensive (O(K) to find the closest cluster by linear search), and no better than PHAD-C or PHAD-H in the number of detections. PHAD-2H considers only pairs of fields, rather than combining them all at once, storing a hash (mod H = 1000) of both fields in a set of size H. There are (34)(33)/2 = 561 such pairs, plus the original 34 fields. In PHAD-H and PHAD-C, we use a score of tn/r, where n is the number of times the field is observed. Since we now have two fields with possibly different n, we have to eliminate n from the score. To test the effect of doing this, we modified PHAD-H to PHAD-1H (with H = 1000) using the score t/r, where there are r allowed values and the last anomaly was t seconds ago. The effect of removing n is to assign higher scores to the fields of seldom-used protocols such as UDP and ICMP. This baseline detects 62 attacks postprocessed (compared to 67 for PHAD-H). Extending this to PHAD-2H with 561 field pairs results in no change, 62 detections.

6.6. PHAD without TTL

We mentioned in Section 4.3 that many of the detections (and false alarms) are due to anomalies in the TTL field, which is probably a simulation artifact. When we remove this field (PHAD-C32-TTL), the number of detections drops from 72 to 54 postprocessed.

6.7. Conclusions

To summarize, our experiments suggest that:

- Nonstationary models (C, H, 1H, 2H, K), where the probability of an event depends on the time since it last occurred, outperform stationary models (S, KL), where the probability depends on the average rate during training.

- There is an optimal model size (C32, Kt32, Kt/r32, Kt/a32) which performs better than models that overfit the training data (C1000, Kt64, Kt/a64) or underfit (Kt16, Kt/r16, Kt/a16).

- Models that consider fields individually (C, H) outperform models that combine fields (2H, K), but these fields should be larger than one byte (B). (This could be an example of overfitting/underfitting.)

- Models that treat values as continuous (C) outperform discrete models (H).

- Postprocessing by removing duplicate alarms or ranking by time interval never hurts.

7. A Note on Results from DARPA Participants in 1999

Lippmann et al. (2000) report that the best 4 of the 18 systems evaluated on the DARPA set in 1999 had detection rates of 40% to 55% on the types of attacks that they were designed to detect (Table 7.1). However, we caution that unlike the original participants, we had access to the test data during development and did our own unofficial evaluation of the results. Also, the original participants could only categorize their systems according to the type of attacks they were to detect (probe, DOS, R2L, U2R, data), operating system (SunOS, Solaris, Linux, NT, Cisco), and type of data examined (inside or outside traffic, BSM, audit logs, and file system dumps). We use a finer distinction when we classify attacks by protocol.

System Detections

Expert 1 85/169 (50%)
Expert 2 81/173 (47%)
Dmine 41/102 (40%)
Forensics 15/27 (55%)

Table 7.1. Official detection rates (out of the total number of attacks that the system is designed to detect) at 10 FA/day for the top systems in the 1999 DARPA evaluation (Lippmann et al. 2000, Table 6).

PHAD-C32 with postprocessing unofficially detects 72 of the 201 attacks in the DARPA evaluation. If we consider only probe and DOS attacks, and exclude the attacks in the missing data (week 4 day 2), then PHAD-C32 detects 59 of 99 instances, or 60%. It detects at least one instance of 17 of 24 probe and DOS types, and 29 of 59 types altogether. Of the 30 types missed, only 3 (resetscan, land, and syslogd) produce anomalies at the transport layer or below, and one of them (syslogd) is detectable at a lower threshold. Twelve of the 29 attack types that PHAD-C32 detects are detected solely by the TTL field, which we suspect is a simulation artifact. But if we remove this field, only neptune would be added to the list of attacks that we could reasonably detect. In contrast to the firewall model, only 8 instances (6 types) of attacks are detected by their IP addresses, and none by their port numbers.

Lippmann reports that several types of attacks were hard to detect by every system tested. PHAD-C32 outperforms the official results on four of these: dosnuke, queso, and the stealthy (slow scan) versions of portsweep and ipsweep. These are summarized in Table 7.2. PHAD is relatively immune to slow scans, since it examines each packet in isolation rather than correlating events over an arbitrary time period.

Attack Best detection (Lippmann 2000) PHAD-C32 (unofficial)
stealthy ipsweep 0/3 1/3
stealthy portsweep 3/11 11/11
queso 0/4 3/4
dosnuke 2/4 4/4

Table 7.2. Unofficial detection rates by PHAD-C32 for some hard to detect attacks.

8. Concluding Remarks

Intrusions generate anomalies because of bugs in the victim program, the attacking program, or the IDS, or because the victim generates anomalous output after an attack. We have seen examples of all four types of anomalies:

1. Strange input to poorly tested software, e.g. the corrupted packets used by teardrop, pod, dosnuke.

2. Strange data from poorly written attacks, e.g. bad checksums in smurf and udpstorm.

3. Strange data used to hide an attack from layers above, e.g. FIN scanning by portsweep.

4. Strange responses from the victim, e.g. unusual TCP options generated by apache2.

We proposed an anomaly detection algorithm (PHAD) based on examining packet header fields other than just the normal IP addresses and port numbers. We found that these fields play only a minor role in detecting most attacks. PHAD detects most of the attacks in the DARPA data set that involve exploits at the transport layer and below, including four that eluded most of the systems in the original evaluation. We experimented with a number of models, and the best results are obtained by nonstationary models that examine each field and each packet separately. PHAD-C32 gets slightly better results than C1000, probably because it avoids overfitting the training data, and better than H1000 because it is able to generalize to reasonable values for continuous variables. All three get similar results because they are equivalent for small r, the fields that contribute most to the anomaly score. They outperform stationary models (S, KL, KL0), and slightly outperform models that consider combinations of fields (2H, Kt, Kt/r, Kt/a). PHAD-C32 is among the simplest models tested. It incurs low time and space overhead.

PHAD was relatively more successful in detecting four of the hard-to-detect attacks identified by Lippmann et al. (2000). This indicates that PHAD could cover an attack space different from the other detectors. Hence, a combination of the detectors could increase the overall coverage of detectable attacks.

All of the PHAD models require no a-priori knowledge of any attacks and very little a-priori knowledge about the protocols they model. The models are learned from data, which can have different characteristics in diverse environments. Moreover, new models can be learned to adapt to new traffic characteristics in the same environment. However, PHAD is not intended to be a standalone IDS. It should be a component in a system that includes signature detection for known attacks, system call monitoring for novel R2L and U2R attacks, and file system integrity checking for backdoors.

As expected, we observed some degradation in PHAD's performance when attacks are present in the training data. One remedy is to employ misuse (signature) detectors to filter out known attacks from the data before supplying the data to PHAD for training. Another approach is to make PHAD more noise-tolerant (attacks, usually present in small amounts in the presumably attack-free data, can be treated as noise). Currently, PHAD does not consider the frequency of each value observed during training; however, values of low frequency could be noise.

PHAD-C32 already detects most attacks in the protocols it examines. We suspect that in order to yield any significant improvements, it will be necessary to examine the application layer. There are systems that do this now, but they are mostly rule-based, matching strings to known attacks. We have detected a small number of attacks using anomaly models based on n-gram statistics in some preliminary experiments, but there is a great variety of application layer protocols, some far more complex than TCP/IP, and there is much work to be done.

Acknowledgments

This research is partially supported by DARPA (F30602-00-1-0603). We thank Richard Lippmann and Richard Kemmerer for comments on an earlier draft of this paper.

References

Anderson, Debra, Teresa F. Lunt, Harold Javitz, Ann Tamaru, Alfonso Valdes, "Detecting unusual program behavior using the statistical component of the Next-generation Intrusion Detection Expert System (NIDES)", Computer Science Laboratory SRI-CSL 95-06 May 1995.


Bell, Timothy, Ian H. Witten, John G. Cleary, "Modeling for Text Compression", ACM Computing Surveys (21)4, pp. 557-591, Dec. 1989.

Barbará, D., N. Wu, S. Jajodia, "Detecting Novel Network Intrusions using Bayes Estimators", Proceedings of the First SIAM International Conference on Data Mining, 2001.

Floyd, S. and V. Paxson, "Difficulties in Simulating the Internet", to appear in IEEE/ACM Transactions on Networking, 2001.

Forrest, S., S. A. Hofmeyr, A. Somayaji, and T. A. Longstaff, "A Sense of Self for Unix Processes", Proceedings of the 1996 IEEE Symposium on Security and Privacy.

Ghosh, A.K., A. Schwartzbard, M. Schatz, "Learning Program Behavior Profiles for Intrusion Detection", Proceedings of the 1st USENIX Workshop on Intrusion Detection and Network Monitoring, April 9-12, 1999, Santa Clara, CA.

Horizon, "Defeating Sniffers and Intrusion Detection Systems", Phrack 54(10), 1998,

https://www.doczj.com/doc/c74763838.html,

Kendall, Kristopher, "A Database of Computer Attacks for the Evaluation of Intrusion Detection Systems", Masters Thesis, MIT, 1999.

Lippmann, R., et al., "The 1999 DARPA Off-Line Intrusion Detection Evaluation", Computer Networks 34(4) 579-595, 2000.

Neumann, P., and P. Porras, "Experience with EMERALD to Date", Proceedings of the 1st USENIX Workshop on Intrusion Detection and Network Monitoring, Santa Clara, California, April 1999, 73-80.

Paxson, Vern, "Bro: A System for Detecting Network Intruders in Real-Time", Proceedings of the 7th USENIX Security Symposium, Jan. 26-29, 1998, San Antonio, TX.

Ptacek, Thomas H., and Timothy N. Newsham, "Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection", January 1998.

Sasha/Beetle, "A Strict Anomaly Detection Model for IDS", Phrack 56(11), 2000.

Sekar, R., M. Bendre, D. Dhurjati, P. Bollineni, "A Fast Automaton-based Method for Detecting Anomalous Program Behaviors", Proceedings of the 2001 IEEE Symposium on Security and Privacy.

snort ver. 1.7 (2001).

印刷尺寸大小 纸张大小

常用尺寸印刷尺寸印刷大小纸张大小 1.印刷尺寸及版式设计基础 2.印刷纸张规格 所谓开数就切成几份的意思,例如8开的纸就是全开的1/8大(对切三次)。设计前要先选定纸张尺寸,因为印刷的机器只能使用少数几种纸张(通常是全开、菊全开),一次印完后再用机器切成所需大小,所以没事不要用下表以外的特殊规格,以免纸张印不满而浪费版面。 由于机器要抓纸、走纸的缘故,所以纸张的边缘是不能印刷的,因此纸张的原尺寸会比实际规格要来得大,等到印完再把边缘空白的部分切掉,所以才会有全尺寸跟裁切后尺寸的差别。如果海报故意要留下白边的话也可以选择不切。开数规格全尺寸(mm)裁切后尺寸(mm)常用纸张 全开B11091X7871042X751墙报纸 对开(半开)B2787X545751X521海报 3开787X363751X345 4开B3545X393521X375 5开454X318424X303 8开B4393X272375X260

16开B5272X196260X18726孔活页记事本32开B6196X136187X130 64开B7136X98130X93 菊全开A1872X621842X594海报 菊对开A2621X436594X421 菊3开621X290594X280 菊4开A3436X310421X297 菊8开A4310X218297X210影印纸 菊16开A5218X155210X148手册 菊32开A6155X109148X105 菊64开A7109X77105X74印刷尺寸 什么正度16开,大度16开是多大? 正度16开(185x260) 大度16开(210x285) “开”又是什么单位?

图解:主板电线接法(电源开关、重启等)

钥匙开机其实并不神秘 还记不记得你第一次见到装电脑的时候,JS将CPU、存、显卡等插在主板上,然后从兜里掏出自己的钥匙(或者是随便找颗螺丝)在主板边上轻轻一碰,电脑就运转起来了的情景吗?是不是感到很惊讶(笔者第一次见到的时候反正很惊讶)!面对一个全新的主板,JS总是不用看任何说明书,就能在1、2分钟之将主板上密密麻麻的跳线连接好,是不是觉得他是高手?呵呵,看完今天的文章,你将会觉得这并不值得一提,并且只要你稍微记一下,就能完全记住,达到不看说明书搞定主板所有跳线的秘密。 这个叫做真正的跳线

首先我们来更正一个概念性的问题,实际上主板上那一排排需要连线的插针并不叫做“跳线”,因为它们根本达不”到跳线的功能。真正的跳线是两根/三根插针,上面有一个小小的“跳线冒”那种才应该叫做“跳线”,它能起到硬件改变设置、频率等的作用;而与机箱连线的那些插针根本起不到这个作用,所以真正意义上它们应该叫做面板连接插针,不过由于和“跳线”从外观上区别不大,所以我们也就经常管它们叫做“跳线”。 看完本文,连接这一大把的线都会变得非常轻松 至于到底是谁第一次管面板连接插针叫做“跳线”的人,相信谁也确定不了。不过既然都这么叫了,大家也都习惯了,我们也就不追究这些,所以在本文里,我们姑且管面板连接插针叫做跳线吧。 轻松识别各连接线的定义 为了更加方便理解,我们先从机箱里的连接线说起。一般来说,机箱里的连接线上都采用了文字来对每组连接线的定义进行了标注,但是怎么识别这些标注,这是我们要解决的第一个问题。实际上,这些线上的标注都是相关英文的缩写,并不难记。下面我们来一个一个的认识(每图片下方是相关介绍)!

电脑主机内部所有接线连接方法

电脑主机内部所有接线连 接方法 Prepared on 22 November 2020

电脑主机内部所有接线连接方法 主机外的连线虽然简单,但我们要一一弄清楚哪个接口插什么配件、作用是什么。对于这些接口,最简单的连接方法就是对准针脚,向接口方向平直地插进去并固定好。 电源接口(黑色):负责给整个主机电源供电,有的电源提供了开关,笔者建议在不使用电脑的时候关闭这个电源开关(图1)。 图1 PS/2接口(蓝绿色):PS/2接口有二组,分别为下方(靠主板PCB方向)紫色的键盘接口和上方绿色的鼠标接口(图2),两组接口不能插反,否则将找不到相应硬件;在使用中也不能进行热拔插,否则会损坏相关芯片或电路。 图2

USB接口(黑色):接口外形呈扁平状,是家用电脑外部接口中唯一支持热拔插的接口,可连接所有采用USB接口的外设,具有防呆设计,反向将不能插入。 LPT接口(朱红色):该接口为针角最多的接口,共25针。可用来连接打印机,在连接好后应扭紧接口两边的旋转螺丝(其他类似配件设备的固定方法相同)。 COM接口(深蓝色):平均分布于并行接口下方,该接口有9个针脚,也称之为串口1和串口2。可连接游戏手柄或手写板等配件。 Line Out接口(淡绿色):靠近COM接口,通过音频线用来连接音箱的Line接口,输出经过电脑处理的各种音频信号(图3)。 图3

Line in接口(淡蓝色):位于Line Out和Mic中间的那个接口,意为音频输入接口,需和其他音频专业设备相连,家庭用户一般闲置无用。 Mic接口(粉红色):粉红色是MM喜欢的颜色,而聊天也是MM喜欢的。MIC接口可让二者兼得。MIC接口与麦克风连接,用于聊天或者录音。 显卡接口(蓝色):蓝色的15针D-Sub接口是一种模拟信号输出接口(图4),用来双向传输视频信号到显示器。该接口用来连接显示器上的15针视频线,需插稳并拧好两端的固定螺丝,以让插针与接口保持良好接触。 MIDI/游戏接口(黄色):该接口和显卡接口一样有15个针脚,可连接游戏摇杆、方向盘、二合一的双人游戏手柄以及专业的MIDI键盘和电子琴。 网卡接口:该接口一般位于网卡的挡板上(目前很多主板都集成了网卡,网卡接口常位于US B接口上端)。将网线的水晶头插入,正常情况下网卡上红色的链路灯会亮起,传输数据时则亮起绿色的数据灯。主机内的连线有简单的也有复杂的,但无论简单还是复杂,我们DIYer都要攻克这些困难,这样才能真正地组装起一台可以流畅运行的电脑。 1.电源连线

国内印刷物常用标准尺寸

国内常用标准尺寸: 以789mmX1092mm规格的纸作为标准规范用纸,所开切的不同开本的书籍成品净尺如下寸: 4开本:381mmX533mm 6开本:356mmX381mm 8开本:267mmX381mm 12开本:251mmX260mm 16开本:191mmX263mm 18开本:175mmX251mm 20开本:186mmX210mm 方20开本:185mmX207mm 长20开本:149mmX260mm 23开本:152mmX225mm 24开本:175mmX186mm 横24开本:185mmX175mm 长24开本:124mmX260mm 25开本:152mmX210mm 28开本:151mmX186mm 32开本:130mmX184mm 36开本:125mmX173mm 40开本:132mmX151mm

42开本:106mmX173mm 48开本:94mmX173mm 50开本:103mmX149mm 64开本:92mmX129mm 大标准纸张尺寸: 以大规格纸张850mmX1168mm的幅面纸作为常用开本,标准规格净尺寸为: 16开本:206mmX283mm 大16开本:210mmX285mm 大32开本:141mmX203mm 大64开本:102mmX138mm 常用开本 (1)A 度纸(印刷成品、复印纸和打印纸的尺寸):

2)RA度纸(一般印刷用纸,裁边后,可得A 度印刷成品尺寸):

(3)SRA度纸(用于出血印刷品的纸,其特点是幅面较宽) (4)B度纸(介于A度之间的纸,多用于较大成品尺寸的印刷品,如挂图、海报):

机箱主板插连接线图解

机箱主板插连接线图解 1.一般有7个接口(AUDIO,USB,HDD-LED,两个PW-LED,PW开关,重启开关)以上为一般机箱,假如弄到特殊的,这里也很难说明白 2.对于AUDIO和USB接口的都为9针借口,各有一个卡位,但是位置不一样,所以LZ只要留心点就很容易在主板上发现这两个接口,因为主板上那9针是唯一的,不可能接错,而USB的9针虽然在主板上有两三个,不过都是一样的,作为后备用,所以LZ接那一个都没关系 3.对于另外那几个也很容易判断,都是两针,不过不知道为什么,POWER-LED的两针是分开,其余都一样 4.主板上都有显示那两针是接什么都,通常每块主板的位置都差不远,只要LZ习惯就好 5.对于HDD-LED,两个PW-LED,PW开关,重启开关各两针,要小心的是正负,一般有颜色的那条线为正,白色为负;或者看接口那黑色的头上面,有一个三角形箭头,有三角的那端为正。 钥匙开机其实并不神秘 还记不记得你第一次见到装电脑的时候,JS将CPU、内存、显卡等插在主板上,然后从兜里掏出自己的钥匙(或者是随便找颗螺丝)在主板边上轻轻一碰,电脑就运转起来了的情景吗?是不是感到很惊讶(笔者第一次见到的时候反正很惊讶)!面对一个全新的主板,JS总是不用看任何说明书,就能在1、2分钟之内将主板上密密麻麻的跳线连接好,是不是觉得他是高手?呵呵,看完今天的文章,你将会觉得这并不值得一提,并且只要你稍微记一下,就能完全记住,达到不看说明书搞定主板所有跳线的秘密。

这个叫做真正的跳线 首先我们来更正一个概念性的问题,实际上主板上那一排排需要连线的插针并不叫做“跳线”,因为它们根本达不”到跳线的功能。真正的跳线是两根/三根插针,上面有一个小小的“跳线冒”那种才应该叫做“跳线”,它能起到硬件改变设置、频率等的作用;而与机箱连线的那些插针根本起不到这个作用,所以真正意义上它们应该叫做面板连接插针,不过由于和“跳线”从外观上区别不大,所以我们也就经常管它们叫做“跳线”。

电脑组装之接口线(图解)

在主板上,我们可以看到一个长方形的插槽,这个插槽就是电源为主板提供供电的插槽(如下图)。目前主板供电的接口主要有24针与 20针两种,在中高端的主板上,一般都采用2 4PIN的主板供电接口设计,低端的产品一般为20PIN。不论采用24PIN和20PIN,其插法都是一样的。 主板上24PIN的供电接口 主板上20PIN的供电接口

电源上为主板供电的24PIN接口 为主板供电的接口采用了防呆式的设计,只有按正确的方法才能够插入。通过仔细观察也会发现在主板供电的接口上的一面有一个凸起的槽,而在电源的供电接口上的一面也采用了卡扣式的设计,这样设计的好处一是为防止用户反插,另一方面也可以使两个接口更加牢固的安装在一起.

二、认识CPU供电接口图解安装详细过程 为了给CPU提供更强更稳定的电压,目前主板上均提供一个给CPU单独供电的接口(有4针、6针和8针三种),如图: 主板上提供给CPU单独供电的12V四针供电接口

电源上提供给CPU供电的4针、6针与8针的接口 安装的方法也相当的简单,接口与给主板供电的插槽相同,同样使用了防呆式的设计,让我们安装起来得心应手。

三、认识SATA串口图解SATA设备的安装 SATA串口由于具备更高的传输速度渐渐替代PATA(IDE)并口成为当前的主流,目前大部分的硬盘都采用了串口设计,由于SATA的数据线设计更加合理,给我们的安装提供了更多的方便。接下来认识一下主板上的SATA接口。这图片跟下面这图片便是主板上提供的SA TA接口,也许有些朋友会问,两块主板上的SATA口“模样”不太相同。 大家仔细观察会发现,在下面的那图中,SATA接口的四周设计了一圈保护层,这样对接口起到了很好的保护作用,在一起大品牌的主板上一般会采用这样的设计。

印刷尺寸表

印刷尺寸表对开大度875×580 正度775×530 四开大度430×575 正度520×370 八开大度420×285 正度370×260 十六开大度210×285 正度260×185 三十二开大度140×203 正度130×185 国内信封标准 代号长(mm)宽(mm)备注 B6号176 125 与现行3号信封一致 DL号220 110 与现行5号信封一致 ZL号230 120 与现行6号信封一致 C5号229 162 与现行7号信封一致 C4号324 229 与现行9号信封一致 国际信封标准 代号长(mm)宽(mm)备注 C6号162 114 新增加国际规格 B6号176 125 与现行3号信封一致 DL号220 110 与现行5号信封一致 ZL号230 120 与现行6号信封一致 C5号229 162 与现行7号信封一致 C4号324 229 与现行9号信封一致 三折页广告 标准尺寸: (A4)210mm x 285mm 一般宣传册 标准尺寸: (A4)210mm x 285mm

Document folder, standard size: 220 mm × 305 mm
Poster, standard size: 540 mm × 380 mm
Hanging banner, standard sizes: 8-kai 376 mm × 265 mm; 4-kai 540 mm × 380 mm
Tote bag, standard size: 400 mm × 285 mm × 80 mm
Letterhead / notepad, standard sizes: 185 mm × 260 mm; 210 mm × 285 mm

Standard sheet, 787 × 1092 mm. Format sizes (mm):
Full sheet: 781 × 1086
2-kai: 530 × 760
3-kai: 362 × 781
4-kai: 390 × 543
6-kai: 362 × 390
8-kai: 271 × 390
16-kai: 195 × 271
Note: finished size = sheet size − trim.

Large sheet, 850 × 1168 mm. Format sizes (mm):

Common Printing Paper Sizes and Type Sizes

Common printing paper sizes, plus type sizes and the point-to-millimetre relationship.

Standard sheet, 787 × 1092 mm. Format sizes (mm):
Full sheet: 781 × 1086
2-kai: 530 × 760
3-kai: 362 × 781
4-kai: 390 × 543
6-kai: 362 × 390
8-kai: 271 × 390
16-kai: 195 × 271
Finished size = sheet size − trim.

Large sheet, 850 × 1168 mm. Format sizes (mm):
Full sheet: 844 × 1162
2-kai: 581 × 844
3-kai: 387 × 844
4-kai: 422 × 581
6-kai: 387 × 422
8-kai: 290 × 422
Finished size = sheet size − trim.

Common book formats, 787 × 1092 sheet (mm):
Half sheet: 736 × 520
4-kai: 520 × 368
8-kai: 368 × 260
16-kai: 260 × 184
32-kai: 184 × 130

Common book formats, large 850 × 1168 sheet (mm):
Half sheet: 570 × 840
4-kai: 420 × 570
8-kai: 285 × 420
16-kai: 210 × 285
32-kai: 203 × 140

Business cards:
Landscape: 90 × 55 mm (square corners), 85 × 54 mm (rounded corners)
Portrait: 50 × 90 mm (square corners), 54 × 85 mm (rounded corners)
Square: 90 × 90 mm or 90 × 95 mm
IC card: 85 × 54 mm

Tri-fold leaflet, standard size: (A4) 210 mm × 285 mm
Ordinary brochure, standard size: (A4) 210 mm × 285 mm
Document folder, standard size: 220 mm × 305 mm
Poster, standard size: 540 mm × 380 mm
Hanging banner, standard sizes: 8-kai 376 mm × 265 mm / 4-kai 540 mm × 380 mm
Tote bag, standard size: 400 mm × 285 mm × 80 mm
Letterhead / notepad, standard sizes: 185 mm × 260 mm / 210 mm × 285 mm

International standard A-series (general printing, documents and books), sizes in mm:
2A: 1189 × 1682
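The rule "finished size = sheet size − trim" above is plain arithmetic. A hypothetical helper (the 6 mm default per dimension is read off the full-sheet row above, 787 × 1092 trimming to 781 × 1086; real trim allowances vary with press and binding):

    def finished_size(sheet_mm, trim_mm=(6, 6)):
        """Finished size = sheet size - trim, per the note above.

        sheet_mm: (width, height) of the untrimmed leaf
        trim_mm:  total trim taken off each dimension (illustrative default)
        """
        w, h = sheet_mm
        tw, th = trim_mm
        return (w - tw, h - th)

    # The full standard sheet 787 x 1092 trims to the 781 x 1086 listed above:
    print(finished_size((787, 1092)))  # (781, 1086)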

PC Motherboard Power Cables: Various Connections

Q: The yellow wire from my PC power supply is +12 V, but I cannot find a -12 V wire. What should I connect to?
A: 1. Never connect to -12 V; that would give you 24 V across the load. Connect the return to ground, that is, a black wire (all the black wires are negative returns and are common), or to the chassis, which sits at zero potential and is continuous with the black wires.
2. The -12 V wire is the blue one. Relative to the black 0 V wires it can usually supply only about 0.5 A, so take care.

The difference between 20-pin and 24-pin ATX connectors, and the voltage on each wire color:

Red: +5 V (operating voltage for the chips on the motherboard, hard disks, optical drives and so on).
Yellow: +12 V (operating voltage for hard disks, optical drives and fans; together with -12 V it supplies the EIA levels for the serial ports).
Orange: +3.3 V (feeds the DIMM and AGP slots directly).
Grey: the P.G (Power Good) signal line. It is derived by circuitry from the other rails: after the power button is pressed, this signal says the supply is healthy and the machine may boot; no signal means a fault, which the motherboard detects automatically.
Blue: -12 V (EIA supply for the serial ports).
White: -5 V (used by the floppy drive's phase-locked data-separator circuit).
Purple: +5 V Standby (keeps a small part of the motherboard powered after shutdown so it can detect the various wake-up commands).
Green: the PS-ON signal line (the motherboard's power on/off signal; it carries a voltage while the supply is off).
Black: system ground.

(For mains wiring rather than PSU leads: U, V and W label the three live phases of a three-phase supply; in practice all three may be black, or colored red, yellow and blue. PE is protective earth, normally yellow-green. N and L are neutral and live in a single-phase supply: L is usually red, N usually blue. Phase: red, yellow, or red-green; neutral: blue or black; earth: yellow-green striped.)

1. +5 V (red): supplies the various logic circuits.
2. +12 V (yellow): drives disk motors and all the fans (exception: notebook fans run on +5 V or +3.3 V).
3. +3.3 V (orange): powers the CPU, motherboard, PCI (Peripheral Component Interconnect) bus and I/O control circuitry.
4. +5VSB (standby, purple): powers remote start-up (rated above 720 mA; the board needs only about 0.01 A to start).
5. PS-ON (green): the signal by which the operating system manages the power switch; together with +5VSB it is called the soft power system. Below about 1 V the supply turns on, above about 4.5 V it turns off, enabling software power-off, wake-on-LAN, scheduled wake-up and keyboard power-on. (Shorting it to a GND wire starts the supply.)
6. -5 V (white): a negative rail now rarely used; the SFX form factor dropped it.
7. -12 V (blue).
8. PG (Power Good, grey; a +5 V-class signal of +3.0 to +6.0 V): before the system starts, the supply runs internal checks and tests (issuing the signal 0.1-0.5 s after power-on); if they pass, it sends the board this signal, so power-up is governed by the board's power-monitoring circuitry. The PG signal is very important: even if every output rail is…
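The color code above maps directly onto a lookup table. A sketch (voltages as listed in the text above; always verify against your PSU's own label):

    # ATX wire colors and their rails, as listed above.
    ATX_WIRE_COLORS = {
        "red":    "+5V   (logic supply for board and drives)",
        "yellow": "+12V  (motors, fans)",
        "orange": "+3.3V (DIMM, AGP, CPU/IO circuits)",
        "blue":   "-12V  (serial-port EIA levels, ~0.5A max)",
        "white":  "-5V   (floppy data separator; absent on SFX)",
        "purple": "+5VSB (standby; powers wake-up circuitry when off)",
        "green":  "PS-ON (pull to ground to switch the PSU on)",
        "grey":   "PG    (Power Good signal to the board)",
        "black":  "GND   (common ground; all black wires are connected)",
    }

    def rail(color: str) -> str:
        """Look up which rail an ATX wire color carries."""
        return ATX_WIRE_COLORS.get(color, "unknown - check the PSU datasheet")

    print(rail("green"))  # PS-ON (pull to ground to switch the PSU on)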

Illustrated USB Header Wiring for Various Motherboards

A complete guide to motherboard USB header pin-outs.

I. Overview. Each USB port can supply +5 V at 500 mA to a peripheral, so when wiring the on-board USB headers you must follow the motherboard manual exactly. Get it wrong and you will burn out the motherboard or the peripheral; more than a few people have had "smoke incidents" while connecting front-panel USB leads. So we must be able to identify the order of the front-panel USB wires correctly, and once we know the basic layout of a USB connector the problem solves itself.

II. USB connector pin-outs (wiring diagrams; photos in the source).
Host side: VCC, Data-, Data+, GND.
Device side: VCC, GND, Data-, Data+.

III. Common USB header layouts on the market. Boards sold in the last couple of years use a standard nine-pin front-panel USB header with the ninth position empty, which is easy to recognize. But most brand-name PCs use boards custom-built for the vendor, and when repairing them we have no manual; and on older boards (815, 440BX, 440VX and the like) the front-panel USB wiring was chaotic, with no common standard. When we service such machines, how do we work out the layout?

Here, then, is a round-up of the common front-panel USB layouts. (Legend: ■ marks a populated pin; □ marks a position with no pin.)

1. Six pins, two rows. Uncommon; this arrangement is found on the ECS P6STP-FL (rev. 1.1) board used in the Haier "Little Superman 766" PC. The power and ground pins are shared by the two front USB ports, so the two ports need six wires to the board, laid out as follows:

■ DATA1+  ■ DATA1-  ■ VCC

■ DATA2+  ■ GND

2. Eight pins, two rows. The most common arrangement; it actually occupies a ten-position footprint with two positions empty, as on the ECS P4VXMS (rev. 1.0). That board also offers the standard nine-pin layout, to make connection easy for DIY builders:

■ VCC  ■ DATA-  ■ DATA+  □ NUL   ■ GND
■ GND  □ NUL    ■ DATA+  ■ DATA-  ■ VCC

The MSI MS-5156 uses an eight-pin mirrored layout. Although built on the Intel 430TX chipset, it was early in offering two USB 1.0 ports, brought out on a separate header cable. Because its USB supply lines are current-limited, even wiring the power backwards will not stop the board from starting or burn out the USB device:

■ VCC  ■ DATA-  ■ DATA+  ■ GND
■ GND  ■ DATA+  ■ DATA-  ■ VCC

The following layout is also common, mostly on 815 or earlier 440BX boards:

■ VCC  ■ DATA+  ■ DATA-  ■ GND  ■ VCC
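The ■/□ diagrams above can be captured in code for quick reference. A sketch, with None marking an empty position (layouts transcribed from the tables above; always confirm against the board's manual before powering on):

    # Front-panel USB header layouts from the tables above.
    # Each layout is a list of rows; None = a position with no pin.
    USB_HEADERS = {
        "ECS P4VXMS (8-pin on a 10-position footprint)": [
            ["VCC", "DATA-", "DATA+", None,    "GND"],
            ["GND", None,    "DATA+", "DATA-", "VCC"],
        ],
        "MSI MS-5156 (8-pin, mirrored rows)": [
            ["VCC", "DATA-", "DATA+", "GND"],
            ["GND", "DATA+", "DATA-", "VCC"],
        ],
    }

    def show(header: str) -> None:
        """Print one header layout, one row per line."""
        for row in USB_HEADERS[header]:
            print("  ".join("----" if p is None else f"{p:5s}" for p in row))

    show("MSI MS-5156 (8-pin, mirrored rows)")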

Standard Sizes for Posters and Brochures in Print Design

Standard sizes for poster and brochure print design:

Flyer (DM), standard size: (A4) 210 mm × 285 mm
Brochure / album, standard size: (A4) 210 mm × 285 mm
Folder, standard size: 220 mm × 305 mm

Poster standard sizes:
13 cm × 18 cm
19 cm × 25 cm
30 cm × 42 cm
42 cm × 57 cm
50 cm × 70 cm
60 cm × 90 cm
70 cm × 100 cm

The common sizes are 42 cm × 57 cm and

50 cm × 70 cm; the international standard size is 70 cm × 100 cm.

Hanging banner, standard sizes: 8-kai 376 mm × 265 mm; 4-kai 540 mm × 380 mm
Tote bag, standard size: 400 mm × 285 mm × 80 mm
Letterhead / notepad, standard sizes: 185 mm × 260 mm; 210 mm × 285 mm


Motherboard Wiring: How to Connect the Power Button

For a newcomer, assembling a PC properly from scratch is not easy. You may know where the CPU goes and where the memory goes, but faced with row upon row of intricate jumpers, many beginners have no idea where to start.


After reading this article, connecting this whole handful of wires will feel easy. As for who first called the front-panel connector pins "jumpers", nobody can say; but since everyone does, and everyone is used to it, we won't fight it. In this article we will call the front-panel pins jumpers too.

To make things easier to follow, let's start from the wires in the chassis. As a rule, each group of chassis leads is labelled with text defining it; how to read those labels is the first problem to solve. In fact the labels are simply abbreviations of the relevant English terms, and they are not hard to remember. Let's go through them one by one (the caption under each picture gives the details).

A Detailed Illustrated Guide to Motherboard Wiring

Power switch: POWER SW
Full English name: Power Switch
Possible labels: POWER, POWER SWITCH, ON/OFF, POWER SETUP, PWR
Function: the power button on the front of the chassis.

Reset switch: RESET SW
Full English name: Reset Switch
Possible labels: RESET, Reset Switch, Reset Setup, RST, etc.

Common Printing Sizes

Common printing sizes

16-kai:
Standard: large 16-kai 210 × 285 mm finished, 214 × 289 mm with bleed; standard-sheet 16-kai 185 × 260 mm finished, 189 × 264 mm with bleed.
Long: large 16-kai 420 × 140 mm finished, 424 × 144 mm with bleed; standard-sheet 16-kai 370 × 125 mm finished, 374 × 129 mm with bleed.

8-kai:
Standard: large 8-kai 420 × 285 mm finished, 424 × 289 mm with bleed; standard-sheet 8-kai 370 × 260 mm finished, 374 × 264 mm with bleed.
Long: large 8-kai 570 × 205 mm finished, 574 × 209 mm with bleed; standard-sheet 8-kai 520 × 180 mm finished, 524 × 184 mm with bleed.

4-kai:
Standard: large 4-kai 420 × 570 mm finished, 424 × 574 mm with bleed;

standard-sheet 4-kai 370 × 520 mm finished, 374 × 524 mm with bleed.
Long: large 4-kai 840 × 280 mm finished, 844 × 284 mm with bleed; standard-sheet 4-kai 740 × 255 mm finished, 744 × 259 mm with bleed.

32-kai:
Standard: large 32-kai 210 × 140 mm finished, 214 × 144 mm with bleed; standard-sheet 32-kai 185 × 125 mm finished, 191 × 131 mm with bleed.
Long: large 32-kai 285 × 100 mm finished, 289 × 104 mm with bleed; standard-sheet 32-kai 255 × 90 mm finished, 259 × 94 mm with bleed.

Business card: finished size 90 × 54 mm; 92 × 56 mm with bleed.

Hanging banners:
Large 6-kai: 420 × 370 mm finished, 424 × 374 mm with bleed.
Standard-sheet 6-kai: 340 × 370 mm finished, 344 × 374 mm with bleed.
Long large 6-kai: 270 × 570 mm finished, 274 × 574 mm with bleed.
Long standard-sheet 6-kai: 240 × 520 mm finished, 244 × 524 mm with bleed.
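Nearly every "with bleed" size above follows one rule: add 2 mm of bleed on each edge, that is, 4 mm per dimension (the business card, at 1 mm per edge, and one 32-kai entry are the exceptions). A small sketch:

    def with_bleed(w_mm, h_mm, bleed_mm=2):
        """Add bleed on every edge: each dimension grows by 2 * bleed."""
        return (w_mm + 2 * bleed_mm, h_mm + 2 * bleed_mm)

    # Large 16-kai, 210 x 285 finished, becomes 214 x 289 with bleed:
    print(with_bleed(210, 285))  # (214, 289)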

A Complete Guide to Motherboard Connectors

A complete guide to motherboard connectors: essential reading before a build.

A while back many readers told me that they could basically assemble a machine but found plugging in the various leads difficult, so I have gathered some reference material on the subject. Many people are still unclear on how the various connectors and cables hook up, so, again taking the Intel platform as the example and using two boards from different brands, this guide introduces each connector and how it connects in detail: the same main-power, CPU-power and SATA connectors walked through in the illustrated section above.


Actual Sizes and Printing of Flyers

Actual flyer sizes and printing.

I. Standard full-color page artwork size: 291 × 216 mm (including 3 mm of bleed on each edge).

Flyer sizes: a 16-kai flyer is normally 210 × 285 mm and an 8-kai flyer 420 × 285 mm; non-standard sizes can waste paper, so choose with care.

Common flyer design sizes:
- Standard 16K flyer: 206 × 285 mm finished; for printing and cutting add 2 mm on each edge, giving 210 × 289 mm with bleed.
- Standard 16K tri-fold: 206 × 283 mm finished; add 2 mm bleed per edge for 210 × 287 mm.
- Standard 8K flyer: 420 × 285 mm finished; add 2 mm bleed per edge for 424 × 289 mm.

Size table (finished / with bleed):
Standard 16K flyer: 206 × 285 mm / 210 × 289 mm
Standard 8K flyer: 420 × 285 mm / 424 × 289 mm
Standard 16K sample book: 420 × 285 mm / 424 × 289 mm
16K tri-fold flyer: 206 × 283 mm / 210 × 287 mm

Flyer designs fall into a few types:
1. Single sheet: a simple printed piece, usually 32-kai or 16-kai, since those sizes are handy to carry and economical. Single sheets are not kept long and suit fast, short-term promotion.
2. Booklet: a printed piece introducing the product directly, generally showing it in photographs, with room for detailed descriptions. Usually bound as a multi-page booklet, similar in layout to familiar editorial formats but more distinctive and individual. Mostly four-color printed, typically 16-kai.
3. Accordion fold: usually folded leaflets, mostly four-color, typically 6-kai 6-fold, 8-kai 2- or 4-fold, or 16-kai 2- or 3-fold, with the contents set by the product. The design should be as refined as possible: show the product from its best angle, aim for lifelike and crisp images, elegant type, and colors in harmony with the content, all to strengthen the customer's urge to buy.
4. Pocket folder: a folder with inner pockets, designed to hold assorted product sheets for easy reference and carrying.

Points to note in flyer printing:
1. When a flyer prints across a spread, the pages must share one specification and consistent color; if the spread runs over more than two pages, take the middle page's color as the printing reference.
2. With many signatures, print them in sequence.
3. If full tonal consistency cannot be held on the same page, consider printing part of it separately.
4. Avoid auxiliary materials where possible, to safeguard print quality.

Paper: the usual choices include 80 g, 105 g, 128 g, 157 g, 200 g and 250 g stock; besides coated art paper, offset and specialty art papers can also be used.

Finishing: flyers can be varnished or laminated to boost gloss or strength, though in practice this is usually skipped.

II. Standard full-color page finished size: large 16K, 285 × 210 mm.

III. Layout: keep text and other content at least 5 mm inside the trim line, so the piece still looks good after cutting.


Connecting the Chassis Leads to the Motherboard (with Figures)

A practical guide to connecting the chassis leads to the motherboard, and to the secrets of front-panel jumper wiring.

The figure below shows the jumper leads inside a chassis. Our job now is to take this tangle of wires and plug each one into the right place on the board, so that every button, socket and indicator light on the chassis works. Look closely: every plug is printed with an abbreviation of its function, for example the audio lead AUDIO, the internal speaker SPEAKER, the hard-disk activity light HDD LED, the power light +/-, and so on.

The chassis leads, then, are really not scary. What gives more than 80% of beginners the real headache is the meaning of the pin headers on the motherboard. But is that truly so fearsome? No! There is a great deal of regularity in them, and it is exactly this regularity that lets us generalize: whatever the brand of board, we can wire these intricate headers without the manual.

● Where is pin 1 of a header?

To wire a header we must first know where to start counting its pins, and that is simple. On a motherboard (as on any expansion card), one end of a header is marked by a thicker printed silkscreen frame, and that is where counting starts. Having found the thicker frame, just count from left to right and top to bottom, as in the figure above.

The 9-pin power-switch/reset/power-LED/HDD-LED block is the most popular arrangement today: over 70% of the brands on the market use it, so it has gradually become a de facto standard, and on the boards the big ODMs build for the channel brands the share is over 90%. As shown below:
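As a reference for that 9-pin block, here is a sketch of one widespread pin assignment, the Intel-style front-panel header (an assumption for illustration, not taken from this article; always confirm on the board's silkscreen, counting from the thicker printed frame as described above):

    # Intel-style 9-pin front-panel block (10-pin footprint; pin 10 is
    # removed as a key).  One widespread convention -- not universal.
    FRONT_PANEL_BLOCK = {
        1: "HDD LED +",  3: "HDD LED -",  5: "RESET SW",  7: "RESET SW",  9: "reserved",
        2: "PWR LED +",  4: "PWR LED -",  6: "POWER SW",  8: "POWER SW",  # 10: key (no pin)
    }

    for pin in sorted(FRONT_PANEL_BLOCK):
        print(f"pin {pin}: {FRONT_PANEL_BLOCK[pin]}")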

Common Photo and Printing Paper Sizes

Photo print sizes (sizes up to 18 inches are quoted in inches, 20 inches and above in centimetres; note 1 inch = 2.54 cm):

1-cun: 2.5 × 3.6 cm
2-cun: 3.4 × 5.2 cm
3-cun: 5.5 × 8.4 cm
5-inch: 3.5 × 5 in
6-inch: 4 × 6 in
7-inch: 5 × 7 in
8-inch: 6 × 8 in
10-inch: 8 × 10 in
12-inch: 10 × 12 in
14-inch: 10 × 14 in
16-inch: 12 × 16 in
18-inch: 12 × 18 in
20-inch: 40 × 50 cm
24-inch (standard): 50 × 60 cm
24-inch: 44 × 60 cm
24-inch wall portrait: 42 × 60 cm
30-inch: 60 × 75 cm
32-inch: 60 × 80 cm
36-inch: 60 × 90 cm
40-inch: 70 × 100 cm

48-inch: 90 × 120 cm
58-inch: 100 × 115 cm
68-inch: 112 × 170 cm

The digital photos we shoot are generally not standard print sizes: digital cameras frame for computer screens, so most images are 4:3, while standard prints use different ratios, such as 10:7 for a 5-inch print and 3:2 for a 6-inch print. Without cropping, the lab will either leave white margins on the print or cut part of the image off, so we must crop the photo in software first.

Table 1: common standard photo sizes. Table 2: digital-print quality chart. Print-quality legend: film quality — keeps the best look of the original photo; excellent — close inspection shows the slight effect of limited pixels, but quality is unaffected and prints come out well; good — the pixel limit shows clearly, but prints are still satisfactory; fair — the shortage of pixels is obvious, but the photo is still usable.

A quick check: 1 inch = 2.54 cm. Photo sizes are normally quoted in inches, and up to 10 inches the two sides differ by 2: a 5-inch print is 5 × 3, a 7-inch is 7 × 5, an 8-inch is 8 × 6. Above 10 inches they differ by 4: 12-inch is 12 × 8, 14-inch is 14 × 10, 18-inch is 18 × 14.

Coated art paper: also called coated printing paper, and known in Hong Kong and elsewhere as "fen zhi". It is a premium printing paper made by coating a base paper with white pigment, used mainly for the covers and plates of high-grade books and journals, color pictures, and all kinds of exquisite…
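The inch-based names convert to centimetres with the 1 in = 2.54 cm factor quoted above. A sketch:

    INCH_TO_CM = 2.54  # as noted above

    def photo_size_cm(width_in, height_in):
        """Convert a photo print size from inches to centimetres."""
        return (round(width_in * INCH_TO_CM, 1), round(height_in * INCH_TO_CM, 1))

    # A 6-inch print is 4 x 6 in:
    print(photo_size_cm(4, 6))  # (10.2, 15.2)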

Illustrated: Connecting the Motherboard Power Leads

Illustrated: connecting the motherboard power leads (power switch, reset switch, USB, headphone/microphone and so on).

Generally, the power-switch and reset leads have no polarity: connect the power-switch lead and the machine will power on and off normally. The power-LED and hard-disk-LED leads do have polarity, though neither affects normal operation if left unconnected.

Typically there are eight of these wires in all, in pairs: one pair for the power switch, one for reset, one for the power LED (polarized) and one for the hard-disk LED (polarized). A typical header looks roughly like this:

    [PWR LED +][PWR LED -][PWR SW][PWR SW]
    [HDD LED +][HDD LED -][RESET ][RESET ][undefined]

(the extra undefined pin appears to be unused).

Only the layout above really needs remembering; the rest hardly needs memorizing, since a plug that fits its header is in the right place. The other leads, such as USB, audio and headphone/mic, come grouped as one-piece plugs: if the plug fits the header, it is correct.


Particle Size and Size Distribution of Superhard Abrasive Micropowders

This article describes the techniques and standards used in today's superhard-abrasives industry for the particle size and size distribution of diamond micropowder and CBN powder.

Characterizing particle size

Measuring the particle size and size distribution of a powder system is a basic requirement in most industrial production. Size and size distribution decisively influence the bulk properties of the final product (mechanical behavior, electrical and thermal conductivity). Most powder producers publish size and size-distribution specifications, but how those specifications work out in practical quality control still needs discussion. A sound quality-control system prevents or avoids production losses in the final product caused by high rejection rates. Many instruments for measuring particle size and size distribution are on the market; different instruments use different measuring methods, which inevitably yield different results. Moreover, even with the same method, differences in measuring technique and in calculation can produce large differences in the measured size and distribution, and particle shape also strongly affects the result. Comparisons of data from different methods therefore deserve particular care and call for sensible measurement protocols. The powders studied in a given investigation are usually extremely similar, so these protocols may even impose strict standards on instrument calibration. When size and size distribution serve as a process-control criterion, how the data are used matters greatly: changes in the process show up as trends and swings in the measured data, and such changes bear directly on product quality. In many cases an acceptable tolerance band must be set for particle size and size distribution. Only measurements traceable to the two primary, internationally standardized parameters, length and mass, give relatively accurate size analysis. Techniques that measure length or mass directly are called primary, or direct, measurements: microscope image analysis, sieve analysis, electrozone sensing, gravitational sedimentation and centrifugal sedimentation. Methods that characterize particle size through second-order effects such as diffraction patterns, Brownian motion or turbidity are called secondary, or indirect, measurements; they interpret the experimental data through computer models (mathematical algorithms resting on a number of assumptions). The most common are laser diffraction, dynamic light scattering and photon correlation spectroscopy. With microscope analysis, most of the measured data carry a known degree of reliability, since the particles measured are inspected directly; in general the reliability depends on the number of particles measured, how representative they are, their shape, the state of dispersion, and the sample-preparation technique. Furthermore, to describe a particle with a single dimension (a diameter), the analysis usually treats particles as spheres. In summary, in routine size and size-distribution work the results of one measurement can be compared and related to others, and different instruments under similar conditions may give the same result, so precision matters even more than accuracy. In drawing up size-measurement protocols, efficient communication between the powder producer and the buyer is essential: jointly agreeing suitable audit methods gives good traceability and a shared understanding of production protocols and product requirements. Figure 1 recommends a general procedure for analysing particle size and size distribution. Appendix 1 gives definitions and calculation methods for specifying size and size distribution; Appendix 2 relates mesh size, micron size, and the approximate number of particles per carat.

Sieving of coarse diamond and CBN powders, and standard sieve-size tables

In general, sieving means shaking a free-flowing, dry powder sample through a stack of sieves ordered from coarse to fine, with the largest openings on top and the smallest at the bottom. The size distribution is reported through the fractions retained on, or accumulated above, sieves of given aperture. The following ASTM and ISO standards specify the details of apertures, including wire diameter, frame size and how to keep sieves clean: ASTM E11-95 for metal wire-cloth sieves and ASTM E163-96 for electroformed sheet test sieves; ISO 3310-1, technical requirements and testing, Part 1, metal wire-cloth sieves; and ISO 3310-3, Part 3, electroformed sheet test sieves.

Sieve calibration

Sieves are mostly calibrated by sieving a known material and comparing the measured size distribution with the known one. Standard reference materials (SRMs) are specified by the national standards body NIST, which also sets sieve-testing standards and techniques. These materials have been sized by electro-optical microscopy and are suitable for sieve calibration; the appropriate calibration conditions (SRM and NIST) are listed in Appendix 3, Part 1. Other recommended calibration techniques include optical inspection of a sieve to measure the distribution of its aperture sizes.

Applicable standards

These standards aim to establish a common basis for testing diamond and CBN powders across the industrial production of all kinds of superabrasive products, and to give producers, suppliers and users of superabrasive powders a working knowledge of these materials. The following standards apply to coarse diamond and CBN powders: American standard ANSI B74.16, July 12, 2002, sizes of diamond and CBN (cubic boron nitride); and the international abrasive-grain standard ISO 6106, May 16, 2005, abrasive products, grain sizes of superabrasives.

Standards for the particle size of coarse diamond and CBN abrasives
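To make the sieving procedure concrete, here is a minimal sketch of how the masses retained on a sieve stack translate into a cumulative size distribution (the sieve apertures and masses are invented for illustration, not taken from any standard cited above):

    # Hypothetical sieve stack (aperture in micrometres, coarsest first)
    # and the mass of powder retained on each sieve after shaking.
    apertures_um = [250, 180, 125, 90, 63]   # pan (fines) handled separately
    retained_g   = [1.2, 8.5, 22.4, 9.6, 3.1]
    pan_g        = 0.7

    total = sum(retained_g) + pan_g
    cumulative = 0.0
    for aperture, mass in zip(apertures_um, retained_g):
        cumulative += mass
        # Percent of the sample coarser than this aperture:
        print(f">= {aperture:3d} um: {100 * cumulative / total:5.1f} % cumulative")
    print(f"fines in pan: {100 * pan_g / total:5.1f} %")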
