Printed in U.S.A.

MAINTENANCE MANUAL
ORION™
LBI-39139

Ericsson Inc.
Private Radio Systems
Mountain View Road
Lynchburg, Virginia 24502
1-800-528-7711 (Outside USA, 804-528-7711)

(DD00-CAH-505 2/2)

COMPONENT IDENTIFICATION CHART

DESCRIPTION

The RF Power Amplifier for the Ericsson ORION low band mobile radio is available in two power levels and two frequency ranges, designated as:

• 29-42 MHz, 60 Watts
• 35-50 MHz, 60 Watts
• 29-42 MHz, 110 Watts
• 35-50 MHz, 110 Watts

The circuitry on the Power Amplifier Board consists of an Exciter circuit, an RF Power Amplifier circuit, a Power Control circuit, and an Antenna Switch and Limiter circuit (see Figure 1 - Block Diagram). The Exciter circuit consists of two wideband amplifier stages operating over a frequency range of 29-50 MHz without any tuning. This circuit amplifies the one milliwatt input signal from the Voltage Controlled Oscillator, on the Synthesizer/IF board, to 300 milliwatts to drive the Power Amplifier.

The Power Amplifier circuit uses a driver and three RF power transistors to provide rated output power. The output power is adjustable over a range of 55 to 110 watts or 30 to 60 watts, depending on the power version. Two transistors and three ICs are used in the power control circuit.

Supply voltage for the PA is provided by power leads from the power cable connector J1002 to J3 (A+) and (A-) on the Power Amplifier board.

CIRCUIT ANALYSIS

EXCITER

The 29-50 MHz Tx injection input from the Tx VCO is applied to AMPLIFIER-1 transistor TR151 through an ATTENUATOR pad consisting of resistors R151, R152 and R153. Vcc voltage (+9 Vdc) is applied through a Vcc feed network consisting of resistor R158 and transformer T151. Capacitor C156 bypasses the supply line. The +9 Vdc is supplied by 3-terminal voltage regulator IC3.

The output of TR151 drives AMPLIFIER-2 transistor TR152 through impedance matching components consisting of transformers T151 and T152 and coupling capacitors C157 and C158.
Resistors R152, R154 and diode CD151 set the bias voltage for TR152. Collector voltage (+9 Vdc) of TR152 is applied through a collector feed network consisting of resistor R165 and inductor L152. Capacitors C160 and C161 are bypass capacitors.

The output of TR152 is coupled to EX OUT through the LOW-PASS FILTER consisting of capacitors C163 through C167 and inductors L153 and L154. Resistor R163 provides negative feedback for TR152 through capacitor C159. Transistor TR152 amplifies the 15 milliwatt input level to 300 milliwatts.

A+, supplied from the J1003 connector through transistor TR11 and the Tx Power Switch, is regulated to 9 Vdc by voltage regulator IC3. Vcc (+9 Vdc) is applied to TR151 and TR152. When TX ENBL is high (receive mode), the +9 Vdc is not applied.

The exciter is energized by pressing the PTT switch. Regulated +9 Vdc is present on all exciter stages when the radio is turned on.

POWER AMPLIFIER BOARD

The four power amplifiers, which cover the frequency ranges of 29-42 MHz and 35-50 MHz and power levels of 60 watts and 110 watts, are very similar in construction and operation. The only differences are in the transistor types and some component values. The following description applies to all four versions.

RF Amplifiers

The Exciter RF output (EX OUT) is coupled to the PA input. The RF is then coupled through an ATTENUATOR pad consisting of resistors R1, R2 and R3, impedance matching transformer T1 and decoupling capacitor C1 to the base of PRE-AMPLIFIER transistor TR1. Inductor L1, diode CD1 and resistor R5 set the bias of TR1. Capacitor C4 and resistors R4 and R45 provide negative feedback to improve the stability of TR1. Collector voltage on TR1 is controlled by the power control circuit and is applied through a decoupling network consisting of capacitors C5, C6 and C7.

The output of TR1 is coupled to the base of DRIVER AMPLIFIER transistor TR2 through impedance matching transformer T2 and a frequency compensator consisting of capacitor C9 and resistor R6.
Capacitor C8 provides matching between T2 and the base of TR2. Capacitor C10 and resistor R7 provide negative feedback, and R8 and R46 maintain the stability of TR2.

Collector voltage to driver amplifier TR2 is supplied through a decoupling network consisting of capacitors C12 to C14 and inductor L4.

The RF output from TR2 is coupled to POWER AMPLIFIER transistors TR3 and TR4 through impedance matching transformer T4 and capacitors C17 and C56.

Power Amplifier

The POWER AMPLIFIER, consisting of transistors TR3 and TR4 and transformers T4 and T5, is a class-C push-pull power amplifier. Transformer T4 provides impedance matching and power splitting to the bases of TR3 and TR4. Capacitors C17 and C56 provide impedance-matching elements to T4. Resistors R10 and R11 provide the base loading for TR3 and TR4. Capacitors C19 and C21 and resistors R9 and R12 are negative feedback elements that maintain the stability of TR3 and TR4. Transformer T5 provides impedance matching and power combining for the collectors of TR3 and TR4. Capacitors C16 and C23 provide matching elements to T5. Capacitors C20 and C22 provide impedance-matching elements to the collectors of TR3 and TR4.

Operating voltage for the power amplifier is supplied from the DC input through transformer T5 and a decoupling network consisting of capacitors C24 through C26 and inductor L5.

The output of the POWER AMPLIFIER passes through T5 to the LOW-PASS FILTER consisting of capacitors C72 through C76 and inductors L18 and L19. The RF power passes through a 50 ohm microstrip and transmit/receive ANTENNA SWITCH diode CD5 to the LOW-PASS FILTER.

Power Control

The POWER CONTROL circuit provides closed-loop RF power leveling, and power turndown when high VSWR load conditions are sensed.

When the transmitter is keyed, Tx 9V turns on and supplies current to a DC AMPLIFIER consisting of transistors TR5, TR6 and IC1-1. This amplifier supplies voltage to the collector of TR1. The setting of RV1 determines the current supplied to the negative input of IC1-1.
As the detected RF power increases, the current to the negative input of IC1-1 increases, causing IC1-1 to pull current away from the base of TR5. This cuts back the drive to TR5 and TR6, which reduces the voltage at the collector of TR1, decreasing RF output power.

Figure 1 - Block Diagram, Low Band Power Amplifier
Copyright © March 1995, Ericsson Inc.

RF power is sensed by directional POWER COUPLER T6 and associated elements. Forward power is sensed by diode CD9 and reflected power by diode CD10. The forward power level is determined by the setting of RV1. Resistors R21 and R22 set the level of reflected RF power at which the control circuit reduces the RF output.

Thermal protection is provided by "posistor" R41 and associated elements. Posistor R41 is thermally connected to the body of transistor TR3. As the temperature of TR3 rises above 100 degrees Centigrade, the resistance of R41 increases and TR9 turns on. This diverts output current of IC102 from R27 to TR9, which lowers the voltage at the collector of TR1, reducing the power output.

Antenna Switch

The ANTENNA SWITCH consists of PIN diodes CD5, CD6 and CD7 and associated components. When the transmitter is keyed, Tx 9V switch TR11 and the Tx 9V regulator IC3 turn on. Rx 5V (Rx bias) turns off and Tx 9V forward-biases CD5. This results in a low impedance on CD5 and a high impedance on CD6 and CD7, isolating the transmitted power from the receiver.

Limiter

The limiter on the PA board consists of diode CD2, transistors TR6 and TR7, and associated components. During Rx, if the receive signal level exceeds +10 dBm, the rectified current of CD2 provides forward bias to TR6, TR7 and CD8 in proportion to the receive signal level.
This causes a tap-down circuit (CD6, CD7 and CD8) to turn on when the receive signal exceeds +10 dBm, protecting the receiver from excessively high signal levels. In the Rx mode, signals from the antenna are coupled through this limiter to the receiver input.

POWER AMPLIFIER BOARD

CAH-505AL (Used in P1), CAH-505AL (Used in P2)
CAH-505AH (Used in P3), CAH-505BH (Used in P4)
Issue 1

PARTS LIST (LBI-39139)

IC DATA

Linear OP amplifier IC1 (JRC NJM3404AM-T1)
Comparator IC2 (JRC NJM2404M-T1)
9 V Voltage Regulator IC3 (PANASONIC AN6541)
Voltage Regulator IC4 (PANASONIC AN6545SP)

29.0-50 MHz 60/110 WATTS POWER AMPLIFIER

OUTLINE DIAGRAM / ASSEMBLY DIAGRAM: COMPONENT SIDE, SOLDER SIDE
(DD00-CAH-505 1/2)

SCHEMATIC DIAGRAM: 29.0-50 MHz 60/110 WATTS POWER AMPLIFIER
(DD00-CAH-505 1/2)
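The closed-loop power levelling described in the Power Control section can be illustrated with a small numeric sketch. The real loop is analog (TR5, TR6 and IC1-1 trimming the collector voltage of TR1 against the RV1 setpoint); all gains and values below are invented for illustration:

```python
# Illustrative model of the closed-loop RF power levelling described above.
# The real circuit is analog; gains and starting values here are hypothetical.

def level_power(setpoint_w, gain=0.5, steps=50):
    """Drive detected forward power toward the setpoint by trimming the
    pre-amplifier collector voltage, as the DC amplifier loop does."""
    collector_v = 15.0      # assumed initial collector supply to TR1
    k_power = 5.0           # assumed W of RF output per collector volt
    for _ in range(steps):
        detected = k_power * collector_v      # coupler + detector, modelled linearly
        error = detected - setpoint_w         # comparison against the RV1 setting
        collector_v -= gain * error / k_power # reduce drive when power is too high
    return k_power * collector_v

print(round(level_power(60.0), 2))   # settles at the 60 W setpoint
```

The same mechanism implements turndown: raising the effective error (as the reflected-power detector or the posistor path does) simply lowers the power at which the loop settles.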
Patent title: MODIFICATION OF METAL SURFACES
Inventors: ZHAO, Qi; MÜLLER-STEINHAGEN, Hans
Application number: EP96935150.0
Filing date: 1996-11-04
Publication number: EP0858579A1
Publication date: 1998-08-19

Abstract: A method of maintaining a high value of heat transfer coefficient for a metal heat transfer surface, in which the surface energy of the heat transfer surface is reduced by ion implantation or other surface modification techniques. The reduction in surface energy of the heat transfer surface delays the onset of scale formation and additionally reduces the adherence of any scale formed to the metal surface. Besides ion implantation, the surface energy may be reduced by magnetron sputter ion plating or plasma enhanced chemical vapour deposition (PECVD). A combination of these techniques may be used. Implanted ions include Cr+, F+, C+, H+, N+ or SiF3+, the C+ and F+ being obtained from PTFE. Materials deposited by PECVD include Si3N4, Si, SiC, hexamethyldisilazane, tetraethylorthosilicate, or trimethylvinylsilane. As illustrated in Fig. 5, dynamic mixing ion implantation may be used, where two or more species of ion are simultaneously implanted.

Applicant: UNIVERSITY OF SURREY
Address: Guildford, Surrey GU2 5XH, GB
Nationality: GB
Agent: Knott, Stephen Gilbert, et al.
The PCU1 series are medium-powered primary current injection systems offering output currents up to 5000A. The system consists of a separate control unit containing all metering and control functions and a loading unit that provides the high current output. The PCU1-LT and PCU1-SP are ideally suited to primary current injection, stability testing and circuit breaker testing. In addition, the SP version offers direct-reading CT ratio and polarity tests and a 100A secondary injection output. T&R also offer the higher-powered PCU1-HP and PCU2.

The control units are rated at 11.5kVA with a 2 second overload capability of 23kVA using pulse mode. All metering is digital, and a memory facility is provided to hold the current reading when the output trips or is switched off. The current is thyristor controlled and is automatically switched off when the device under test trips.

The PCU1 systems have a high accuracy timing system, allowing timing tests to be carried out to a resolution of 1ms. Selection for normally open or normally closed contacts is automatic, and the status of the contacts is shown on the front panel. Timing modes are available to test under and over current devices, re-closers, under and over voltage devices, current trips and circuit breakers. A full range of high current output leads, in a range of lengths, is available to complement the system.

Features (PCU1-LT and PCU1-SP)
• 5kA maximum output current (higher overload currents for 2s)
• Multi-function digital timing system
• Digital true RMS memory ammeter
• Solid state switching
• 2000A and 5000A loading units
• Three range outputs on loading units
• Rugged, compact design
• Optional trolley mounting of system

PCU1-SP Additional Features
• Secondary injection up to 100A
• Direct reading CT ratio and polarity

Two loading units are available, delivering a maximum output current of 2000A or 5000A. Each loading unit has three output taps to allow for a wide range of load impedances.
For example, the NLU5000 may be configured to give a maximum current of 5000A on the 2.3V range, 2500A on the 4.6V range or 1250A on the 9.2V range.

Primary Current Injection Systems: PCU1-SP and PCU1-LT
[Photo: PCU1-SP & NLU5000 with optional trolley]

Feature                  PCU1-LT       PCU1-SP       PCU2
Primary injection        Yes           Yes           Yes
Max output power         11.5kVA 40s   11.5kVA 40s   20kVA 5 min
Secondary injection      No            Yes           No
CT ratio/polarity test   No            Yes           No

Note: Due to the company's continuous research programme, the information above may change at any time without prior notification. Please check that you have the most recent data on the product.

T&R Test Equipment Ltd, 15-16 Woodbridge Meadows, Guildford, Surrey, GU1 1BJ, UK
Tel: +44 (0)1483 207428  Fax: +44 (0)1483 511229  email: ****************

PCU1-LT/PCU1-SP Control Unit Specification

Loading Unit Current Metering

The AC output current is metered by a true RMS memory ammeter (acquisition time 200ms) with a liquid crystal display. The current metering has 3 ranges corresponding to 10%, 50% and 100% of the maximum rating of the loading unit. In addition, a 200% metering range is enabled in pulse mode.

NLU2000
Range  Full scale  Resolution  Accuracy
10%    200.0A      0.1A        ±0.5%rdg+5d *1
50%    1000A       1A          ±0.5%rdg+5d *1
100%   2000A       1A          ±0.5%rdg+5d *1
200%   4000A       1A          ±1%rdg+5d *2

NLU5000
Range  Full scale  Resolution  Accuracy
10%    500.0A      0.1A        ±0.5%rdg+5d *1
50%    2500A       1A          ±0.5%rdg+5d *1
100%   5000A       1A          ±0.5%rdg+5d *1
200%   10kA        10A         ±1.5%rdg+5d *2

*1 1.5%rdg+5d in pulse mode
*2 Selected in pulse mode

Timing System

The PCU1 systems have a flexible timing system with two contact inputs and 5 operating modes. Each contact circuit automatically selects for N/O or N/C contacts, and the status of each contact input is shown by an LED. The timing channels may also be triggered by a dc voltage between 24 and 240V.
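The current-operated timing mode (used to time circuit breakers with no auxiliary contacts) starts the timer once the current exceeds 20% of the selected metering range and stops it when the current falls back below that level. A minimal sketch, with an invented sample stream and sample period:

```python
# Hypothetical sketch of the "current operated" timing mode: the timer runs
# while the injected current stays above 20% of the selected metering range.
# The current trace and 1 ms sample period are invented for illustration.

def current_operated_time(samples_a, range_full_scale_a, dt_s=0.001):
    threshold = 0.20 * range_full_scale_a   # e.g. 100 A on the NLU5000 500 A range
    started = False
    elapsed = 0.0
    for i_a in samples_a:
        if i_a > threshold:
            started = True
            elapsed += dt_s
        elif started:
            break                            # current fell away: breaker has opened
    return round(elapsed, 3)                 # 1 ms resolution, as on the PCU1

# 50 ms of 400 A followed by near-zero current, on the 500 A range:
trace = [400.0] * 50 + [2.0] * 10
print(current_operated_time(trace, 500.0))
```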
Timer resolution: 1ms
Timer full scale: 999.999s
Timer accuracy: ±0.01%rdg+2d (4d in current mode)
Contact O/C voltage: 24V
Contact S/C current: 20mA
Vdc input range: 24-240Vdc

Timer mode           Timer start         Timer stop
Internal             Start 'On' button   Contact
Single contact       Contact 1           Contact 1
Dual contact         Contact 1           Contact 2
Current operated *3  Current >20% rng    Current <20% rng
Pulse mode 0.2s *4   'On' button         0.2s
Pulse mode 0.5s *4   'On' button         0.5s
Pulse mode 1s *4     'On' button         1s
Pulse mode 2s *4     'On' button         2s
Off                  Setting position

*3 Current operated mode is used to time circuit breakers with no auxiliary contacts. The timer is started when the current exceeds 20% of the selected metering range (e.g. 100A on the NLU5000 500A range). The timer stops when the current falls.
*4 Pulse mode applies current to the load for a maximum of the specified time. If contact set 1 changes state or the current drops below 20% of the metering range during the pulse time, the timer is stopped. The maximum output current is increased in pulse mode. The maximum obtainable current is set by the impedance of the test object and output leads.

Secondary Injection Output (PCU1-SP only)

The secondary current injection output on the unit has two taps, allowing the injection of currents up to 100A.

Output range  Continuous current  5 min on *5  1 min on *5
0-5V          33A                 67A          100A
0-16V         10A                 20A          30A

*5 All on times must be followed by an off time of 15 minutes.

Metering
Range   Resolution  Accuracy      Current trip
10.00A  0.01A       ±0.5%rdg+5d   10.5A
20.00A  0.01A       ±0.5%rdg+5d   21A
100.0A  0.1A        ±0.5%rdg+5d   100A

The output is protected by electronic trips and a circuit breaker.

Supply Requirements

230V ±10%, 45-65Hz, 1ph, 11.5kVA max (23kVA overload for 2s)

Control Unit Standard Accessories

Mains lead (5m), loading unit power and metering leads (5m), operating manual and spare fuses.

Dimensions

PCU1-SP/PCU1-LT: 450(w) x 275(h) x 305(d) mm
NLU2000/NLU5000: 450(w) x 275(h) x 370(d) mm incl. terminals

Weight

PCU1-LT: 23kg
PCU1-SP: 26kg
NLU5000: 55kg
NLU2000: 50kg

Temperature Range

Storage -20°C to 60°C, Operating 0°C to 45°C

Protection and Safety

The PCU1 series and loading units are CE marked and are designed to meet the requirements of BS EN61010. The system is protected by electronic trips on the outputs, and circuit breakers on the mains input, control unit output and secondary injection output (PCU1-SP only). The unit also has a duty cycle trip on the loading unit output, and the loading unit has comprehensive thermal protection.

Optional Loading Unit Specifications

Two loading units are available to provide a range of output currents suitable for different primary injection tasks. Each loading unit has three output taps allowing current injection into a wide range of loads of differing impedances. Optional output lead sets are also available (see for details).

NLU5000 Loading Unit Intermittent Ratings

Output voltage*  Cont.   5 min   1 min   40s
2.3V             1500A   3000A   4500A   5000A
4.6V             750A    1500A   2250A   2500A
9.2V             375A    750A    1125A   1250A

NLU2000 Loading Unit Intermittent Ratings

Output voltage*  Cont.   5 min   1 min   40s
3.5V             600A    1200A   1800A   2000A
6.9V             300A    600A    900A    1200A
13.8V            150A    300A    450A    500A

PCU1-LT-SP Data Sheet rev 4, 28/12/05
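The "±X% rdg + N d" accuracy figures combine a percentage of the reading with N counts of the displayed resolution. A small worked example (the 4000 A reading is chosen arbitrarily):

```python
# Worst-case metering uncertainty for a "±X% rdg + N d" specification,
# where one "digit" is one count of the displayed resolution.
def meter_uncertainty(reading, pct_rdg, digits, resolution):
    return pct_rdg / 100.0 * reading + digits * resolution

# NLU5000, 100% range (5000 A full scale, 1 A resolution, ±0.5% rdg + 5 d),
# for an example reading of 4000 A:
print(meter_uncertainty(4000.0, 0.5, 5, 1.0))   # -> 25.0 A worst case
```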
3B SCIENTIFIC® PHYSICS

Operating Instructions
06/15 LW/ALF

1 Coil tap
2 Ignition coil
3 Base plate
4 4-mm safety sockets
5 Spark gap (spark plug)
6 Primary coil
7 Secondary coil
8 Spherical electrode, short
9 Spherical electrode, long

∙ Caution! Careful experimentation by qualified personnel is required! Teacher's experiment!
∙ Operate indoors only!
∙ The Tesla transformer may only be operated as described here, with the accessories supplied!
∙ The Tesla transformer generates high-frequency electromagnetic waves. Because of their large bandwidth, the transformer can disturb or destroy sensitive electronic devices in its immediate vicinity. Such devices must therefore be placed at least five metres away.
∙ The frequencies radiated by the Tesla transformer lie in the range of numerous radio frequencies. It may therefore only be operated briefly, for training purposes.
∙ The transformer must not be operated if persons with cardiac pacemakers or other electronic control devices are nearby. Danger to life!
∙ The device must not be operated by persons at risk from electric shock (for health reasons)!
∙ Do not carry out experiments on animals or other living creatures with the Tesla transformer!
∙ The Tesla transformer must not come into contact with liquids or become damp!
∙ In the event of damage or faults, do not repair the Tesla transformer yourself!
∙ Do not touch any metal or other conductive parts of the Tesla transformer. Keep a safety distance of 20 cm from the high-voltage coil to avoid spark flashover.
∙ Do not use near flammable materials or ignitable liquids, gases or vapours: risk of sparking!
∙ In developing the Tesla transformer, an optimal compromise was struck between its performance and the universality of its use while observing the necessary safety precautions.
∙ Complete protection against touching live parts was deliberately omitted so that students can clearly see the construction and operation of the device in all its details.
∙ The safety of the experimenter is fully guaranteed in every respect provided that changes to the Tesla transformer (varying the number of primary turns) and to the experimental arrangement are made only with the device switched off. No experiment requires touching parts of the transformer or of the experimental arrangement while voltage is applied to the Tesla transformer.
∙ The input voltage of the Tesla transformer (20 V) poses no safety risk in handling, nor does the primary current (3 A).
∙ The same applies to the output voltage and current. The secondary voltage has a frequency of 200 kHz to 1200 kHz at a voltage of approx. 100,000 V. The maximum current is about 0.08 mA.

The Tesla transformer serves for demonstrating and investigating the physical laws of high-frequency electromagnetic waves. In particular, the Tesla transformer allows the following phenomena to be demonstrated:
∙ Generation of high-frequency electromagnetic oscillations in a resonant circuit of low inductance and capacitance
∙ Shielding of high-frequency electromagnetic oscillations
∙ Contact-free lighting of a fluorescent lamp in the high-frequency field
∙ Corona discharge
∙ Spark discharge
∙ Wireless energy transmission by Hertzian waves
∙ Penetration and absorption of Hertzian waves
∙ Standing waves on a Tesla coil

2.1 Assembly

The secondary coil is inserted centrally into the primary coil and plugged in.
The Tesla transformer is connected to an AC voltage source via the connection sockets (4).

2.2 Principle of operation

With one half-wave of the supply voltage, a capacitor is charged via the ignition coil; it discharges through the spark gap (spark plug) and the primary winding of the Tesla transformer.

A damped oscillation arises at the primary winding, which transfers energy to the secondary coil. There, an electromagnetic oscillation in the range of 200 kHz to 1200 kHz is produced.

In the secondary coil, a high-frequency high voltage of more than 100 kV arises. The secondary coil oscillates in resonance with the resonant circuit.

1 Tesla transformer
1 Spherical electrode, short
1 Spherical electrode, long
1 Needle electrode with spray wheel
1 Hand-held coil
1 Secondary coil
1 Fluorescent tube with holder
1 Reflector

Dimensions, transformer: 330x200x120 mm³
Secondary coil: 240 mm x 75 mm Ø
Mass, transformer: approx. 3 kg
Number of turns, primary coil: 9
Number of turns, secondary coil: 1150
Primary voltage: 20 V AC
Secondary voltage: approx. 100 kV

Additional coil: 1000967
AC/DC power supply, 30 V, 6 A @230V: 1003593

∙ For all the experiments described below, a power supply with adjustable AC voltage of 15 ... 24 V (max. 4 A) is required. To start up, the supply voltage is increased until a periodic spark discharge occurs at the spark plug.
∙ The device is not suitable for continuous operation. After 5 minutes of operation, a cooling period of at least 15 minutes should be observed.

7.1 Shielding of electromagnetic oscillations

∙ The Tesla transformer is operated without the secondary coil. The hand-held coil is placed on the plastic ring of the primary coil.
∙ The tap of the primary coil is set to the highest position. After the Tesla transformer is switched on, a voltage is induced in the hand-held coil (the lamp lights up).
∙ Then the reflector is pushed between the primary coil and the hand-held coil.
The aluminium foil shields the electromagnetic oscillations. The lamp on the hand-held coil no longer lights.

7.2 Thomson's equation of oscillation

∙ The Tesla transformer is operated with one secondary coil. The needle electrode is plugged into the socket on the top.
∙ After the voltage is applied, a corona discharge appears at the needle tip. By varying the tap position (changing the primary coil inductance), the maximum of the discharge is set (voltage peaking at resonance).
∙ Then two secondary coils are stacked one on top of the other, and the needle electrode is inserted into the upper coil.
∙ Resonance now occurs at a higher number of primary turns, since the secondary coil pair has twice the number of turns. Its natural frequency has therefore become lower.
∙ Increasing the number of turns of the resonant circuit coil reduces the resonant frequency.

7.3 Corona discharge

∙ The Tesla transformer is operated with two secondary coils and the needle electrode attached.
∙ The number of turns of the resonant circuit coil is set to 7. A corona discharge appears at the tip of the needle as a result of the high voltage.

7.4 Electric wind

∙ The transformer is operated with one secondary coil and a primary turn count of 4. The needle electrode with spray wheel is placed on the secondary coil.
∙ The ends of the S-shaped spray wheel taper to points. There, electrons are emitted as a result of the high electric field strength. They attach to air molecules, which are repelled. The movement of the air molecules produces a recoil that sets the wheel in motion.

7.5 Spark discharge

∙ The transformer is operated with only one secondary coil and a primary turn count of 4.
The needle electrode is placed on the secondary coil.
∙ The long spherical electrode is plugged into the second earthing socket and its sphere is aligned toward the needle tip.
∙ Lively sparks several centimetres long jump between the sphere and the needle tip.

7.6 Wireless energy transmission

∙ The transformer is operated with one secondary coil and the spherical electrode attached.
∙ A second secondary coil, with stand foot and holder for the fluorescent tube, is set up at a distance of about 1 m from the Tesla transformer.
∙ The earthing sockets of the stand foot and of the Tesla transformer are to be connected with a laboratory lead.
∙ After the Tesla transformer is switched on, the tube can be seen glowing in a partially darkened room. Wireless energy transmission takes place between the coils.
∙ With the reflector set up between the coils, the shielding effect of the metal foil can be demonstrated.

7.7 Standing waves on the Tesla coil

∙ The Tesla transformer is operated with two connected coils. The small spherical electrode sits at the upper end. The number of primary turns is set to 8.
∙ The hand-held coil is moved slowly downward over the coil pair from above. With increasing depth, the lamp lights more brightly. The secondary coil oscillates as a λ/4 dipole. A current node occurs at the upper end, a current antinode at the lower end.
∙ The number of primary turns is reduced to 3 and the hand-held coil is again moved slowly downward from the upper end of the Tesla coil. A current node occurs at the upper end; the lamp does not light, or lights only weakly.
∙ On moving further down, two more oscillation antinodes and one oscillation node can be detected.
The Tesla coil oscillates as a 3/4-λ dipole.

∙ Store the device in a clean, dry and dust-free place.
∙ Disconnect the device from the power supply before cleaning.
∙ Do not use aggressive cleaning agents or solvents for cleaning.
∙ Use a soft, damp cloth for cleaning.
∙ The packaging is to be disposed of at the local recycling facilities.
∙ If the device itself is to be scrapped, it does not belong in normal household waste. The local regulations for the disposal of electronic scrap must be observed.

3B Scientific GmbH ▪ Rudorffweg 8 ▪ 21031 Hamburg ▪ Germany
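The resonance tuning in experiment 7.2 follows Thomson's equation of oscillation, f = 1 / (2π√(LC)). The L and C values below are invented for illustration (the manual gives no circuit values); the point is only the scaling: for a fixed geometry L grows roughly as the square of the turn count, so doubling the turns roughly halves the natural frequency.

```python
import math

# Thomson's equation of oscillation: f = 1 / (2*pi*sqrt(L*C)).
# L and C values are hypothetical; only the ratio matters here.
def resonant_frequency_hz(l_henry, c_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

f1 = resonant_frequency_hz(100e-6, 1e-9)   # one secondary coil (assumed L, C)
f2 = resonant_frequency_hz(400e-6, 1e-9)   # twice the turns: L roughly x4
print(round(f1 / f2, 1))                   # -> 2.0: the frequency halves
```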
3B SCIENTIFIC® PHYSICS

Instructions for use
09/15 ALF

1 Shaft
2 Sleeve (movable)
3 Lever arm
4 Helical spring
5 Lever arm with weight

There is a risk of injury due to the large centrifugal force developed. Therefore:
∙ Before the experiment, check that the weights are firmly attached to the pendulum rods.
∙ Insert the shaft deep into the chuck of the experiment motor and tighten it firmly.
∙ Keep a safe distance.
∙ Increase the angular velocity slowly.
∙ Do not touch the rotating bodies.
∙ Disconnect the power supply before dismantling the equipment.

Rotating parts can catch long hair, loose clothing or jewellery and wind them up.
∙ To avoid this danger, a hairnet must be worn if you have long hair.
∙ Remove any unsuitable clothing or jewellery.

The Watt governor serves to demonstrate the principle of speed regulation, for example in steam engines. A double pendulum is mounted centrally on a shaft. At rest, the pendulums are held together by a spring. During rotation, the pendulums are raised further and further along the shaft, depending on the rotational speed. This displacement is used technically for regulation tasks (centrifugal governor).

Maximum diameter: 350 mm
Height: 250 mm
Shaft diameter: 10 mm
Weight: approx. 0.4 kg

The following equipment is additionally required to carry out the experiments:
1 Experiment motor with gearbox 1002663
1 DC power supply 0-20 V @230 V 1003312 or
1 DC power supply 0-20 V @115 V 1003311
1 Duplex stand 1002836
Experiment leads

∙ Mount the experiment motor on the stand.
∙ Insert the shaft of the Watt governor deep into the chuck of the experiment motor and tighten it firmly.
∙ Connect the power supply to the experiment motor.
∙ First set the output voltage of the power supply to zero, then switch it on.
∙ To increase the rotational speed, slowly raise the output voltage of the supply and observe the deflection of the Watt governor from the vertical.

Fig. 1: Experimental setup

3B Scientific GmbH ▪ Rudorffweg 8 ▪ 21031 Hamburg ▪ Germany ▪ Subject to technical change © Copyright 2015 3B Scientific GmbH
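The deflection observed in the last step can be estimated by modelling each arm as a conical pendulum, where the equilibrium angle satisfies cos θ = g / (ω²L). The arm length below is an assumed value, not taken from the apparatus, and the simple model ignores the spring and sleeve:

```python
import math

# Equilibrium deflection of a centrifugal (Watt) governor modelled as a
# conical pendulum: cos(theta) = g / (omega^2 * L). The arm length is an
# assumed value; the restoring spring of the apparatus is ignored.
def deflection_deg(rpm, arm_length_m=0.15, g=9.81):
    omega = rpm * 2.0 * math.pi / 60.0        # rev/min -> rad/s
    c = g / (omega ** 2 * arm_length_m)
    if c >= 1.0:
        return 0.0                            # spinning too slowly: arms stay down
    return math.degrees(math.acos(c))

for rpm in (60, 120, 240):
    print(rpm, round(deflection_deg(rpm), 1))
```

The sharp rise of the angle with speed is what makes the sleeve displacement usable as a control signal.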
Bundle Adjustment—A Modern Synthesis

Bill Triggs¹, Philip McLauchlan², Richard Hartley³ and Andrew Fitzgibbon⁴

¹ INRIA Rhône-Alpes, 655 avenue de l'Europe, 38330 Montbonnot, France.
  Bill.Triggs@inrialpes.fr  http://www.inrialpes.fr/movi/people/Triggs
² School of Electrical Engineering, Information Technology & Mathematics, University of Surrey, Guildford, GU2 5XH, U.K.  P.McLauchlan@ /Personal/P.McLauchlan
³ General Electric CRD, Schenectady, NY, 12301.  hartley@
⁴ Dept of Engineering Science, University of Oxford, 19 Parks Road, OX1 3PJ, U.K.  awf@ /awf

Abstract

This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.

Keywords: Bundle Adjustment, Scene Reconstruction, Gauge Freedom, Sparse Matrices, Optimization.

1 Introduction

This paper is a survey of the theory and methods of bundle adjustment aimed at the computer vision community, and more especially at potential implementors who already know a little about bundle methods. Most of the results appeared long ago in the photogrammetry and geodesy literatures, but many seem to be little known in vision, where they are gradually being reinvented. By providing an accessible modern synthesis, we hope to forestall some of this duplication of effort, correct some common misconceptions, and speed progress in visual reconstruction by promoting interaction between the vision and photogrammetry communities.

Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal
3D structure and viewing parameter (camera pose and/or calibration) estimates. Optimal means that the parameter estimates are found by minimizing some cost function that quantifies the model fitting error, and jointly that the solution is simultaneously optimal with respect to both structure and camera variations. The name refers to the 'bundles' of light rays leaving each 3D feature and converging on each camera centre, which are 'adjusted' optimally with respect to both feature and camera positions. Equivalently (unlike independent model methods, which merge partial reconstructions without updating their internal structure), all of the structure and camera parameters are adjusted together 'in one bundle'.

Bundle adjustment is really just a large sparse geometric parameter estimation problem, the parameters being the combined 3D feature coordinates, camera poses and calibrations. Almost everything that we will say can be applied to many similar estimation problems in vision, photogrammetry, industrial metrology, surveying and geodesy. Adjustment computations are a major common theme throughout the measurement sciences, and once the basic theory and methods are understood, they are easy to adapt to a wide variety of problems. Adaptation is largely a matter of choosing a numerical optimization scheme that exploits the problem structure and sparsity. We will consider several such schemes below for bundle adjustment.

Classically, bundle adjustment and similar adjustment computations are formulated as nonlinear least squares problems [19, 46, 100, 21, 22, 69, 5, 73, 109]. The cost function is assumed to be quadratic in the feature reprojection errors, and robustness is provided by explicit outlier screening.
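The classical nonlinear least squares formulation can be illustrated with a toy Gauss-Newton iteration. This is a one-parameter sketch, not a real bundle adjuster: the "reprojection" model, observations and starting point are all invented, but the normal-equations step is the same one a bundle routine takes over millions of sparse parameters.

```python
# Toy Gauss-Newton refinement of a single parameter x, minimizing the sum of
# squared residuals r_i = z_i - f(x, u_i). Model and data are illustrative.
def gauss_newton(x, observations, model, jacobian, iters=20):
    for _ in range(iters):
        num = den = 0.0
        for u, z in observations:
            r = z - model(x, u)        # residual of one observation
            j = jacobian(x, u)         # d model / d x
            num += j * r               # J^T r
            den += j * j               # J^T J (a scalar here)
        x += num / den                 # normal-equations step
    return x

# A hypothetical "projection" u/x, with noiseless observations made at x = 2:
model = lambda x, u: u / x
jac = lambda x, u: -u / x ** 2
obs = [(u, u / 2.0) for u in (1.0, 2.0, 3.0, 4.0)]
print(round(gauss_newton(2.5, obs, model, jac), 6))   # converges to 2.0
```

In a real bundle problem J^T J is a large sparse matrix rather than a scalar, and exploiting that sparsity is exactly the efficiency point made below.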
Although it is already very flexible, this model is not really general enough. Modern systems often use non-quadratic M-estimator-like distributional models to handle outliers more integrally, and many include additional penalties related to overfitting, model selection and system performance (priors, MDL). For this reason, we will not assume a least squares/quadratic cost model. Instead, the cost will be modelled as a sum of opaque contributions from the independent information sources (individual observations, prior distributions, overfitting penalties...). The functional forms of these contributions and their dependence on fixed quantities such as observations will usually be left implicit. This allows many different types of robust and non-robust cost contributions to be incorporated, without unduly cluttering the notation or hiding essential model structure. It fits well with modern sparse optimization methods (cost contributions are usually sparse functions of the parameters) and object-centred software organization, and it avoids many tedious displays of chain-rule results. Implementors are assumed to be capable of choosing appropriate functions and calculating derivatives themselves.

One aim of this paper is to correct a number of misconceptions that seem to be common in the vision literature:

• "Optimization/bundle adjustment is slow": Such statements often appear in papers introducing yet another heuristic Structure from Motion (SFM) iteration. The claimed slowness is almost always due to the unthinking use of a general-purpose optimization routine that completely ignores the problem structure and sparseness. Real bundle routines are much more efficient than this, and usually considerably more efficient and flexible than the newly suggested method (§6, 7). That is why bundle adjustment remains the dominant structure refinement technique for real applications, after 40 years of research.

• "Only linear algebra is required": This is a recent variant of the above, presumably meant to imply that
the new technique is especially simple. Virtually all iterative refinement techniques use only linear algebra, and bundle adjustment is simpler than many in that it only solves linear systems: it makes no use of eigen-decomposition or SVD, which are themselves complex iterative methods.

• "Any sequence can be used": Many vision workers seem to be very resistant to the idea that reconstruction problems should be planned in advance (§11), and results checked afterwards to verify their reliability (§10). System builders should at least be aware of the basic techniques for this, even if application constraints make it difficult to use them. The extraordinary extent to which weak geometry and lack of redundancy can mask gross errors is too seldom appreciated, cf. [34, 50, 30, 33].

• "Point P is reconstructed accurately": In reconstruction, just as there are no absolute references for position, there are none for uncertainty. The 3D coordinate frame is itself uncertain, as it can only be located relative to uncertain reconstructed features or cameras. All other feature and camera uncertainties are expressed relative to the frame and inherit its uncertainty, so statements about them are meaningless until the frame and its uncertainty are specified. Covariances can look completely different in different frames, particularly in object-centred versus camera-centred ones.
See §9.

There is a tendency in vision to develop a profusion of ad hoc adjustment iterations. Why should you use bundle adjustment rather than one of these methods?

• Flexibility: Bundle adjustment gracefully handles a very wide variety of different 3D feature and camera types (points, lines, curves, surfaces, exotic cameras), scene types (including dynamic and articulated models, scene constraints), information sources (2D features, intensities, 3D information, priors) and error models (including robust ones). It has no problems with missing data.

• Accuracy: Bundle adjustment gives precise and easily interpreted results because it uses accurate statistical error models and supports a sound, well-developed quality control methodology.

• Efficiency: Mature bundle algorithms are comparatively efficient even on very large problems. They use economical and rapidly convergent numerical methods and make near-optimal use of problem sparseness.

In general, as computer vision reconstruction technology matures, we expect that bundle adjustment will predominate over alternative adjustment methods in much the same way as it has in photogrammetry. We see this as an inevitable consequence of a greater appreciation of optimization (notably, more effective use of problem structure and sparseness), and of systems issues such as quality control and network design.

Coverage: We will touch on a good many aspects of bundle methods. We start by considering the camera projection model and the parametrization of the bundle problem (§2), and the choice of error metric or cost function (§3). §4 gives a rapid sketch of the optimization theory we will use. §5 discusses the network structure (parameter interactions and characteristic sparseness) of the bundle problem. The following three sections consider three types of implementation strategies for adjustment computations: §6 covers second order Newton-like methods, which are still the most often used adjustment algorithms; §7 covers methods with only first order convergence (most of the ad hoc
methods are in this class); and §8 discusses solution updating strategies and recursive filtering bundle methods. §9 returns to the theoretical issue of gauge freedom (datum deficiency), including the theory of inner constraints. §10 goes into some detail on quality control methods for monitoring the accuracy and reliability of the parameter estimates. §11 gives some brief hints on network design, i.e. how to place your shots to ensure accurate, reliable reconstruction. §12 completes the body of the paper by summarizing the main conclusions and giving some provisional recommendations for methods. There are also several appendices. §A gives a brief historical overview of the development of bundle methods, with literature references. §B gives some technical details of matrix factorization, updating and covariance calculation methods. §C gives some hints on designing bundle software, and pointers to useful resources on the Internet. The paper ends with a glossary and references.

General references: Cultural differences sometimes make it difficult for vision workers to read the photogrammetry literature. The collection edited by Atkinson [5] and the manual by Karara [69] are both relatively accessible introductions to close-range (rather than aerial) photogrammetry.
Other accessible tutorial papers include [46, 21, 22]. Kraus [73] is probably the most widely used photogrammetry textbook. Brown's early survey of bundle methods [19] is well worth reading. The often-cited manual edited by Slama [100] is now quite dated, although its presentation of bundle adjustment is still relevant. Wolf & Ghiliani [109] is a text devoted to adjustment computations, with an emphasis on surveying. Hartley & Zisserman [62] is an excellent recent textbook covering vision geometry from a computer vision viewpoint. For nonlinear optimization, Fletcher [29] and Gill et al [42] are the traditional texts, and Nocedal & Wright [93] is a good modern introduction. For linear least squares, Björck [11] is superlative, and Lawson & Hanson is a good older text. For more general numerical linear algebra, Golub & Van Loan [44] is the standard. Duff et al [26] and George & Liu [40] are the standard texts on sparse matrix techniques. We will not discuss initialization methods for bundle adjustment in detail, but appropriate reconstruction methods are plentiful and well-known in the vision community. See, e.g., [62] for references.

Notation: The structure, cameras, etc., being estimated will be parametrized by a single large state vector x. In general the state belongs to a nonlinear manifold, but we linearize this locally and work with small linear state displacements denoted δx. Observations (e.g. measured image features) are denoted z, and the corresponding prediction errors Δz(x) ≡ z − z(x), where z(x) is the model's prediction. However, observations and prediction errors usually only appear implicitly, through their influence on the cost function f(x) = f(pred z(x)). The cost function's gradient is g ≡ df/dx, its second derivative matrix is d²f/dx², and the observation-state Jacobian is J ≡ dz/dx.

For simplicity, suppose that the scene is modelled by individual static 3D features X_p, p = 1...n, imaged in m shots with camera pose and internal calibration parameters P_i, i = 1...m. There may also be further calibration parameters C_c, c = 1...k, constant across several images (e.g., depending on which of several cameras was used). We are given uncertain measurements x_ip of some of the image features. For each such observation x_ip, we assume that we have
a predictive model x_ip = x(C_c, P_i, X_p) based on the parameters, that can be used to derive a feature prediction error Δx_ip(C_c, P_i, X_p) ≡ x_ip − x(C_c, P_i, X_p), the difference between the observed and predicted feature positions. But as the observations x…

Figure 1: Vision geometry and its error model are essentially projective. Affine parametrization introduces an artificial singularity at projective infinity, which may cause numerical problems for distant features.

Although they are in principle equivalent, different parametrizations often have profoundly different numerical behaviours which greatly affect the speed and reliability of the adjustment iteration. The most suitable parametrizations for optimization are as uniform, finite and well-behaved as possible near the current state estimate. Ideally, they should be locally close to linear in terms of their effect on the chosen error model, so that the cost function is locally nearly quadratic. Nonlinearity hinders convergence by reducing the accuracy of the second order cost model used to predict state updates (§6). Excessive correlations and parametrization singularities cause ill-conditioning and erratic numerical behaviour. Large or infinite parameter values can only be reached after excessively many finite adjustment steps. Any given parametrization will usually only be well-behaved in this sense over a relatively small section of state space. So to guarantee uniformly good performance, however the state itself may be represented, state updates should be evaluated using a stable local parametrization based on increments from the current estimate. As examples we consider 3D points and rotations.

3D points: Even for calibrated cameras, vision geometry and visual reconstructions are intrinsically projective. If a 3D (X Y Z) parametrization (or equivalently a homogeneous affine (X Y Z 1) one) is used for very distant 3D points, large X, Y, Z displacements are needed to change the image significantly. I.e., in (X Y Z) space the cost function becomes very flat and steps needed for cost adjustment become very large for
distant points. In comparison, with a homogeneous projective parametrization (X Y Z W), the behaviour near infinity is natural, finite and well-conditioned so long as the normalization keeps the homogeneous 4-vector finite at infinity (by sending W → 0 there). In fact, there is no immediate visual distinction between the images of real points near infinity and virtual ones 'beyond' it (all camera geometries admit such virtual points as bona fide projective constructs). The optimal reconstruction of a real 3D point may even be virtual in this sense, if image noise happens to push it 'across infinity'. Also, there is nothing to stop a reconstructed point wandering beyond infinity and back during the optimization. This sounds bizarre at first, but it is an inescapable consequence of the fact that the natural geometry and error model for visual reconstruction is projective rather than affine. Projectively, infinity is just like any other place. Affine parametrization (X Y Z 1) is acceptable for points near the origin with close-range convergent camera geometries, but it is disastrous for distant ones because it artificially cuts away half of the natural parameter space, and hides the fact by sending the resulting edge to infinite parameter values. Instead, you should use a homogeneous parametrization (X Y Z W) for distant points, e.g. with spherical normalization Σ_i X_i² = 1.

Rotations: Similarly, experience suggests that quasi-global 3-parameter rotation parametrizations such as Euler angles cause numerical problems unless one can be certain to avoid their singularities and regions of uneven coverage. Rotations should be parametrized using either quaternions subject to ‖q‖² = 1, or local perturbations R δR or δR R of an existing rotation R, where δR can be any well-behaved 3-parameter small rotation approximation, e.g. δR = (I + [δr]×), the Rodriguez formula, local Euler angles, etc.

State updates: Just as state vectors x represent points in some nonlinear space, state updates x → x + δx represent
displacements in this nonlinear space that often can not be represented exactly by vector addition. Nevertheless, we assume that we can locally linearize the state manifold, locally resolving any internal constraints and freedoms that it may be subject to, to produce an unconstrained vector δx parametrizing the possible local state displacements. We can then, e.g., use Taylor expansion in δx to form a local cost model f(x + δx) ≈ f(x) + (df/dx) δx + (1/2) δx^T (d²f/dx²) δx, from which state updates can be estimated (the dispersion matrix being the inverse of the cost Hessian d²f/dx² at the minimum). Also, the residual cost at the minimum can be used as a test statistic for model validity (§10). E.g., for a negative log likelihood cost model with Gaussian error distributions, twice the residual is a χ² variable.

3.1 Desiderata for the Cost Function

In adjustment computations we go to considerable lengths to optimize a large nonlinear cost model, so it seems reasonable to require that the refinement should actually improve the estimates in some objective (albeit statistical) sense. Heuristically motivated cost functions can not usually guarantee this. They almost always lead to biased parameter estimates, and often severely biased ones. A large body of statistical theory points to maximum likelihood (ML) and its Bayesian cousin maximum a posteriori (MAP) as the estimators of choice. ML simply selects the model for which the total probability of the observed data is highest, or saying the same thing in different words, for which the total posterior probability of the model given the observations is highest. MAP adds a prior term representing background information. ML could just as easily have included the prior as an additional 'observation': so far as estimation is concerned, the distinction between ML/MAP and prior/observation is purely terminological.

Information usually comes from many independent sources. In bundle adjustment these include: covariance-weighted reprojection errors of individual image features; other measurements such as 3D positions of control points, GPS or inertial sensor readings; predictions from uncertain dynamical
models (for 'Kalman filtering' of dynamic cameras or scenes); prior knowledge expressed as soft constraints (e.g. on camera calibration or pose values); and supplementary sources such as overfitting, regularization or description length penalties. Note the variety. One of the great strengths of adjustment computations is their ability to combine information from disparate sources. Assuming that the sources are statistically independent of one another given the model, the total probability for the model given the combined data is the product of the probabilities from the individual sources. To get an additive cost function we take logs, so the total log likelihood for the model given the combined data is the sum of the individual source log likelihoods.

Properties of ML estimators: Apart from their obvious simplicity and intuitive appeal, ML and MAP estimators have strong statistical properties. Many of the most notable ones are asymptotic, i.e. they apply in the limit of a large number of independent measurements, or more precisely in the central limit where the posterior distribution becomes effectively Gaussian¹. In particular:

• Under mild regularity conditions on the observation distributions, the posterior distribution of the ML estimate converges asymptotically in probability to a Gaussian with covariance equal to the dispersion matrix.

• The ML estimate asymptotically has zero bias and the lowest variance that any unbiased estimator can have. So in this sense, ML estimation is at least as good as any other method².

Non-asymptotically, the dispersion is not necessarily a good approximation for the covariance of the ML estimator. The asymptotic limit is usually assumed to be valid for well-designed highly-redundant photogrammetric measurement networks, but recent sampling-based empirical studies of posterior likelihood surfaces [35, 80, 68] suggest that the case is much less clear for small vision geometry problems and weaker networks. More work is needed on this.

¹ … n_z − n_x, where
O(σ) characterizes the average standard deviation from a single observation. For n_z − n_x ≫ (σ/r)², essentially the entire posterior probability mass lies inside the quadratic region, so the posterior distribution converges asymptotically in probability to a Gaussian. This happens at any proper isolated cost minimum at which second order Taylor expansion is locally valid. The approximation gets better with more data (stronger curvature) and smaller higher order Taylor terms.

² This result follows from the Cramér-Rao bound (e.g. [23]), which says that the covariance of any unbiased estimator is bounded below by the inverse of the Fisher information or mean curvature of the posterior log likelihood surface: ⟨(x − x₀)(x − x₀)^T⟩ ⪰ ⟨−d² log p/dx²⟩⁻¹, where x₀ is the true underlying x value, and A ⪰ B denotes positive semidefiniteness of A − B. Asymptotically, the posterior distribution becomes Gaussian and the Fisher information converges to the inverse dispersion (the curvature of the posterior log likelihood surface at the cost minimum), so the ML estimate attains the Cramér-Rao bound.

[Figure 2 shows three panels: the probability density functions of a Gaussian and a Cauchy distribution; their negative log likelihoods; and 1000 samples drawn from each distribution.]

Figure 2: Beware of treating any bell-shaped observation distribution as a Gaussian. Despite being narrower in the peak and broader in the tails, the probability density function of a Cauchy distribution, p(x) = (π(1 + x²))⁻¹, does not look so very different from that of a Gaussian (top left). But their negative log likelihoods are very different (bottom left), and large deviations ("outliers") are much more probable for Cauchy variates than for Gaussian ones (right). In fact, the Cauchy distribution has infinite covariance.

The effect of incorrect error models: It is clear that incorrect modelling of the observation distributions is likely to disturb the ML estimate. Such mismodelling is to some extent inevitable because error
distributions stand for influences that we can not fully predict or control. To understand the distortions that unrealistic error models can cause, first realize that geometric fitting is really a special case of parametric probability density estimation. For each set of parameter values, the geometric image projection model and the assumed observation error models combine to predict a probability density for the observations. Maximizing the likelihood corresponds to fitting this predicted observation density to the observed data. The geometry and camera model only enter indirectly, via their influence on the predicted distributions.

Accurate noise modelling is just as critical to successful estimation as accurate geometric modelling. The most important mismodelling is failure to take account of the possibility of outliers (aberrant data values, caused e.g., by blunders such as incorrect feature correspondences). We stress that so long as the assumed error distributions model the behaviour of all of the data used in the fit (including both inliers and outliers), the above properties of ML estimation including asymptotic minimum variance remain valid in the presence of outliers. In other words, ML estimation is naturally robust: there is no need to robustify it so long as realistic error distributions were used in the first place. A distribution that models both inliers and outliers is called a total distribution. There is no need to separate the two classes, as ML estimation does not care about the distinction. If the total distribution happens to be an explicit mixture of an inlier and an outlier distribution (e.g., a Gaussian with a locally uniform background of outliers), outliers can be labeled after fitting using likelihood ratio tests, but this is in no way essential to the estimation process.

It is also important to realize the extent to which superficially similar distributions can differ from a Gaussian, or equivalently, how extraordinarily rapidly the tails of a Gaussian
distribution fall away compared to more realistic models of real observation errors. See figure 2. In fact, un-modelled outliers typically have very severe effects on the fit. To see this, suppose that the real observations are drawn from a fixed (but perhaps unknown) underlying distribution p₀(z). So for each model, the negative log likelihood cost sum −Σ_i log p_model(z_i|x) converges to −n_z ∫ p₀(z) log(p_model(z|x)) dz. Up to a model-independent constant, this is n_z times the relative entropy or Kullback-Leibler divergence ∫ p₀(z) log(p₀(z)/p_model(z|x)) dz of the model distribution w.r.t. the true one p₀(z).

Suppose that the observations z_i are predicted by a model z_i = z_i(x), where x is a vector of model parameters. Then nonlinear least squares takes as estimates the parameter values that minimize the weighted Sum of Squared Error (SSE) cost function:

f(x) ≡ (1/2) Σ_i Δz_i(x)^T W_i Δz_i(x)    (2)

Here, Δz_i(x) ≡ z_i − z_i(x) is the feature prediction error and W_i is an arbitrary symmetric positive definite (SPD) weight matrix. Modulo normalization terms independent of x, the weighted SSE cost function coincides with the negative log likelihood for observations z_i perturbed by Gaussian noise of mean zero and covariance W_i⁻¹. Even for non-Gaussian noise with this mean and covariance, the Gauss-Markov theorem [37, 11] states that if the models z_i(x) are linear, least squares gives the Best Linear Unbiased Estimator (BLUE), where 'best' means minimum variance⁴. Any weighted least squares model can be converted to an unweighted one (W_i = 1) by pre-multiplying each Δz_i by a matrix L_i^T satisfying W_i = L_i L_i^T (e.g. a Cholesky factor of W_i); Δz̄_i = L_i^T Δz_i is called a standardized residual, and the resulting unweighted least squares problem min_x (1/2) Σ_i ‖Δz̄_i(x)‖² is said to be in standard form. One advantage of this is that optimization methods based on linear least squares solvers can be used in place of ones based on linear (normal) equation solvers, which allows ill-conditioned problems to be handled more stably (§B.2).

Another peculiarity of the SSE cost function is its indifference to the natural boundaries between the observations. If the observations z ≡ (z_1, ..., z_k) and their weight matrices W_i are
assembled into a compound block diagonal weight matrix W ≡ diag(W_1, ..., W_k), the weighted squared error can be written compactly as f(x) ≡ (1/2) Δz(x)^T W Δz(x) = (1/2) Σ_i Δz_i(x)^T W_i Δz_i(x). The general quadratic form of the SSE cost is preserved under such compounding, and also under arbitrary linear transformations of z.

For robustification, the observations are partitioned into groups z_i, each group being irreducible in the sense that it is accepted, down-weighted or rejected as a whole, independently of all the other groups. The partitions should reflect the types of blunders that occur. For example, if feature correspondence errors are the most common blunders, the two coordinates of a single image point would naturally form a group, as both would usually be invalidated by such a blunder, while no other image point would be affected. Even if one of the coordinates appeared to be correct, if the other were incorrect we would usually want to discard both for safety. On the other hand, in stereo problems, the four coordinates of each pair of corresponding image points might be a more natural grouping, as a point in one image is useless without its correspondent in the other one. Henceforth, when we say observation we mean an irreducible group of observations treated as a unit by the robustifying model. I.e., our observations need not be scalars, but they must be units, probabilistically independent of one another irrespective of whether they are inliers or outliers.
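A robustified per-group cost of the kind described here can be sketched as follows. The Cauchy-style robustifier ρ(s) = log(1 + s), the function name and the scale parameter are our own illustration; the survey develops the general theory rather than prescribing this particular choice:

```python
import numpy as np

def robust_group_cost(residual_groups, scale=1.0):
    """Sum of robustified costs, one contribution per irreducible group.

    Each group's squared norm enters a Cauchy-style robustifier
    rho(s) = log(1 + s), so an entire group (e.g. the two coordinates of
    one image point) is down-weighted together if it is an outlier.
    """
    total = 0.0
    for r in residual_groups:
        s = float(np.dot(r, r)) / scale**2   # squared standardized residual
        total += np.log1p(s)                 # rho(s) = log(1 + s)
    return total
```

Unlike the quadratic SSE cost, an outlying group inflates this cost only logarithmically, so a single blunder cannot dominate the fit; the group structure ensures that both coordinates of a bad correspondence are discounted together.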
3B SCIENTIFIC® PHYSICS

Coil for Hysteresis Curve equipment set 8496112
Instruction sheet 08/06/DM
Elwe Didactic GmbH ▪ Steinfelsstr. 6 ▪ 08248 Klingenthal ▪ Germany ▪ 3B Scientific GmbH ▪ Rudorffweg 8 ▪ 21031 Hamburg ▪ Germany ▪ Subject to technical amendments

1 Base plate
2 Holder for Hall probe
3 Coil
4 4-mm connecting sockets
5 Iron samples

1. Safety instructions
Caution! To prevent destruction of the coil by heat build-up:
• Do not exceed the maximum permissible current of 3 A DC.

2. Description
The Coil for Hysteresis Curve equipment set is used to record the hysteresis curves (magnetic flux density B as a function of the magnetic field strength H) of various ferromagnetic core materials. The cylindrical coil consists of 600 turns wound compactly in several layers on a base plate. Three different iron samples serve as the coil core. A holder on the base plate accommodates the field probe.

3. Technical data
Wire diameter: 1 mm
Number of turns: 600
Internal resistance: 1.5 Ω
Inductance without core: 3.5 mH
Dimensions: 200 x 145 x 65 mm³
Iron samples: 140 mm x 10 mm Ø
Materials: structural steel, hard metal, stainless iron
Total mass: approx. 950 g

4. Operation
The following equipment is additionally required to record the hysteresis curve:
1 AC/DC power supply 8521131
1 Teslameter 8533981
1 Axial/tangential field probe 8533997
1 Escola 10 multimeter 8531160
Experiment leads

• Connect the power supply, coil and ammeter in series (see Fig. 1).
• Insert the core into the coil.
• Fasten the field probe in the holder so that the tangential probe rests against the coil core.
• Switch on the power supply and slowly increase the coil current until the magnetic flux density B reaches its saturation value, making sure that the coil current does not exceed the 3 A limit.
• Determine the magnetic field strength H from the coil current I, the number of turns n and the coil length s:

H = n · I / s

• Read the magnetic flux density B from the teslameter.
• Return the current to zero, reverse the polarity at the coil and carry out the measurement in the negative current range.
• Plot the magnetic flux density as a function of the magnetic field strength.
• Repeat the experiment with the various iron samples.

Fig. 1: Setup for recording the hysteresis curve
Fig. 2: Example of a hysteresis curve
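The field-strength formula above is easy to script. This helper is our own illustration, not part of the manual; the number of turns n = 600 comes from the technical data, the coil length s must be measured on the actual setup, and the 3 A DC safety limit from the safety instructions is enforced before computing H:

```python
def field_strength(current_a, coil_length_m, turns=600):
    """Magnetic field strength H = n * I / s inside the coil, in A/m.

    turns defaults to the 600 windings stated in the technical data.
    coil_length_m (s) must be measured on the actual coil. Currents
    above the 3 A DC limit that protects the coil are rejected.
    """
    if abs(current_a) > 3.0:
        raise ValueError("coil current must not exceed 3 A DC")
    return turns * current_a / coil_length_m
```

For example, 2 A through a coil of length 0.1 m gives H = 600 · 2 / 0.1 = 12000 A/m; the corresponding B is then read from the teslameter for each step of the curve.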
British Machine Vision Conference

Progressive Probabilistic Hough Transform
J. Matas, C. Galambos and J. Kittler
CVSSP, University of Surrey, Guildford, Surrey GU2 5XH, United Kingdom
Centre for Machine Perception, Czech Technical University, Karlovo náměstí 13, 121 35 Praha, Czech Republic
g.matas@

Abstract

In the paper we present the Progressive Probabilistic Hough Transform (PPHT). Unlike the Probabilistic Hough Transform [4], where a Standard Hough Transform is performed on a pre-selected fraction of input points, PPHT minimises the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to reliably detect lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using a priori knowledge, as in the Probabilistic Hough Transform; it is a function of the inherent complexity of the data. The algorithm is ideally suited for real-time applications with a fixed amount of available processing time, since voting and line detection are interleaved. The most salient features are likely to be detected first. Experiments show PPHT has, in many circumstances, advantages over the Standard Hough Transform.

1 Introduction

The Hough Transform (HT) is a popular method for the extraction of geometric primitives. In literally hundreds of papers every aspect of the transform has been scrutinised (parameterisation, accumulator design, voting patterns, peak detection, to name but a few) [2]. The introduction of the randomised version of the Hough Transform is one of the most important recent developments [7], [4] in the field. The approach proposed in the paper falls in the 'probabilistic' (or Monte Carlo) class of Hough Transform algorithms, the objective of which is to minimise the proportion of points that are used in voting while maintaining false negative and false positive detection rates almost at the level achieved by the Standard Hough Transform [5]. Unlike the Randomised Hough Transform (RHT) class of algorithms (see [3] for
an overview of the Hough Transform), the Probabilistic Hough Transform (PHT) and the Standard Hough Transform (SHT) share the same one-to-many voting pattern and the representation of the accumulator array.

In the original paper on the Probabilistic Hough Transform [4], Kiryati et al. show that it is often possible to obtain results identical to SHT if only a fraction of input points is used in the voting process. In the first step of PHT a random subset of points is selected, and a Standard Hough Transform is performed on the subset. Successful experiments with poll sizes as low as 2% are presented. The poll size is a parameter critically influencing the PHT performance. The authors analyse the problem for the case of a single line immersed in noise. Unfortunately the derived formulae require a priori knowledge of the number of points belonging to the line, which is rare in practice. In [1] (Bergen and Schvaytzer) it is shown that the Probabilistic Hough Transform can be formulated as a Monte Carlo estimation of the Hough Transform. The number of votes necessary to achieve a desired error rate is derived using the theory of Monte Carlo evaluation. Nevertheless, the poll size remains independent of the data and is based on a priori knowledge¹. If little is known about the detected objects, a conservative approach (a much larger than necessary poll size) must be adopted, diminishing the computational advantage of the Probabilistic Hough Transform.

The need to pre-select a poll size can be bypassed by an adaptive scheme [9, 8, 6]. In the Adaptive Probabilistic Hough Transform the termination of voting is based on monitoring the polling process. The criterion suggested by Yla-Jaaski and Kiryati [9] is based on a measure of stability of the ranks of the highest peaks. Shaked et al. [6] propose a sequential rule. Both methods formulate their goal so as to "obtain the same maximum as if the Hough Transform had been accumulated and analysed". In our opinion, such a formulation is unrealistic since in almost all applications the
detection of multiple instances of lines (circles, etc.) is required, whereas the given stopping rule depends on the prominence of the most significant feature only. For instance, if the input contains any number of lines of equal length, it will not be possible to detect a stable maximum regardless of the percentage of input points used for voting.

¹ Perhaps it is more appropriate to speak about assumptions rather than knowledge.

In the paper we present a new form of an Adaptive Probabilistic Hough Transform. Unlike the above-mentioned work, we attempt to minimise the amount of computation needed to detect lines (or in general geometric features) by exploiting the difference in the fraction of votes needed to reliably detect lines (features) with different numbers of supporting points. It is intuitively obvious (and it directly follows e.g. from Kiryati's analysis in [4]) that for lines with strong support (long lines) only a small fraction of the supporting points have to vote before the corresponding accumulator bin reaches a count that is non-accidental. For shorter lines a much higher proportion of supporting points must vote. For lines with support size close to the counts due to noise, a full transform must be performed.

2 The Algorithm

To minimise the computation requirements, the proposed algorithm, which we call the Progressive Probabilistic Hough Transform (PPHT), proceeds as follows. Repeatedly, a new random point is selected for voting. After casting a vote, we test the following hypothesis: 'could the count be due to random noise?', or more formally, 'having sampled some of the points, does any bin count exceed the threshold that would be expected from random noise?'. The test requires a single comparison with a threshold per bin update. Of course, the threshold changes as votes are cast. When a line is detected, the supporting points retract their votes. Remaining points supporting the line are removed from the set of points that have not yet voted and the
process continues with another random selection.

Such an algorithm possesses a number of attractive properties. Firstly, a feature is detected as soon as the contents of the accumulator allow a decision. The PPHT is an anytime algorithm: it can be interrupted and still output useful results, in particular the salient features that could be detected in the allowed time. The algorithm does not require a stopping rule. The computation stops when all the points have either voted or have been assigned to a feature. This does not mean that a full Hough Transform has been performed. Depending on the data, only a small fraction of the points may have voted, the rest having been removed as supporting evidence for the detected features. If constraints are given, e.g. in the form of a minimum line length, a stopping rule can be tested before selecting a point for voting.

Without a stopping rule, the PPHT and the Standard Hough Transform differ only in the number of false positives. False negatives – features missed with respect to the Standard Hough Transform – should not pose a problem, because if a feature is detectable by the SHT it will be detected by the PPHT at the latest when voting has finished, at which point the corresponding bin counts of the PPHT and the SHT are identical. PPHT and SHT performance differs, from the point of view of false positives, only where the assignment of points to lines is ambiguous, i.e. where points are in the neighbourhood of more than one line.

In our experiments a straightforward one-to-many voting scheme was used. The proposed method reduces computation by minimising the number of points that participate in the voting process, so common improvements that reduce the number of votes cast (e.g.
by exploiting gradient information, if available) do not interfere with the benefits of the method.

In setting the decision threshold we assume that all points are due to noise. This is a worst-case assumption, but if many lines are present in the image the assumption is almost valid, since only a fraction of the points belong to any single line.

Since every pixel votes into one bin for a given value of θ, we can focus on the analysis of the vote distribution along the ρ axis of the accumulator. If we adopt the assumption that a vote into any bin is equally likely², then the distribution of points in the bins is multinomial. If we used the multinomial distribution to make a detection decision, we would have to monitor all bins, which is computationally impractical. We therefore assume that the counts in the bins are independent, with the probability of a vote falling into a given bin being p. Again, this is clearly an approximation, since the counts in all bins must add up to m, the number of votes cast so far. Under these simplifying assumptions the number of votes in a single bin will follow a binomial distribution. Since the expected count mp is of the order of 1 and p is much less than 1, the binomial distribution is well approximated by a Poisson distribution.

² We are aware of the fact that this assumption is most likely not correct, since it corresponds to a spatially non-uniform distribution of noise points.

Here is an outline of the algorithm used:

1. Check the input image; if it is empty then finish.

Lines   SHT FP         SHT FN         PPHT FP        PPHT FN
 2       0.08 (0.27)    0.08 (0.27)    0.01 (0.10)    0.00 (0.00)
 4       0.36 (0.67)    0.36 (0.67)    0.12 (0.46)    0.06 (0.24)
 6       1.07 (1.22)    1.06 (1.20)    0.37 (0.81)    0.19 (0.44)
 8       2.56 (1.68)    2.56 (1.68)    1.38 (1.30)    0.90 (1.01)
10       4.00 (1.84)    3.98 (1.83)    2.28 (1.90)    1.52 (1.48)
12       6.44 (2.07)    6.40 (2.06)    3.79 (2.15)    2.78 (1.99)
14       8.13 (2.15)    8.03 (2.11)    5.94 (2.21)    4.48 (2.49)
16      10.90 (2.06)   10.86 (2.02)    8.35 (2.23)    6.66 (3.07)
18      13.36 (2.03)   13.32 (2.07)    9.86 (2.77)    8.16 (3.51)
20      16.12 (1.93)   16.04 (1.88)   12.74 (2.37)   11.56 (4.09)

Table 1: Effect of image clutter on false positives (FP) and negatives (FN). Averages and
standard deviations (σ) over 100 runs are shown.

2. Update the accumulator with a single pixel randomly selected from the input image.

3. Remove the pixel from the input image.

4. Check if the highest peak in the accumulator that was modified by the new pixel is higher than the threshold. If not, then go to step 1.

5. Look along a corridor specified by the peak in the accumulator, and find the longest segment of pixels that is either continuous or has gaps not exceeding a given threshold.

6. Remove the pixels in the segment from the input image.

7. Unvote from the accumulator all the pixels of the line that have previously voted.

8. If the line segment is longer than the minimum length, add it to the output list.

9. Go to step 1.

3 Performance evaluation of the PPHT

Performance evaluation of a line detection algorithm has many aspects. In synthetic images the correctness of the output can be measured by the error rate, i.e. the number of false positives (spurious detected features) and false negatives (mis-detected lines). In real images the definition of what should and should not be detected as a line is subjective or application-dependent. Since the PPHT is not designed for a specific application, we illustrate its performance on real imagery by comparing it with the Standard Hough Transform [5] (Section 3.4). The tests designed to assess the correctness (quality) of the PPHT are presented in Section 3.2. In the limited space we do not present any results on the correctness of pixel assignment or the precision of the recovered line parameters. We do not consider precision to be an important characteristic of the method: standard precision can be achieved by a (robust) least squares fit performed on the assigned points. The rest of the experiments test the computational efficiency of the PPHT (Section 3.3).

3.1 Experimental setup

To assess the relative merit of the PPHT, the quality of its output was compared to that of the Standard Hough Transform. The experiments were designed to minimise any differences between the SHT and PPHT implementations. The implementations were kept as simple as possible, with minimal post- and pre-processing (use of gradient information, connectivity, etc.). Most standard enhancements of the SHT are directly applicable to the PPHT and therefore do not change the relative merits of the methods. In the implementation, the standard (ρ, θ) line parameterisation was used. All experiments were carried out with the following settings. The resolution of the accumulator space was 0.01 radians for θ and 1 pixel for ρ. Pixels within a 3-pixel-wide corridor were assigned to a line. Since the Hough Transform detects peaks corresponding to infinite straight lines, but we want to detect finite line segments, a single post-processing step was implemented to separate collinear lines. From the pixels supporting a particular bin, the longest segment was chosen which had no gaps bigger than 6 pixels. The minimum accepted line length was 4 pixels. Unless stated otherwise, the value used for the PPHT-specific significance threshold parameter was 0.99999.

3.2 Correctness of line detection using the PPHT

The correctness of the PPHT output was measured by the error rate. The definition of what constitutes a false positive and a false negative is not as straightforward as it may first appear. This is particularly a problem where post-processing is used to obtain line segments from the infinite lines found by the Hough Transform. If edge pixels are only allowed to be assigned to a single line, incorrect assignment of pixels (e.g. at intersections of lines) can lead to a split of a correctly extracted line. A pair of collinear lines almost covering a model line is not, in our opinion, identical to a pair of false positives and a false negative.
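One way to make this false positive / false negative bookkeeping concrete is to score detections against ground truth by pixel coverage. The following is a minimal sketch of a direct reading of the coverage criteria used in this section, assuming each line is represented as a set of (x, y) pixel coordinates and an 80% coverage cut-off; the function name and the set representation are illustrative, not part of the original implementation, and the handling of split detections may differ from the authors' code:

```python
# Hypothetical scoring of detected lines against ground-truth lines by
# pixel coverage. Each line is a set of (x, y) pixel coordinates.

def score_detections(truth_lines, detected_lines, coverage=0.8):
    """Return (false_positives, false_negatives) under a coverage rule:
    a detection covering less than `coverage` of every ground-truth line
    is a false positive; a ground-truth line covered by less than
    `coverage` by the remaining detections is a false negative."""
    false_pos = [d for d in detected_lines
                 if all(len(d & gt) < coverage * len(gt)
                        for gt in truth_lines)]
    # Detections counted as false positives are excluded from coverage.
    kept = [d for d in detected_lines if d not in false_pos]
    covered = set().union(*kept) if kept else set()
    false_neg = sum(1 for gt in truth_lines
                    if len(gt & covered) < coverage * len(gt))
    return len(false_pos), false_neg
```

For example, a detection tracing 9 of the 10 pixels of a model line scores (0, 0), while one tracing only 5 of them scores as one false positive and one false negative under this literal reading.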
Similarly, it has to be decided at what level of assignment errors a feature is declared not detected. We decided to use the following criteria for determining the error statistics. False positives are detected lines that cover less than 80% of any single ground-truth line in the image. False negatives are those lines in the model which are covered by less than 80% by detected lines, excluding those counted as false positives.

Experiment 1. A hundred images containing a fixed number of randomly positioned lines were synthetically generated, the only source of noise being the digitisation of the lines themselves. Figure 4 shows a typical image used in the experiment; the image resolution was 256 pixels both horizontally and vertically, and the lines were a hundred pixels long. The number of lines varied between 2 and 20. Both the SHT and the PPHT were run on all images; the results of this experiment are summarised in Table 1. The false negatives and positives were counted as described at the beginning of this section.

The results in Table 1 show that the PPHT outperforms the Standard Hough Transform in all the cases. Both the number of false positives and the number of false negatives for the PPHT were lower than those given by the SHT. This is an unexpected result: the PPHT uses a fraction of the SHT votes, which should reduce, in a controlled way, the average correctness of the output. The test results show the advantage of clearing the accumulator space of clutter from explained pixels as soon as the hypothesis they are associated with becomes almost certain.

Significance   FP            FN            Votes
0.9            0.34 (0.73)   0.39 (0.82)    46.88 (9.70)
0.99           0.25 (0.70)   0.36 (0.76)    46.68 (9.80)
0.999          0.13 (0.34)   0.27 (0.68)    55.47 (8.89)
0.9999         0.17 (0.55)   0.25 (0.59)    57.27 (8.80)
0.99999        0.17 (0.43)   0.36 (0.70)    72.73 (12.15)
0.999999       0.11 (0.31)   0.25 (0.61)    84.63 (13.19)
0.9999999      0.16 (0.42)   0.39 (0.79)    93.06 (15.20)
0.99999999     0.07 (0.26)   0.33 (0.65)   108.62 (12.18)

Table 2: Influence of the significance level on error rate and run time. Averages and standard deviations over 100 tests are shown.

[Figure 1: Run time as a function of voting operations. The correlation coefficient is 0.998.]

[Figure 2: Votes needed to find a small number of lines.]

[Figure 3: Effect of line length on the number of voting operations.]

[Figure 4: Example synthetic image.]

Experiment 2. The significance level at which lines are accepted, a parameter of the PPHT, controls the trade-off between false positives and the speed of the method. Its influence is shown in Table 2. The experiments were again performed on a set of 100 synthetic images of 256x256 pixels, each with 5 lines of 100 pixels. An increase in the significance level for line detection leads to a decrease in the number of false positives, at the cost of an increase in the number of voting operations required, which, as will be shown in the next section, is proportional to run time.

3.3 Computational efficiency

We have argued that the advantage of the PPHT lies in its ability to minimise the number of voting operations while almost retaining the quality of the SHT output. The run time of the algorithm depends on implementation details and on the compiler, as well as on the machine it is run on. To measure speed we would prefer a number related directly to the algorithm itself rather than the measured run time. Since the activity which dominates the computation is voting and unvoting (vote retracting) in the Hough space, we use the number of such operations as a measure of the amount of computation required.

Experiment 3. To check the validity of this assumption, the run time and the number of voting operations were measured on a large set of test images. The results in Figure 1 show a very high correlation between the number of voting
operations and the program run time (correlation coefficient 0.998).

Experiment 4. Since lines are removed as they are found, the number of voting operations required to find a single line is nearly independent of its length. Figure 2 shows the number of voting operations required to find a line in a simple image with only a few lines and no noise. The experiment was repeated 50 times, and the error bars show one standard deviation. Once the line length exceeds a small threshold value, the number of votes required becomes constant. The actual number of voting operations needed to process an image is therefore effectively proportional to the number of lines in the image. This is particularly true if noise points are considered to be small lines in their own right, which is reasonable if the output comes from an edge detector. Figure 3 shows the fraction of the image points that voted as a function of the number of lines in an image. The slight positive gradient of these curves is due to the accumulator threshold rising as the number of votes the accumulator contains increases.

[Figure 5: Input edge image.]

[Figure 6: Results of the Standard Hough Transform.]

3.4 Experiments on real images

Finally, we looked at the results of processing real images. The starting point is the edge image shown in Figure 5. In Figure 6 the results of the SHT are shown. In Figures 7 and 8 the results of the PPHT algorithm are shown with low and high values of the significance threshold, respectively.
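The noise threshold against which bin counts are compared (step 4 of the algorithm) can be sketched under the Poisson approximation described earlier. This is a minimal sketch assuming a uniform noise model; the function name and interface are ours, not the paper's:

```python
import math

def noise_threshold(votes_cast, n_bins, significance=0.99999):
    """Smallest bin count unlikely, at the given significance level, to
    arise from noise alone, assuming each of `votes_cast` votes falls
    uniformly into one of `n_bins` bins (Poisson approximation)."""
    lam = votes_cast / n_bins          # expected noise count per bin
    # Find the smallest k with P(count <= k) >= significance by
    # accumulating the Poisson mass term by term.
    k, cdf, term = 0, 0.0, math.exp(-lam)
    while cdf + term < significance:   # cdf + term == P(count <= k)
        cdf += term
        k += 1
        term *= lam / k                # Poisson recurrence p(k) = p(k-1)*lam/k
    return k + 1                       # counts above k are significant
```

Raising the significance level raises the returned count threshold, which matches the observed behaviour: the higher the setting, the more votes must be cast before a line is accepted, and the more deterministic the order of detection becomes.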
The number of voting operations used to process each of these images is shown in Table 3. The main difference that can be seen between these images is due to the combination of two factors. The first is the greedy pixel allocation routine used by these programs; the second is that the order in which lines are found by the PPHT is much less well defined than in the SHT. This means short lines may be found before longer ones. This in itself is not incorrect, but because the greedy pixel allocation takes pixels from either end of the shorter line, it can interfere with the detection of other, longer lines which are found by the SHT. This problem is less serious with higher values of the significance threshold, because the number of votes required to find a line increases, and so the order of detection becomes more deterministic.

Method                Voting operations
SHT                   3120
PPHT (0.999999999)    1897
PPHT (0.99999)        1042

Table 3: Voting operations for the house image.

[Figure 7: Results of the PPHT, with a low value for the significance threshold (0.99999).]

[Figure 8: Results of the PPHT, with a high value for the significance threshold (0.999999999).]

The other problem inherent to the PPHT with a low false positive threshold can be seen in the lower left corner of Figure 7. Here false positive lines are found diagonally crossing the actual lines in the image. This problem can also be seen in the output of the SHT, in the window in the centre of Figure 6. The problem can be solved by exploiting gradient information, a common technique used to enhance the SHT.

4 Conclusions

In this paper we presented the Progressive Probabilistic Hough Transform algorithm. We showed that, unlike the Probabilistic Hough Transform, the proposed algorithm minimises the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to reliably detect lines with different support. The fraction of points used for voting depends on the inherent complexity of the data. We demonstrated on input images containing N lines that the number of votes (and hence the speed) of the PPHT is almost
independent of the line lengths (input points).

The post-processing used for the experiments presented in this paper is essentially the same as that used for the SHT. The ordering of the recovered lines in the PPHT is not as well defined as in the case of the SHT. Although this loss of ordering will lead to some undesirable results around line intersections when using standard SHT post-processing, it is easy to conceive effective schemes which would eradicate this problem. One such scheme could use edge pixel gradients to choose when to add a pixel to a line. Using gradient information when accumulating would also speed up the voting process, and serve to further reduce the clutter in the accumulator space.

References

[1] J. R. Bergen and H. Shvaytser. A probabilistic algorithm for computing Hough Transforms. Journal of Algorithms, 12(4):639–656, 1991.

[2] J. Illingworth and J. Kittler. A survey of the Hough transform. Computer Vision, Graphics and Image Processing, 44:87–116, 1988.

[3] H. Kalviainen, P. Hirvonen, L. Xu, and E. Oja. Probabilistic and nonprobabilistic Hough Transforms: overview and comparisons. Image and Vision Computing, 13(4):239–252, 1995.
[4] N. Kiryati, Y. Eldar, and A. M. Bruckstein. A probabilistic Hough Transform. Pattern Recognition, 24(4):303–316, 1991.

[5] J. Princen, J. Illingworth, and J. Kittler. Hypothesis testing: a framework for analyzing and optimizing Hough Transform performance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(4):329–341, 1994.

[6] D. Shaked, O. Yaron, and N. Kiryati. Deriving stopping rules for the Probabilistic Hough Transform by sequential analysis. Computer Vision and Image Understanding, 63(3):512–526, 1996.

[7] L. Xu and E. Oja. Randomized Hough Transform: basic mechanisms, algorithms, and computational complexities. CVGIP: Image Understanding, 57(2):131–154, 1993.

[8] A. Yla-Jaaski and N. Kiryati. Automatic termination rules for probabilistic Hough algorithms. In 8th Scandinavian Conference on Image Analysis, pages 121–128, 1993.

[9] A. Yla-Jaaski and N. Kiryati. Adaptive termination of voting in the probabilistic circular Hough transform. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(9):911–915, 1994.