How Far are We from Solving Pedestrian Detection?

Shanshan Zhang, Rodrigo Benenson, Mohamed Omran, Jan Hosang, and Bernt Schiele

Max Planck Institute for Informatics, Saarbrücken, Germany

firstname.lastname@mpi-inf.mpg.de

Abstract

Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the "perfect single frame detector". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterize both localization and background-versus-foreground errors.

To address localization errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitized training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance.

Beyond our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitized set of training and test annotations.¹

1. Introduction

Object detection has received great attention during recent years. Pedestrian detection is a canonical sub-problem that remains a popular topic of research due to its diverse applications.

Despite the extensive research on pedestrian detection, recent papers still show significant improvements, suggesting that a saturation point has not yet been reached. In this paper we analyse the gap between the state of the art and a newly created human baseline (section 3.1). The results indicate that there is still a tenfold improvement to be made before reaching human performance. We aim to investigate which factors will help close this gap.

We analyse failure cases of top performing pedestrian detectors and diagnose what should be changed to further push performance. We show several complementary analyses, including human inspection, automated analysis of problem

¹ If you are interested in our new annotations, please contact Shanshan Zhang.

Figure 1: Overview of the top results on the Caltech-USA pedestrian benchmark (CVPR 2015 snapshot). At ~95% recall, state-of-the-art detectors make ten times more errors than the human baseline.

cases (e.g. blur, contrast), and oracle experiments (section 3.2). Our results indicate that localization is an important source of high confidence false positives. We address this aspect by improving the training set alignment quality, both by manually sanitising the Caltech training annotations and via algorithmic means for the remaining training samples (sections 3.3 and 4.1).

To address background versus foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance (section 4.2).

1.1. Related work

In recent years, diverse efforts have been made to improve the performance of pedestrian detection. Following the success of the integral channel features detector (ICF) [6, 5], many variants [22, 23, 16, 18] were proposed and showed significant improvements. A recent review of pedestrian detection [3] concludes that improved features have been driving performance and are likely to continue doing so. It also shows that optical flow [19] and context information [17] are complementary to image features and can further boost

arXiv:1602.01237v1 [cs.CV] 3 Feb 2016

detection accuracy.

By fine-tuning a model pre-trained on external data, convolutional neural networks (convnets) have also reached state-of-the-art performance [15, 20].

Most recent papers focus on introducing novelty and better results, but neglect analysis of the resulting system. Some analysis work can be found for general object detection [1, 14]; in contrast, in the field of pedestrian detection this kind of analysis is rarely done. In 2008, [21] provided a failure analysis on the INRIA dataset, which is relatively small. The best method considered in the 2012 Caltech dataset survey [7] had 10× more false positives at 20% recall than the methods considered here, and no method had reached the 95% mark.

Since pedestrian detection has improved significantly in recent years, a deeper and more comprehensive analysis based on state-of-the-art detectors is valuable to provide a better understanding of where future efforts would best be invested.

1.2. Contributions

Our key contributions are as follows:

(a) We provide a detailed analysis of a state-of-the-art pedestrian detection system, providing insights into failure cases.

(b) We provide a human baseline for the Caltech Pedestrian Benchmark, as well as a sanitised version of the annotations to serve as new, high-quality ground truth for the training and test sets of the benchmark. The data will be public.

(c) We analyse how much the quality of training data affects the detector. More specifically, we quantify how much better alignment and fewer annotation mistakes can improve performance.

(d) Using the insights of the analysis, we explore variants of top performing methods: the filtered channel features detector [23] and the R-CNN detector [13, 15], and show improvements over the baselines.

2. Preliminaries

Before delving into our analysis, let us describe the datasets in use, their metrics, and our baseline detector.

2.1. Caltech-USA pedestrian detection benchmark

Amongst existing pedestrian datasets [4, 9, 8], KITTI [11] and Caltech-USA are currently the most popular ones. In this work we focus on the Caltech-USA benchmark [7], which consists of 2.5 hours of 30 Hz video recorded from a vehicle traversing the streets of Los Angeles, USA. The video annotations amount to a total of 350 000 bounding boxes covering ~2300 unique pedestrians. Detection methods are evaluated on a test set consisting of 4024 frames. The provided evaluation toolbox generates plots

Filter type      MR^O_-2
ACF [5]          44.2
SCF [3]          34.8
LDCF [16]        24.8
RotatedFilters   19.2
Checkerboards    18.5

Table 1: The filter type determines the ICF method's quality.

Base detector     MR^O_-2   +Context   +Flow
Orig. 2Ped [17]   48        ~5 pp      /
Orig. SDt [19]    45        /          8 pp
SCF [3]           35        5 pp       4 pp
Checkerboards     19        ~0         1 pp

Table 2: Detection quality gain of adding context [17] and optical flow [19], as a function of the base detector.

for different subsets of the test set based on annotation size, occlusion level, and aspect ratio. The established procedure for training is to use every 30th video frame, which results in a total of 4250 frames with ~1600 pedestrian cut-outs. More recently, methods which can leverage more data for training have resorted to a finer sampling of the videos [16, 23], yielding up to 10× as much data for training as the standard "1×" setting.

MR^O, MR^N. In the standard Caltech evaluation [7] the miss rate (MR) is averaged over the low precision range of [10^-2, 10^0] FPPI. This metric does not reflect well improvements in localization errors (lowest FPPI range). Aiming for a more complete evaluation, we extend the evaluation FPPI range from the traditional [10^-2, 10^0] to [10^-4, 10^0]; we denote these MR^O_-2 and MR^O_-4. O stands for "original annotations". In section 3.3 we introduce new annotations, and mark evaluations done there as MR^N_-2 and MR^N_-4. We expect the MR_-4 metric to become more important as detectors get stronger.
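As a concrete reference, the log-average miss rate behind these MR numbers can be sketched as follows. This is a minimal sketch assuming the common choice of nine log-spaced reference FPPI points; the official Caltech toolbox may differ in its interpolation details.

```python
import math

def log_average_miss_rate(fppi, miss_rate, lo_exp=-2, hi_exp=0, n_points=9):
    """Log-average miss rate over [10**lo_exp, 10**hi_exp] FPPI.

    fppi and miss_rate describe a detector curve (miss rate decreases
    as FPPI grows). For each reference FPPI point we take the best miss
    rate achieved at or below that FPPI (1.0 if the curve never reaches
    it), then average in log space.
    """
    refs = [10 ** (lo_exp + i * (hi_exp - lo_exp) / (n_points - 1))
            for i in range(n_points)]
    logs = []
    for r in refs:
        mrs = [m for f, m in zip(fppi, miss_rate) if f <= r]
        logs.append(math.log(max(min(mrs) if mrs else 1.0, 1e-10)))
    return math.exp(sum(logs) / len(logs))
```

With lo_exp=-2 this plays the role of MR_-2; passing lo_exp=-4 extends the range towards the localization-sensitive low-FPPI region, as the MR_-4 metric does.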

2.2. Filtered channel features detector

For the analysis in this paper we consider all methods published on the Caltech Pedestrian benchmark up to the last major conference (CVPR 2015). As shown in figure 1, the best method at the time is Checkerboards, and most of the top performing methods belong to the same family.

The Checkerboards detector [23] is a generalization of the Integral Channel Features detector (ICF) [6], which filters the HOG+LUV feature channels before feeding them into a boosted decision forest.
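The channel-filtering step that distinguishes these ICF variants can be sketched as a plain 2D correlation. This is a hypothetical minimal version: real detectors compute HOG+LUV channels beforehand and use integral images for speed.

```python
def filter_channel(channel, filt):
    """Correlate a feature channel (2D list) with a small filter (2D list).

    In the ICF family, filtered variants (Checkerboards, LDCF,
    RotatedFilters) correlate each feature channel with a small filter
    bank; a boosted decision forest then reads single values out of the
    resulting response maps. Valid-mode correlation, no padding.
    """
    fh, fw = len(filt), len(filt[0])
    H, W = len(channel), len(channel[0])
    out = []
    for y in range(H - fh + 1):
        row = []
        for x in range(W - fw + 1):
            row.append(sum(channel[y + dy][x + dx] * filt[dy][dx]
                           for dy in range(fh) for dx in range(fw)))
        out.append(row)
    return out
```

For example, a 2×2 checkerboard-like filter [[1, -1], [-1, 1]] responds to local contrast patterns within a channel.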

We compare the performance of several detectors from the ICF family in table 1, where we can see a big improvement from 44.2% to 18.5% MR^O_-2 by introducing filters over the feature channels and optimizing the filter bank.

Current top performing convnet methods [15, 20] are sensitive to the underlying detection proposals, thus we first focus on the proposals by optimizing the filtered channel features detectors (more on convnets in section 4.2).

Rotated filters. For the experiments involving training new models (in section 4.1) we use our own re-implementation of Checkerboards [23], based on the LDCF [16] codebase. To improve the training time we decrease the number of filters from 61 in the original Checkerboards down to 9 filters. Our so-called RotatedFilters are a simplified version of LDCF, applied at three different scales (in the same spirit as SquaresChnFtrs (SCF) [3]). More details on the filters are given in the supplementary material. As shown in table 1, RotatedFilters are significantly better than the original LDCF, and only 1 pp (percent point) worse than Checkerboards, yet run 6× faster at train and test time.

Additional cues. The review [3] showed that context and optical flow information can help improve detections. However, as the detector quality improves (table 1) the returns obtained from these additional cues erode (table 2). Without re-engineering such cues, gains in detection must come from the core detector.

3. Analysing the state of the art

In this section we estimate a lower bound on the remaining progress available, analyse the mistakes of current pedestrian detectors, and propose new annotations to better measure future progress.

3.1. Are we reaching saturation?

Progress on pedestrian detection has shown no sign of slowing in recent years [23, 20, 3], despite the impressive gains already made. How much progress can still be expected on current benchmarks? To answer this question, we propose to use a human baseline as a lower bound. We asked domain experts to manually "detect" pedestrians in the Caltech-USA test set; machine detection algorithms should be able to at least reach human performance and, eventually, superhuman performance.

Human baseline protocol. To ensure a fair comparison with existing detectors, we focus on the single frame monocular detection setting. Frames are presented to annotators in random order, and without access to surrounding frames from the source videos. Annotators thus have to rely on pedestrian appearance and single-frame context rather than (long-term) motion cues.

The Caltech benchmark normalizes the aspect ratio of all detection boxes [7]. Thus our human annotations are done by drawing a line from the top of the head to the point between both feet. A bounding box is then automatically generated such that its centre coincides with the centre point of the manually-drawn axis, see the illustration in figure 2. This procedure ensures the box is well centred on the subject (which is hard to achieve when marking a bounding box directly).
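The box-generation step can be sketched as follows. The 0.41 width-to-height ratio is the value the Caltech benchmark reportedly standardizes boxes to; treat the exact constant here as an assumption.

```python
def box_from_line(x_head, y_head, x_feet, y_feet, aspect_ratio=0.41):
    """Generate a bounding box from a head-to-feet line.

    The box height equals the line length (persons are assumed roughly
    upright), the width follows a fixed aspect ratio (w = 0.41 * h,
    an assumption taken from the Caltech benchmark convention), and
    the box centre coincides with the line's centre point.
    Returns (x_min, y_min, width, height).
    """
    cx, cy = (x_head + x_feet) / 2.0, (y_head + y_feet) / 2.0
    h = ((x_feet - x_head) ** 2 + (y_feet - y_head) ** 2) ** 0.5
    w = aspect_ratio * h
    return (cx - w / 2.0, cy - h / 2.0, w, h)
```

A 100-pixel vertical line thus yields a 41×100 box centred on the line, matching the procedure illustrated in figure 2.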

To check consistency between the two annotators, we produced duplicate annotations for a subset of the test images (~10%), and evaluated these separately. With an Intersection over Union (IoU) ≥ 0.5 matching criterion, the results were identical up to a single bounding box.

Figure 2: Illustration of bounding box generation for the human baseline. The annotator only needs to draw a line from the top of the head to the central point between both feet; a tight bounding box is then automatically generated.

Conclusion. In figure 3, we compare our human baseline with other top performing methods on different subsets of the test data (varying height ranges and occlusion levels). We find that the human baseline widely outperforms state-of-the-art detectors in all settings², indicating that there is still room for improvement for automatic methods.

3.2. Failure analysis

Since there is room to grow for existing detectors, one might want to know: when do they fail? In this section we analyse the detection mistakes of Checkerboards, which obtains top performance on most subsets of the test set (see figure 3). Since most top methods of figure 1 are of the ICF family, we expect similar behaviour for them too. Methods using convnets with proposals based on ICF detectors will also be affected.

3.2.1 Error sources

There are two types of errors a detector can make: false positives (detections on background or poorly localized detections) and false negatives (low-scoring or missing pedestrian detections). In this analysis, we look into false positive and false negative detections at 0.1 false positives per image (FPPI, one false positive every ten images), and manually cluster them (one-to-one mapping) into visually distinctive groups. A total of 402 false positive and 148 false negative detections (missing recall) are categorized by error type.

False positives. After inspection, we end up with all false positives clustered into eleven categories, shown in figure 4a. These categories fall into three groups: localization, background, and annotation errors.

Background errors are the most common ones, mainly vertical structures (e.g. figure 5b), tree leaves, and traffic lights. This indicates that the detectors need to be extended with better vertical context, providing visibility over larger structures and a rough height estimate.

Localization errors are dominated by double detections

² Except for IoU ≥ 0.8. This is due to issues with the ground truth, discussed in section 3.3.

[Figure 3 plot: miss rate (0–100) per subset — Reasonable (IoU ≥ 0.5), Height > 80, Height in [50, 80], Height in [30, 50] — with bars for HumanBaseline, Checkerboards, and RotatedFilters.]

Figure 3: Detection quality (log-average miss rate) for different test set subsets. Each group shows the human baseline, the Checkerboards [23] and RotatedFilters detectors, as well as the next top three (unspecified) methods (different for each setting). The corresponding curves are provided in the supplementary material.

(high scoring detections covering the same pedestrian, e.g. figure 5a). This indicates that improved detectors need to have more localized responses (peakier score maps) and/or a different non-maxima suppression strategy. In sections 3.3 and 4.1 we explore how to improve the detector localization.
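To illustrate why double detections survive, here is a minimal greedy non-maximum suppression sketch over (score, box) pairs: two detections on the same pedestrian are both kept whenever their mutual IoU falls below the suppression threshold, which is exactly the failure mode discussed above.

```python
def iou(a, b):
    # boxes as (x_min, y_min, x_max, y_max)
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(detections, thresh=0.5):
    """Greedy NMS: keep a detection only if it overlaps no
    already-kept, higher-scoring detection by IoU >= thresh."""
    keep = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, kb) < thresh for _, kb in keep):
            keep.append((score, box))
    return keep
```

A slightly shifted duplicate with IoU below the threshold slips through as a high-scoring false positive; peakier score maps or a different suppression strategy would prevent this.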

The annotation errors are mainly missing ignore regions, and a few missing person annotations. In section 3.3 we revisit the Caltech annotations.

False negatives. Our clustering results in figure 4b show the well-known difficulty of detecting small and occluded objects. We hypothesise that low-scoring side-view persons and cyclists may be due to a dataset bias, i.e. these cases are under-represented in the training set (most persons are non-cyclists walking on the side-walk, parallel to the car). Augmenting the training set with external images for these cases might be an effective strategy.

To better understand the issue with small pedestrians, we measure size, blur, and contrast for each (true or false) detection. We observed that small persons are commonly saturated (over- or under-exposed) and blurry, and thus hypothesised that this might be an underlying factor for weak detection (other than simply having fewer pixels to make the decision). Our results indicate however that this is not the case. As figure 4c illustrates, there seems to be no correlation between low detection score and low contrast. This also holds for the blur case; detailed plots are in the supplementary material. We conclude that the small number of pixels is the true source of difficulty. Improving small object detection thus needs to rely on making proper use of all pixels available, both inside the window and in the surrounding context, as well as across time.

Conclusion. Our analysis shows that false positive errors have well-defined sources that can be specifically targeted with the strategies suggested above. A fraction of the false negatives are also addressable, albeit small and occluded pedestrians remain a (hard and) significant problem.

Figure 4: Error analysis of Checkerboards [23] on the test set. (a) False positive sources, grouped into localization, background, and annotation errors. (b) False negative sources. (c) Contrast versus detection score.

Figure 5: Examples of analysed false positive cases (red box): (a) double detection. Additional ones in the supplementary material.

3.2.2 Oracle test cases

The analysis of section 3.2.1 focused on error counts. For area-under-the-curve metrics, such as the ones used in Caltech, high-scoring errors matter more than low-scoring ones. In this section we directly measure the impact of localization and background-vs-foreground errors on the detection quality metric (log-average miss rate) by using oracle test cases.

In the oracle case for localization, all false positives that overlap with ground truth are ignored for evaluation. In the oracle tests for background-vs-foreground, all false positives that do not overlap with ground truth are ignored.
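The two oracle filters can be sketched as follows. The exact overlap criterion is not stated in the text, so any positive IoU is assumed here to count as overlap.

```python
def iou(a, b):
    # boxes as (x_min, y_min, x_max, y_max)
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def oracle_filter(fps, gt_boxes, oracle):
    """Return the false positives the given oracle still counts.

    oracle='localization': FPs overlapping ground truth (mislocalized
    detections) are ignored, leaving pure background errors.
    oracle='background': the converse, leaving localization errors.
    """
    overlaps = [any(iou(fp, g) > 0 for g in gt_boxes) for fp in fps]
    if oracle == 'localization':
        return [fp for fp, ov in zip(fps, overlaps) if not ov]
    return [fp for fp, ov in zip(fps, overlaps) if ov]
```

Re-scoring the evaluation on each filtered set separates the two miss-rate contributions, as in figure 6a.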

Figure 6a shows that fixing localization mistakes improves performance in the low FPPI region, while fixing background mistakes improves results in the high FPPI region. Fixing both types of mistakes results in zero errors, even though this is not immediately visible due to the double log plot.

In figure 6b we show the gains to be obtained in MR^O_-4 terms by fixing localization or background issues. When comparing the eight top performing methods we find that most methods would boost performance significantly by fixing either problem. Note that due to the log-log nature of the numbers, the localization and background deltas do not add up to the total miss rate.

Conclusion. For most top performing methods, localization and background-vs-foreground errors have equal impact on the detection quality. They are equally important.

3.3. Improved Caltech-USA annotations

When evaluating our human baseline (and other methods) with a strict IoU ≥ 0.8 we notice in figure 3 that the performance drops. The original annotation protocol is based on interpolating sparse annotations across multiple frames [7], and these sparse annotations are not necessarily located on the evaluated frames. After close inspection we notice that this interpolation generates a systematic offset in the annotations. Humans walk with a natural up-and-down oscillation that is not modelled by the linear interpolation used, thus most frames have shifted bounding box annotations. This effect is not noticeable when using the forgiving IoU ≥ 0.5; however, such noise in the annotations is a hurdle when aiming to improve object localization.

(a) Original and two oracle miss rate curves (versus false positives per image) for the Checkerboards detector. Legend indicates MR^O_-2 (MR^O_-4): 18.47 (33.20)% Checkerboards; 15.94 (25.49)% Checkerboards (localization oracle); 11.92 (26.17)% Checkerboards (background oracle).

(b) Comparison of miss rate gain (ΔMR^O_-4) for top performing methods.

Figure 6: Oracle case evaluation over the Caltech test set. Both localization and background-versus-foreground show important room for improvement.

Figure 7: Examples of errors in original annotations: (a) false annotations; (b) poor alignment. New annotations in green, original ones in red.

These localization issues, together with the annotation errors detected in section 3.2.1, motivated us to create a new set of improved annotations for the Caltech pedestrians dataset. Our aim is two-fold: on one side we want to provide a more accurate evaluation of the state of the art, in particular an evaluation suitable to close the "last 20%" of the problem. On the other side, we want to have training annotations and evaluate how much improved annotations lead to better detections. We evaluate this second aspect in section 4.1.

New annotation protocol. Our human baseline focused on a fair comparison with single frame methods. Our new annotations are done both on the test and training 1× sets, and focus on high quality. The annotators are allowed to look at the full video to decide if a person is present or not. They are requested to mark ignore regions in areas covering crowds, human shapes that are not persons (posters, statues, etc.), and in areas that could not be decided as certainly not containing a person. Each person annotation is done by drawing a line from the top of the head to the point between both feet, the same as for the human baseline. The annotators must hallucinate head and feet if these are not visible. When the person is not fully visible, they must also annotate a rectangle around the largest visible region. This allows estimating the occlusion level in a similar fashion to the original Caltech annotations. The new annotations do share some bounding boxes with the human baseline (when no correction was needed), thus the human baseline cannot be used to do analysis across different IoU thresholds over the new test set.

In summary, our new annotations differ from the human baseline in the following aspects: both training and test sets are annotated; ignore regions and occlusions are also annotated; full video data is used for decisions; and multiple revisions of the same image are allowed.

After creating a full independent set of annotations, we consolidated the new annotations by cross-validating against the old annotations. Any correct old annotation not accounted for in the new set was added too.

Our new annotations correct several types of errors in the existing annotations, such as misalignments (figure 7b), missing annotations (false negatives), false annotations (false positives, figure 7a), and the inconsistent use of "ignore" regions. Our new annotations will be publicly available. Additional examples of "original versus new annotations" are provided in the supplementary material, as well as visualization software to inspect them frame by frame.

Better alignment. In table 3 we show quantitative evidence that our new annotations are more precisely localized than the original ones. We summarize the alignment quality of a detector via the median IoU between true positive detections and a given set of annotations. When evaluating with the original annotations ("median IoU_O" column in table 3), only the model trained with original annotations has good localization. However, when evaluating with the new annotations ("median IoU_N" column), both the model trained on INRIA data and the one trained on the new annotations reach high localization accuracy. This indicates that our new annotations are indeed better aligned, just as INRIA annotations are better aligned than Caltech.

Detailed IoU curves for multiple detectors are provided in the supplementary material. Section 4.1 describes the RotatedFilters-New10× entry.

4. Improving the state of the art

In this section we leverage the insights of the analysis to improve the localization and background-versus-foreground discrimination of our baseline detector.

Detector         Training data   Median IoU_O   Median IoU_N
Roerei [2]       INRIA           0.76           0.84
RotatedFilters   Orig. 10×       0.80           0.77
RotatedFilters   New 10×         0.76           0.85

Table 3: Median IoU of true positives for detectors trained on different data, evaluated on original and new Caltech test annotations. Models trained on INRIA align well with our new annotations, confirming that they are more precise than previous ones. Curves for other detectors in the supplement.

Detector         Anno. variant   MR^O_-2   MR^N_-2
ACF              Original        36.90     40.97
ACF              Pruned          36.41     35.62
ACF              New             41.29     34.33
RotatedFilters   Original        28.63     33.03
RotatedFilters   Pruned          23.87     25.91
RotatedFilters   New             31.65     25.74

Table 4: Effects of different training annotations on detection quality on the validation set (1× training set). Italic numbers (in the original layout) have matching training and test sets. Both detectors improve on the original annotations when using the "pruned" variant (see §4.1).

4.1. Impact of training annotations

With new annotations at hand, we want to understand the impact of annotation quality on detection quality. We train ACF [5] and RotatedFilters models (introduced in section 2.2) using different training sets and evaluate on both original and new annotations (i.e. MR^O_-2, MR^O_-4 and MR^N_-2, MR^N_-4). Note that both detectors are trained via boosting and are thus inherently sensitive to annotation noise.

Pruning benefits. Table 4 shows results when training with original, new, and pruned annotations (using a 5/6 + 1/6 training and validation split of the full training set). As expected, models trained and tested on the same annotation variant perform better than models trained and tested on different annotations. To better understand what the new annotations bring to the table, we build a hybrid set of annotations. Pruned annotations are a mid-point that allows decoupling the effects of removing errors and improving alignment. Pruned annotations are generated by matching new and original annotations (IoU ≥ 0.5), marking as ignore region any original annotation absent from the new ones, and adding any new annotation absent from the original ones.
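The pruning procedure just described can be sketched as follows. Greedy per-box matching is an assumption (the text only specifies the IoU ≥ 0.5 criterion); boxes are (x_min, y_min, x_max, y_max).

```python
def build_pruned(original, new, iou_thresh=0.5):
    """Build the 'pruned' hybrid annotation set.

    Original boxes matched (IoU >= iou_thresh) to a new box are kept;
    unmatched original boxes become ignore regions; new boxes with no
    original match are added. Returns (boxes, ignore_regions).
    """
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        ua = ((a[2] - a[0]) * (a[3] - a[1])
              + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / ua if ua else 0.0

    kept, ignore, matched = [], [], set()
    for o in original:
        best_j, best_iou = None, 0.0
        for j, n in enumerate(new):
            v = iou(o, n)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j is not None and best_iou >= iou_thresh:
            kept.append(o)        # matched: keep the original box as-is
            matched.add(best_j)
        else:
            ignore.append(o)      # unmatched original -> ignore region
    added = [n for j, n in enumerate(new) if j not in matched]
    return kept + added, ignore
```

Because matched boxes keep their original coordinates, this variant removes annotation errors without importing the improved alignment, which is exactly what makes it a useful mid-point in table 4.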

From original to pruned annotations the main change is removing annotation errors; from pruned to new, the main change is better alignment. From table 4, both ACF and RotatedFilters benefit from removing annotation errors, even in MR^O_-2. This indicates that our new training set

Figure 8: Examples of automatically aligned ground truth annotations. Left/right → before/after alignment.

1× data   10× data aligned with   MR^O_-2 (MR^O_-4)   MR^N_-2 (MR^N_-4)
Orig.     Ø                       19.20 (34.28)       17.22 (31.65)
Orig.     Orig. 10×               19.16 (32.28)       15.94 (29.33)
Orig.     New 1/2×                16.97 (28.01)       14.54 (25.06)
New       New 1×                  16.77 (29.76)       12.96 (22.20)

Table 5: Detection quality of RotatedFilters on the test set when using different aligned training sets. All models trained with Caltech 10×, composed of different 1× + 9× combinations.

is better sanitized than the original one. We see in MR^N_-2 that the stronger detector benefits more from better data, and that the largest gain in detection quality comes from removing annotation errors.

Alignment benefits. The detectors of the ICF family benefit from training with increased training data [16, 23]: using 10× data is better than 1× (see section 2.1). To leverage the remaining 9× of data using the new 1× annotations, we train a model over the new annotations and use this model to re-align the original annotations over the 9× portion. Because the new annotations are better aligned, we expect this model to be able to recover slight position and scale errors in the original annotations. Figure 8 shows example results of this process; see the supplementary material for details.

Table 5 reports results using the automatic alignment process, and a few degraded cases: using the original 10×, self-aligning the original 10× using a model trained over the original 10×, and aligning the original 10× using only a fraction of the new annotations (without replacing the 1× portion). The results indicate that using a detector model to improve overall data alignment is indeed effective, and that better aligned training data leads to better detection quality (both in MR^O and MR^N). This is in line with the analysis of section 3.2. Already using a model trained on 1/2 of the new annotations for alignment leads to a stronger model than obtained when using the original annotations.
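A minimal sketch of such model-driven re-alignment follows. The search grid and the scoring interface are assumptions (the paper's actual procedure is detailed only in its supplement): the idea is merely to scan small translations and scale changes around each original box and keep the candidate the better-trained detector scores highest.

```python
def realign(box, score_fn, shifts=(-2, -1, 0, 1, 2), scales=(0.95, 1.0, 1.05)):
    """Re-align one (x, y, w, h) annotation using a detector model.

    score_fn is a hypothetical detector-confidence function over boxes.
    Candidates are generated by shifting the box centre and rescaling
    its size; the highest-scoring candidate replaces the annotation.
    """
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    best, best_score = box, float('-inf')
    for dx in shifts:
        for dy in shifts:
            for s in scales:
                nw, nh = w * s, h * s
                cand = (cx + dx - nw / 2.0, cy + dy - nh / 2.0, nw, nh)
                sc = score_fn(cand)
                if sc > best_score:
                    best, best_score = cand, sc
    return best
```

Run over the 9× portion with a model trained on the well-aligned 1× annotations, this kind of search recovers slight position and scale errors, as in figure 8.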

We name the RotatedFilters model trained using the new annotations and the aligned 9× data RotatedFilters-New10×. This model also reaches a high median true positive IoU in table 3, indicating that it indeed obtains more precise detections at test time.

Conclusion. Using high quality annotations for training improves the overall detection quality, thanks both to improved alignment and to reduced annotation errors.

4.2. Convnets for pedestrian detection

The results of section 3.2 indicate that there is room for improvement by focusing on the core background versus foreground discrimination task (the "classification part of object detection"). Recent work [15, 20] showed competitive performance with convolutional neural networks (convnets) for pedestrian detection. We include convnets in our analysis, and explore to what extent performance is driven by the quality of the detection proposals.

AlexNet and VGG. We consider two convnets: 1) the AlexNet from [15], and 2) the VGG16 model from [12]. Both are pre-trained on ImageNet and fine-tuned over Caltech 10× (original annotations) using SquaresChnFtrs proposals. Both networks are based on open source code, and both are instances of the R-CNN framework [13]. Albeit their training/test time architectures are slightly different (R-CNN versus Fast R-CNN), we expect the result differences to be dominated by their respective discriminative power (VGG16 improves 8 pp in mAP over AlexNet on the Pascal detection task [13]).

Table 6 shows that as we improve the quality of the detection proposals, AlexNet fails to provide a consistent gain, eventually worsening the results of our ICF detectors (a similar observation was made in [15]). Similarly, VGG provides large gains for weaker proposals, but as the proposals improve, the gain from the convnet re-scoring eventually stalls.

After closer inspection of the resulting curves (see supplementary material), we notice that both AlexNet and VGG push background instances to lower scores, and at the same time generate a large number of high-scoring false positives. The ICF detectors are able to provide high recall proposals, where false positives around the objects have low scores (see [15, supp. material, fig. 9]); however, convnets have difficulties giving low scores to these windows surrounding the true positives. In other words, despite their fine-tuning, the convnet score maps are "blurrier" than the proposal ones. We hypothesise this is an intrinsic limitation of the AlexNet and VGG architectures, due to their internal feature pooling. Obtaining "peakier" responses from a convnet will most likely require rather different architectures, possibly more similar to the ones used for semantic labelling or boundary estimation tasks, which require pixel-accurate output.

Fortunately, we can compensate for the lack of spatial resolution in the convnet scoring by using bounding box regression. Adding bounding box regression over VGG, and applying a second round of non-maximum suppression (first NMS on the proposals, second on the regressed boxes), has

Table 6: Detection quality of convnets with different proposals. Grey numbers indicate worse results than the input proposals. All numbers are MR^N_-2 on the Caltech test set.

Figure 9: Oracle case analysis of proposals + convnets (after the second NMS). Miss rate gain, ΔMR^O_-4. The convnet significantly improves background errors, while slightly increasing localization ones.

the effect of "contracting the score maps": neighbouring proposals that before generated multiple strong false positives now collapse into a single high-scoring detection. We use the usual IoU ≥ 0.5 merging criterion for the second NMS. The last column of table 6 shows that bounding box regression + NMS is effective at providing an additional gain over the input proposals, even for our best detector RotatedFilters-New10×. On the original annotations, RotatedFilters-New10× + VGG reaches 14.7% MR^O_-2, which improves over [15, 20].
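The regression step applied to each proposal before the second NMS can be sketched with the standard R-CNN delta parameterization (that this exact parameterization is used in the paper's pipeline is an assumption; it is the common choice in the R-CNN framework the models are built on):

```python
import math

def apply_deltas(box, deltas):
    """Apply R-CNN-style bounding box regression deltas.

    box is (x_min, y_min, x_max, y_max); deltas = (dx, dy, dw, dh)
    are centre offsets proportional to the box size plus log-space
    width/height scalings, as predicted by the regression head.
    """
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + w / 2.0, y1 + h / 2.0
    dx, dy, dw, dh = deltas
    cx, cy = cx + dx * w, cy + dy * h
    w, h = w * math.exp(dw), h * math.exp(dh)
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)
```

Because neighbouring proposals regress towards the same pedestrian, their output boxes overlap far more than the inputs did, so a second IoU ≥ 0.5 NMS pass merges them into a single detection.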

Figure 9 repeats the oracle tests of section 3.2.2 over our convnet results. One can see that VGG significantly cuts down the background errors, while at the same time slightly increasing the localization errors.

Conclusion. Although convnets show strong results in image classification and general object detection, they seem to have limitations when producing well-localized detection scores around small objects. Bounding box regression (and NMS) is a key ingredient to side-step this limitation with current architectures. Even after using a strong convnet, background-versus-foreground remains the main source of

Figure 10: Detection quality (miss rate versus false positives per image) on the Caltech test set (reasonable subset), using the new annotations (MR^N_-2 (MR^N_-4)). Further results in the supplementary material.

Detector aspect            MR^O_-2 (MR^O_-4)   MR^N_-2 (MR^N_-4)
RotatedFilters             19.20 (34.28)       17.22 (31.65)
+ Alignment (§4.1)         16.97 (28.01)       14.54 (25.06)
+ New annotations (§4.1)   16.77 (29.76)       12.96 (22.20)
+ VGG (§4.2)               16.61 (34.79)       11.09 (25.99)
+ bbox reg & NMS           14.72 (30.86)       9.32 (21.72)
Checkerboards              18.47 (33.20)       16.13 (29.81)

Table 7: Step-by-step improvements from the previous best method Checkerboards to RotatedFilters-New10× + VGG.

errors;suggesting that there is still room for improvement on the raw classi?cation power of the neural network.

5. Summary

In this paper we have analysed the failures of a top-performing detector on the Caltech dataset. Via our human baseline we have quantified a lower bound on how much improvement is to be expected; there is a 10× gap still to be closed. To better measure the next steps in detection progress, we have provided new sanitized Caltech training and test set annotations.

Our failure analysis of a top-performing method has shown that most of its mistakes are well characterised. The error characteristics lead to specific suggestions on how to engineer better detectors (mentioned in section 3.2; e.g. data augmentation for person side views, or extending the detector receptive field in the vertical axis).

We have partially addressed some of the issues by measuring the impact of better annotations on localization accuracy, and by investigating the use of convnets to improve the background-versus-foreground discrimination. Our results indicate that significantly better alignment can be achieved with properly trained ICF detectors, and that, for pedestrian detection, convnets struggle with localization issues, which can be partially addressed via bounding box regression. Both on the original and new annotations, the described detection approach reaches top performance; see the progress in table 7.

We hope the insights and data provided in this work will guide the path to closing the gap between machines and humans in the pedestrian detection task.

References

[1] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In ECCV, 2014.

[2] R. Benenson, M. Mathias, T. Tuytelaars, and L. Van Gool. Seeking the strongest rigid detector. In CVPR, 2013.

[3] R. Benenson, M. Omran, J. Hosang, and B. Schiele. Ten years of pedestrian detection, what have we learned? In ECCV, CVRSUAD workshop, 2014.

[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.

[5] P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. PAMI, 2014.

[6] P. Dollár, Z. Tu, P. Perona, and S. Belongie. Integral channel features. In BMVC, 2009.

[7] P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: An evaluation of the state of the art. PAMI, 2012.

[8] M. Enzweiler and D. M. Gavrila. Monocular pedestrian detection: Survey and experiments. PAMI, 2009.

[9] A. Ess, B. Leibe, K. Schindler, and L. Van Gool. A mobile vision system for robust multi-person tracking. In CVPR, 2008.

[10] F. Crete-Roffet, T. Dolmiere, P. Ladret, and M. Nicolas. The blur effect: Perception and estimation with a new no-reference perceptual blur metric. In SPIE Electronic Imaging Symposium Conf. Human Vision and Electronic Imaging, May 2007.

[11] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, 2012.

[12] R. Girshick. Fast R-CNN. In ICCV, 2015.

[13] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.

[14] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In ECCV, 2012.

[15] J. Hosang, M. Omran, R. Benenson, and B. Schiele. Taking a deeper look at pedestrians. In CVPR, 2015.

[16] W. Nam, P. Dollár, and J. H. Han. Local decorrelation for improved detection. In NIPS, 2014.

[17] W. Ouyang and X. Wang. Single-pedestrian detection aided by multi-pedestrian detection. In CVPR, 2013.

[18] S. Paisitkriangkrai, C. Shen, and A. van den Hengel. Strengthening the effectiveness of pedestrian detection with spatially pooled features. In ECCV, 2014.

[19] D. Park, C. L. Zitnick, D. Ramanan, and P. Dollár. Exploring weak stabilization for motion feature extraction. In CVPR, 2013.

[20] Y. Tian, P. Luo, X. Wang, and X. Tang. Pedestrian detection aided by deep learning semantic tasks. In CVPR, 2015.

[21] C. Wojek and B. Schiele. A performance evaluation of single and multi-feature people detection. In DAGM, May 2008.

[22] S. Zhang, C. Bauckhage, and A. B. Cremers. Informed haar-like features improve pedestrian detection. In CVPR, 2014.

[23] S. Zhang, R. Benenson, and B. Schiele. Filtered channel features for pedestrian detection. In CVPR, 2015.

Supplementary material

A. Content

This supplementary material provides a more detailed view of some of the aspects presented in the main paper.

• Section B gives details of the RotatedFilters detector we used for our experiments (section 2.2 in the main paper).

• Section C provides the detailed curves behind the summary bar plots for different test set subsets (see figure 3 and section 3.1 in the main paper).

• Section D shows examples of each error type from the analysed detector, discusses the scale, blur, and contrast evaluations, and revisits the oracle-case experiments in more detail (section 3.2 in the main paper).

• Section E shows examples of how the new training annotations improve over the original ones (section 3.3 in the main paper).

• Section F discusses the impact of the new annotations on the evaluation of existing methods (MR ranking and recall-versus-IoU curves) (section 4.1 in the main paper).

• Section G shows the effects of automatically aligning 10× data with 1× data (section 4.1 in the main paper).

• Figure 26 summarises our final detection results on both the original and new annotations.

B. Rotated filters detector

For our experiments we re-implement the filtered channel feature Checkerboards detector [23] using the LDCF [16] codebase. The training procedure turns out to be slow due to the large number of filters (61 filters per channel). To accelerate the training and test procedures, we design a small set of 9 filters per channel that still provides good performance. We call our new filtered channel feature detector RotatedFilters (see figure 11d).

The rotated filters are inspired by the filterbank of LDCF (obtained by applying PCA to each feature channel). The first three LDCF filters of each feature channel are the constant filter and two step functions in orthogonal directions, with the particularity that the oriented gradient channels also have rotated filters (see figure 11b). Our rotated filters are stylized versions of the LDCF ones. The resulting RotatedFilters filterbank is somewhat intuitive, while the filters from Checkerboards are less systematic and less clear in their function (see figure 11c).

To integrate richer local information, we repeat each filter per channel over multiple scales, in the same spirit as SquaresChnFtrs [3] (figure 11a).

On the Caltech validation set, RotatedFilters obtains 31.6% MR^O_-2 using one scale (4×4), and 28.9% MR^O_-2 using three scales (4×4, 8×8, and 16×16). Therefore, we select this 3-scale structure in our experiments. On the test set, the performance of RotatedFilters is 19.2% MR^O_-2, i.e. less than a 1% loss with respect to Checkerboards, yet it is ~6× faster at feature computation.

In this paper, we use RotatedFilters for all experiments involving training a new model.

C. Results per test subset

Figure 12 contains the detailed curves behind figure 3 in the main paper (“subsets bar plot”). We can see that Checkerboards and RotatedFilters show good performance across all subsets. In the few cases where they are not top ranked (e.g. figures 12e and 12h), all methods exhibit low detection quality, and thus all have similarly poor scores.

Figure 12 shows that Checkerboards is not optimized for the most common case on the Caltech dataset, but instead shows good performance across a variety of situations; it is thus an interesting method to analyse.

(a) SquaresChnFtrs [3] filters

(b) Some of the LDCF [16] filters. Each column shows filters for one channel.

(c) Some examples of the 61 Checkerboards filters (from [23])

(d) Illustration of rotated filters applied on each feature channel

Figure 11: Comparison of filters between some filtered channels detector variants.

(a) Reasonable setting (IoU >= 0.5)
(b) Reasonable setting (IoU >= 0.8)
(c) Pedestrians larger than 80px in height
(d) Pedestrian height between 50px and 80px
(e) Pedestrian height between 30px and 50px
(f) Non-occluded pedestrians
(g) Pedestrians occluded by up to 35%
(h) Pedestrians occluded by more than 35% and less than 80%

Figure 12: Detection quality of top-performing methods on the experimental settings depicted in the “subsets bar plot” figure in the main paper.

D. Checkerboards error analysis

Error examples  Figures 17, 18, 19 and 20 show four examples of each error type considered in the analysis of the main paper (for both false positives and false negatives).

Blur and contrast measures  To enable our analysis regarding blur and contrast, we define two automated measures. We measure blur using the method from [10], while contrast is measured via the difference between the top and bottom quantiles of the grey-scale intensity of the pedestrian patch.

Figures 15 and 16 show pedestrians ranked by our blur and contrast measures. One can observe that our quantitative measures correlate well with the qualitative notions of blur and contrast.
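The quantile-spread contrast measure described above can be sketched in a few lines. The exact quantile levels are not specified in the text, so the 10th/90th percentiles used below are an assumption for illustration:

```python
import numpy as np

def contrast_measure(patch, low_q=10, high_q=90):
    # Contrast of a grey-scale pedestrian patch: difference between a top
    # and a bottom intensity quantile. The specific quantile levels
    # (10th/90th) are our assumption, not taken from the paper.
    grey = np.asarray(patch, dtype=float)
    return np.percentile(grey, high_q) - np.percentile(grey, low_q)
```

A flat patch yields zero contrast, while a patch spanning the full intensity range yields a value close to the range of intensities.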

Scale, blur, or contrast?  For false negatives, a major source of error is small scale, but we find that small pedestrians are often also of low contrast or blurred. In order to investigate the three factors separately, we observe the correlation between size/contrast/blur and score, as shown in figure 14. We can see that the overlap between false positives and true positives is equally distributed across different levels of contrast and blur, while for scale the overlap is quite dense at small scales. From this we conclude that small scale is the main factor negatively impacting detection quality, and that blur and contrast are uninformative measures for the detection task.

D.1. Oracle cases

In figure 21 we show the standard evaluation and oracle evaluation curves for state-of-the-art methods. For the localization oracle, false positives that overlap with the ground truth are not considered; for the background-versus-foreground oracle, false positives that do not overlap with the ground truth are not considered. Based on the curves, we have the following findings:

• All methods are significantly improved in each oracle evaluation.

• The ranking of all methods stays relatively stable in each oracle case.

• In terms of MR^O_-4, the improvement is comparable for the localization and background-versus-foreground oracle tests; the detection performance can be boosted by fixing either problem.
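The two oracles thus partition false positives by their overlap with the ground truth. A minimal sketch of this partition follows (assuming axis-aligned [x1, y1, x2, y2] boxes and at least one ground-truth box per image; the function name and structure are ours, not the paper's evaluation code):

```python
import numpy as np

def split_false_positives(fp_boxes, gt_boxes):
    # Localization errors: false positives that overlap some ground-truth
    # box (IoU > 0, removed by the localization oracle).
    # Background errors: false positives with no ground-truth overlap at
    # all (removed by the background-versus-foreground oracle).
    def iou_matrix(a, b):
        x1 = np.maximum(a[:, None, 0], b[None, :, 0])
        y1 = np.maximum(a[:, None, 1], b[None, :, 1])
        x2 = np.minimum(a[:, None, 2], b[None, :, 2])
        y2 = np.minimum(a[:, None, 3], b[None, :, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda m: (m[:, 2] - m[:, 0]) * (m[:, 3] - m[:, 1])
        return inter / (area(a)[:, None] + area(b)[None, :] - inter)

    overlaps = iou_matrix(fp_boxes, gt_boxes).max(axis=1)
    return fp_boxes[overlaps > 0], fp_boxes[overlaps == 0]
```

Since every false positive falls into exactly one of the two groups, removing both groups leaves only missing recall, as discussed in section D.2.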

We also show some examples of objects with similar scores in figure 13. In both the low-scoring and the high-scoring groups, we can see both pedestrians and background objects, which shows that the detector fails to rank foreground and background adequately.

(a) Low-scoring objects

(b) High-scoring objects

Figure 13: Failure cases of the Checkerboards detector [23]. Each group shows image patches with similar scores: some background objects receive high scores, while some persons receive low scores. We aim to understand through analysis when the detector fails.

D.2. Log-scale visual distortion

In the paper we show results for so-called oracle experiments that emulate the case in which we do not make one type of error: we remove either mistakes that touch annotated pedestrians (localization oracle) or mistakes that are located on background (background oracle).

It is important to note that these are the only two types of false positives. If we remove both types, the only mistakes that remain stem from missing recall, and the result would be a horizontal line with a very low miss rate.

Because of the double log scale in the Caltech performance plots, the curves make it look as if each oracle improves performance only slightly and the bulk of mistakes arises from a different type of mistake, which is not the case.

In figure 22 we illustrate how much double log scales distort areas. We often think of the average miss rate as the area under the curve, so we colour-code the false positives in the plots by their type: the plot shows the ratio between localization (blue) and background (green) mistakes at every point on the miss-rate curve, but also for the entire curve. Curves 22b and 22c show the same data, with the only difference that one shows localizations on the left and the other on the right. Due to the double log scale, the error type plotted on the left seems to dominate the metric.
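For reference, MR^O_-2 and MR^O_-4 denote the log-average miss rate over the FPPI ranges [10^-2, 10^0] and [10^-4, 10^0] respectively. The sketch below follows the common Caltech convention of averaging nine log-spaced samples of the curve in log space; the exact fallback for reference points the curve never reaches, and the monotonicity assumption, are ours:

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, low_exp=-2.0):
    # Sample the miss-rate curve at 9 FPPI points log-spaced in
    # [10^low_exp, 10^0] and average in log space (Caltech convention;
    # low_exp = -2 gives MR_-2, low_exp = -4 gives MR_-4).
    # Assumes miss_rate is monotonically non-increasing in fppi.
    ref = np.logspace(low_exp, 0.0, 9)
    samples = []
    for r in ref:
        below = miss_rate[fppi <= r]
        # Fallback when the curve never reaches such a low FPPI:
        # take the worst achieved miss rate (our assumption).
        samples.append(below.min() if below.size else miss_rate.max())
    return np.exp(np.mean(np.log(samples)))
```

Because the average is taken over log-spaced points, the low-FPPI (left) end of the curve contributes as much to the metric as the high-FPPI end, which is exactly the distortion discussed above.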

(a) Size versus score

(b) Contrast versus score

(c) Blur versus score

Figure 14: Correlation between size/contrast/blur and score.

Figure 15: Examples of images with different levels of blur (blur measure values ranging from 0.28 to 0.62).

Figure 16: Examples of images with different levels of contrast (contrast measure values ranging from 0.11 to 0.8).

(a) Double detection

(b) Body parts

(c) Too large bounding boxes

Figure 17: Example localization errors, a subset of false positives. False positives in red, original annotations in blue, ignore annotations in dashed blue, true positives in green, and ignored detections in dashed green (because they overlap with ignore annotations).

(a) Vertical structures

(b) Traffic lights

(c) Car parts

(d) Tree leaves

(e) Other background

Figure 18: Example background errors, a subset of false positives. False positives in red, original annotations in blue, ignore annotations in dashed blue, true positives in green, and ignored detections in dashed green (because they overlap with ignore annotations).

(a) Fake humans

(b) Missing annotations

(c) Confusing

Figure 19: Example annotation errors, a subset of false positives. False positives in red, original annotations in blue, ignore annotations in dashed blue, true positives in green, and ignored detections in dashed green (because they overlap with ignore annotations).

(a) Small scale

(b) Side view

(c) Cyclists

(d) Occlusion

(e) Annotation errors

(f) Others

Figure 20: Example errors for the different false-negative error types. False positives in red, original annotations in blue, ignore annotations in dashed blue, true positives in green, and ignored detections in dashed green (because they overlap with ignore annotations).

(a) Standard evaluation (reasonable subset)

(b) Localization oracle

(c) Background-versus-foreground oracle

Figure 21: Caltech test set error with the standard and oracle-case evaluations. Both localization and background-versus-foreground show important room for improvement. Both MR^O_-2 and MR^O_-4 are shown for each method at each evaluation.

(a) Original and two oracle curves for the Checkerboards detector.

(b) Localization FPs on the left.

(c) Background FPs on the left.

Figure 22: Checkerboards performance on the standard Caltech annotations, when considering oracle cases. Localization mistakes are blue, background mistakes green.
