Learning Binary Codes for High-Dimensional Data Using Bilinear Projections

Yunchao Gong
UNC Chapel Hill
yunchao@cs.unc.edu

Sanjiv Kumar

Google Research

sanjivk@google.com

Henry A. Rowley

Google Research

har@google.com

Svetlana Lazebnik

University of Illinois

slazebni@illinois.edu

Abstract

Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like ImageNet, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint.

1. Introduction

Today, image search and recognition are being performed on ever larger databases. For example, the ImageNet database [2] currently contains around 15M images and 20K categories. To achieve high retrieval and classification accuracy on such datasets, it is necessary to use extremely high-dimensional descriptors such as Fisher Vectors (FV) [23, 24, 28], Vectors of Locally Aggregated Descriptors (VLAD) [14], or Locality Constrained Linear Codes (LLC) [32]. Our goal is to convert such descriptors to binary codes in order to improve the efficiency of retrieval and classification without sacrificing accuracy.

There is a lot of recent work on learning similarity-preserving binary codes [6, 9, 10, 11, 15, 18, 19, 20, 21, 27, 30, 31, 33], but most of it is focused on relatively low-dimensional descriptors such as GIST [22], which are not sufficient for state-of-the-art applications. By contrast, we are interested in descriptors with tens or hundreds of thousands of dimensions. Perronnin et al. [24, 28] have found that to preserve discriminative power, binary codes for such descriptors must have a very large number of bits. Indeed, Figure 1(a) shows that compressing a 64K-dimensional descriptor to 100-1,000 bits (typical code sizes in most similarity-preserving coding papers) results in an unacceptable loss of retrieval accuracy compared to code sizes of 16K-64K bits. Thus, our goal is to convert very high-dimensional real vectors to long binary strings. To do so, we must overcome significant computational challenges.

[Figure 1 (panels: (a) NN search recall vs. code size; (b) storage requirements vs. code size). Comparison of our proposed binary coding method with the state-of-the-art ITQ method [6] for retrieval on 50K Flickr images (1K queries) with 64,000-dimensional VLAD descriptors. (a) Recall of 10NN from the original feature space at top 50 retrieved points. (b) Storage requirements for projection matrices needed for coding (note the logarithmic scale of the vertical axis). The dashed curve for ITQ is an estimate, since running ITQ for more than 4K bits is too expensive on our system. While ITQ has slightly higher accuracy than our method for shorter code sizes, larger code sizes are needed to get the highest absolute accuracy, and ITQ cannot scale to them due to memory limitations. By contrast, our method can be used to efficiently compute very high-dimensional codes that achieve comparable accuracy to the original descriptor.]

A common step of many binary coding methods is linear projection of the data (e.g., PCA or a Gaussian random projection). When the dimensionality of both the input feature vector and the resulting binary string are sufficiently high, the storage and computation requirements for even a random projection matrix become extreme (Figure 1(b)). For example, a 64K×64K random projection matrix takes roughly 16GB of memory and the projection step takes more than one second. Methods that require the projection matrix to be learned, such as Iterative Quantization (ITQ) [6], become infeasible even more quickly since their training time scales cubically in the number of bits.

There are a few works on compressing high-dimensional descriptors such as FV and VLAD. Perronnin et al. [24] have investigated a few basic methods including thresholding, Locality Sensitive Hashing (LSH) [1], and Spectral Hashing (SH) [33]. A more powerful method, Product Quantization (PQ) [13], has produced state-of-the-art results for compressing FV for large-scale image classification [28]. However, descriptors like FV and VLAD require a random rotation to balance their variance prior to PQ [14], and as explained above, rotation becomes quite expensive for high dimensions.

In this paper, we propose a bilinear method that takes advantage of the natural two-dimensional structure of descriptors such as FV, VLAD, and LLC and uses two smaller matrices to implicitly represent one big rotation or projection matrix. This method is inspired by bilinear models used for other applications [26, 29, 34]. We begin with a method based on random bilinear projections and then show how to efficiently learn the projections from data. We demonstrate the promise of our method, dubbed bilinear projection-based binary codes (BPBC), through experiments on two large-scale datasets and two descriptors, VLAD and LLC. For most scenarios we consider, BPBC produces little or no degradation in performance compared to the original continuous descriptors; furthermore, it matches the accuracy of PQ codes while having much lower running time and storage requirements for code generation.

2. Bilinear Binary Codes

Most high-dimensional descriptors have a natural matrix or tensor structure. For example, a HOG descriptor is a two-dimensional grid of histograms, and this structure has been exploited for object detection [26]. A Fisher Vector can be represented as a k×2l matrix, where k is the visual vocabulary size and l is the dimensionality of the local image features (the most common choice is SIFT with l = 128). VLAD, which can be seen as a simplified version of FV, can be represented as a k×l matrix. Finally, an LLC descriptor with s spatial bins can be represented as a k×s matrix.

Let x ∈ R^d denote our descriptor vector. Based on the structure and interpretation of the descriptor,¹ we reorganize it into a d1×d2 matrix with d = d1 d2:

$$x \in \mathbb{R}^{d_1 d_2 \times 1} \rightarrow X \in \mathbb{R}^{d_1 \times d_2}. \qquad (1)$$

We assume that each vector x ∈ R^d is zero-centered and has unit norm, as L2 normalization is a widely used preprocessing step that usually improves performance [25].
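As a concrete illustration, here is a minimal NumPy sketch of this preprocessing and matrix reorganization (the helper name and the choice of column-major order are our assumptions, chosen to match the vec(·) operator defined in Section 2.1):

```python
import numpy as np

def preprocess(x, d1, d2):
    """Zero-center and L2-normalize a descriptor x, then
    reorganize it into a d1 x d2 matrix (Eq. 1)."""
    x = x - x.mean()                      # zero-center
    x = x / (np.linalg.norm(x) + 1e-12)   # unit L2 norm
    # Column-major (Fortran) order, so X.flatten(order='F')
    # recovers x, matching the paper's vec() convention.
    return x.reshape((d1, d2), order='F')
```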

We will first introduce a randomized method to obtain d-bit bilinear codes in Section 2.1 and then explain how to learn data-dependent codes in Section 2.2. Learning of reduced-dimension codes will be discussed in Section 2.3.

¹We have also tried to randomly reorganize descriptors into matrices but found it to produce inferior performance.

2.1. Random Bilinear Binary Codes

To convert a descriptor x ∈ R^d to a d-dimensional binary string, we first consider the framework of [1, 6] that applies a random rotation R ∈ R^{d×d} to x:

$$H(x) = \mathrm{sgn}(R^T x). \qquad (2)$$

Since x can be represented as a matrix X ∈ R^{d1×d2}, to make rotation more efficient, we propose a bilinear formulation using two random orthogonal matrices R1 ∈ R^{d1×d1} and R2 ∈ R^{d2×d2}:

$$H(X) = \mathrm{vec}\left(\mathrm{sgn}(R_1^T X R_2)\right), \qquad (3)$$

where vec(·) denotes column-wise concatenation.
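For illustration, a minimal NumPy sketch of this random bilinear encoding follows (drawing the random orthogonal matrices via QR decomposition of Gaussian matrices is our assumption; the paper does not specify the sampling scheme):

```python
import numpy as np

def random_rotation(d, rng):
    """Random orthogonal d x d matrix (QR of a Gaussian matrix)."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q

def bilinear_code(X, R1, R2):
    """H(X) = vec(sgn(R1^T X R2)) from Eq. (3), as a +/-1 vector.
    (np.sign maps exact zeros to 0; for real-valued descriptors
    this almost never occurs.)"""
    return np.sign(R1.T @ X @ R2).flatten(order='F')

rng = np.random.default_rng(0)
d1, d2 = 128, 500                     # e.g., a 64,000-dim VLAD
R1 = random_rotation(d1, rng)
R2 = random_rotation(d2, rng)
X = rng.standard_normal((d1, d2))     # stand-in for a descriptor
b = bilinear_code(X, R1, R2)          # 64,000-bit code
```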

It is easy to show that applying a bilinear rotation to X ∈ R^{d1×d2} is equivalent to applying a d1d2 × d1d2 rotation to vec(X). This rotation is given by R̂ = R2 ⊗ R1, where ⊗ denotes the Kronecker product:

$$\mathrm{vec}(R_1^T X R_2) = (R_2^T \otimes R_1^T)\,\mathrm{vec}(X) = \hat{R}^T \mathrm{vec}(X),$$

which follows from the properties of the Kronecker product [16]. Another basic property of the Kronecker product is that if R1 and R2 are orthogonal, then R2 ⊗ R1 is orthogonal as well [16]. Thus, a bilinear rotation is simply a special case of a full rotation, such that the full rotation matrix R̂ can be reconstructed from two smaller matrices R1 and R2.
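This equivalence is easy to verify numerically; the sketch below checks both the vec identity and the orthogonality of R̂ = R2 ⊗ R1 at toy sizes (kept small so the full Kronecker product fits in memory):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 4, 3
R1, _ = np.linalg.qr(rng.standard_normal((d1, d1)))
R2, _ = np.linalg.qr(rng.standard_normal((d2, d2)))
X = rng.standard_normal((d1, d2))

lhs = (R1.T @ X @ R2).flatten(order='F')    # vec(R1^T X R2)
R_hat = np.kron(R2, R1)                     # R_hat = R2 (x) R1
rhs = R_hat.T @ X.flatten(order='F')        # R_hat^T vec(X)

assert np.allclose(lhs, rhs)                          # identity holds
assert np.allclose(R_hat.T @ R_hat, np.eye(d1 * d2))  # R_hat orthogonal
```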

While the degrees of freedom of our bilinear rotation are more restricted than those of a full rotation, the projection matrices are much smaller, and the projection speed is much faster. In terms of time complexity, performing a full rotation on x takes O((d1 d2)^2) time, while our approach is O(d1^2 d2 + d1 d2^2). In terms of space for projections, full rotation takes O((d1 d2)^2), and our approach only takes O(d1^2 + d2^2). For example, as will be shown in Section 3.4, for a 64K-dimensional vector, a full rotation will take roughly 16GB of RAM, while the bilinear rotations only take 1MB of RAM. The projection time for a full rotation is more than a second, vs. only 3ms for bilinear rotations.

2.2. Learning Bilinear Binary Codes

In this section, we present a method for learning the rotations R1 and R2 that is inspired by two-sided Procrustes analysis [29] and builds on our earlier work [5, 6, 7]. Following [6], we want to find a rotation R̂ such that the angle θi between a rotated feature vector R̂^T x_i = vec(R1^T X_i R2) and its binary encoding (geometrically, the nearest vertex of the binary hypercube), sgn(R̂^T x_i) = vec(sgn(R1^T X_i R2)), is minimized.

[Figure 2 (panels, all with axes visual codewords × SIFT dimensions: (a) original VLAD descriptor; (b) original binary code; (c) bilinearly rotated VLAD; (d) bilinearly rotated binary code). Visualization of the VLAD descriptor and resulting binary code (given by the sign function) before and after learned bilinear rotation. We only show the first 32 SIFT dimensions and visual codewords. Before the rotation, we can clearly see a block pattern, with many zero values. After the rotation, the descriptor and the binary code look more whitened.]

Given N training points, we want to maximize

$$\sum_{i=1}^{N} \cos(\theta_i) = \sum_{i=1}^{N} \frac{\mathrm{sgn}(\hat{R}^T x_i)^T}{\sqrt{d}} (\hat{R}^T x_i) \qquad (4)$$
$$= \sum_{i=1}^{N} \frac{\mathrm{vec}(\mathrm{sgn}(R_1^T X_i R_2))^T}{\sqrt{d}} \mathrm{vec}(R_1^T X_i R_2)$$
$$= \frac{1}{\sqrt{d}} \sum_{i=1}^{N} \mathrm{vec}(B_i)^T \mathrm{vec}(R_1^T X_i R_2)$$
$$= \frac{1}{\sqrt{d}} \sum_{i=1}^{N} \mathrm{tr}(B_i R_2^T X_i^T R_1), \qquad (5)$$

where B_i = sgn(R1^T X_i R2). Notice that (4) involves the large projection matrix R̂ ∈ R^{d×d}, direct optimization of which is challenging. However, after reformulation into bilinear form (5), the expression only involves the two small matrices R1 and R2. Letting B = {B_1, ..., B_N}, our objective function is as follows:

$$Q(\mathcal{B}, R_1, R_2) = \max_{\mathcal{B}, R_1, R_2} \sum_{i=1}^{N} \mathrm{tr}(B_i R_2^T X_i^T R_1) \qquad (6)$$
$$\text{s.t. } B_i \in \{-1, +1\}^{d_1 \times d_2}, \quad R_1^T R_1 = I, \quad R_2^T R_2 = I.$$

This optimization problem can be solved by block coordinate ascent by alternating between the different variables {B_1, ..., B_N}, R1, and R2. We describe the update steps for each variable below, assuming the others are fixed.

(S1) Update B_i. When R1 and R2 are fixed, we independently solve for each B_i by maximizing

$$Q(B_i) = \mathrm{tr}(B_i R_2^T X_i^T R_1) = \sum_{k=1}^{d_1} \sum_{l=1}^{d_2} B_i^{kl} V_i^{lk},$$

where V_i^{lk} denote the elements of V_i = R2^T X_i^T R1. Q(B_i) is maximized by B_i = sgn(V_i^T).

(S2) Update R1. Expanding (6) with R2 and B_i fixed, we have the following:

$$Q(R_1) = \sum_{i=1}^{N} \mathrm{tr}(B_i R_2^T X_i^T R_1) = \mathrm{tr}\left(\sum_{i=1}^{N} (B_i R_2^T X_i^T)\, R_1\right) = \mathrm{tr}(D_1 R_1),$$

where D_1 = Σ_{i=1}^{N} (B_i R2^T X_i^T). The above expression is maximized with the help of polar decomposition: R1 = V1 U1^T, where D_1 = U1 S1 V1^T is the SVD of D_1.

(S3) Update R2:

$$Q(R_2) = \sum_{i=1}^{N} \mathrm{tr}(B_i R_2^T X_i^T R_1) = \sum_{i=1}^{N} \mathrm{tr}(R_2^T X_i^T R_1 B_i) = \mathrm{tr}\left(R_2^T \sum_{i=1}^{N} (X_i^T R_1 B_i)\right) = \mathrm{tr}(R_2^T D_2),$$

where D_2 = Σ_{i=1}^{N} (X_i^T R1 B_i). Analogously to the update rule for R1, the update rule for R2 is R2 = U2 V2^T, where D_2 = U2 S2 V2^T is the SVD of D_2.

We cycle between these updates for several iterations to obtain a local maximum. The convergence of the above program is guaranteed in a finite number of iterations, as the optimal solution of each step is exactly obtained, each step is guaranteed not to decrease the objective function value, and the objective function is bounded from above. In our implementation, we initialize R1 and R2 by random rotations and use three iterations. We have not found significant improvement of performance by using more iterations. The time complexity of this program is O(N(d1^3 + d2^3)), where d1 and d2 are typically fairly small (e.g., d1 = 128, d2 = 500). Figure 2 visualizes the structure of a VLAD descriptor and the corresponding binary code before and after a learned bilinear rotation.

2.3. Learning with Dimensionality Reduction

The formulation of Section 2.2 is used to learn d-dimensional binary codes starting from d-dimensional descriptors. Now, to produce a code of size c = c1 × c2, where c1 < d1 and c2 < d2, we need two projection matrices R1 ∈ R^{d1×c1}, R2 ∈ R^{d2×c2} such that R1^T R1 = I and R2^T R2 = I. Each B_i is now a c1 × c2 binary variable. Consider the cosine of the angle between a lower-dimensional projected vector R̂^T x_i and its binary encoding sgn(R̂^T x_i):

$$\cos(\theta_i) = \frac{\mathrm{sgn}(\hat{R}^T x_i)^T}{\sqrt{c}} \cdot \frac{\hat{R}^T x_i}{\|\hat{R}^T x_i\|_2},$$

where R̂ ∈ R^{d1d2×c1c2} and R̂^T R̂ = I. This formulation differs from that of (4) in that it contains ‖R̂^T x_i‖_2 in the denominator, which makes the optimization difficult [5]. Instead, we follow [5] to define a relaxed objective function based on the sum of linear correlations:

$$Q(\mathcal{B}, R_1, R_2) = \sum_{i=1}^{N} \frac{\mathrm{sgn}(\hat{R}^T x_i)^T}{\sqrt{c}} (\hat{R}^T x_i).$$

The optimization framework for this objective is similar to that of Section 2.2. For the three alternating optimization steps, (S1) remains the same. For (S2) and (S3), we compute the SVD of D_1 and D_2 as U1 S1 V1^T and U2 S2 V2^T respectively, and set the two rotations to R1 = Ṽ1 U1^T and R2 = Ũ2 V2^T, where Ṽ1 consists of the top c1 singular vectors of V1 and Ũ2 consists of the top c2 singular vectors of U2. To initialize the optimization, we generate random orthogonal directions.
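For completeness, here is how the R1 update might look with rectangular projections, keeping only the leading singular directions via a thin SVD; this is a sketch under our reading of the update above, not the authors' code:

```python
import numpy as np

def update_R1_reduced(Xs, Bs, R2):
    """(S2) with a rectangular R1 (d1 x c1): B_i is c1 x c2 and
    R2 is d2 x c2, so D1 = sum_i B_i R2^T X_i^T has shape c1 x d1.
    A thin SVD keeps the top c1 directions, giving an R1 that
    satisfies R1^T R1 = I."""
    D1 = sum(B @ R2.T @ X.T for B, X in zip(Bs, Xs))
    U1, _, V1t = np.linalg.svd(D1, full_matrices=False)  # thin SVD
    return V1t.T @ U1.T                                  # d1 x c1
```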

2.4. Distance Computation for Binary Codes

At retrieval time, given a query descriptor, we need to compute its distance to every binary code in the database. The most popular measure of distance for binary codes is the Hamming distance. We compute it efficiently by putting bits in groups of 64 and performing an XOR and bit count (popcount). For improved accuracy, we use asymmetric distance, in which the database points are quantized but the query is not [3, 8, 13]. In this work, we adopt the lower-bounded asymmetric distance proposed in [3] to measure the distance between the query and binary codes. For a query x ∈ R^c and database points b ∈ {−1, +1}^c, the lower-bounded asymmetric distance is simply the L2 distance between x and b: d_a(x, b) = ‖x‖_2^2 + c − 2 x^T b. Since ‖x‖_2^2 is on the query side and c is fixed, in practice we only need to compute x^T b. We do this by putting bits in groups of 8 and constructing a 1×256 lookup table to make the dot-product computation more efficient.
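The following sketch shows both distances in NumPy (the paper packs bits into 64-bit words with a hardware popcount and accelerates x^T b with a 1×256 lookup table; for clarity this sketch uses byte packing and a direct dot product, so only the constants differ, not the logic):

```python
import numpy as np

def pack(code_pm1):
    """Pack a +/-1 code into a uint8 array, 8 bits per byte."""
    return np.packbits(code_pm1 > 0)

def hamming(p, q):
    """Symmetric distance (SD): XOR the packed codes, count set bits."""
    return int(np.unpackbits(np.bitwise_xor(p, q)).sum())

def asymmetric(x, b_pm1):
    """Lower-bounded asymmetric distance (ASD) of [3]:
    d_a(x, b) = ||x||^2 + c - 2 x^T b.  Only x^T b varies across
    the database, so ranking reduces to a dot product."""
    c = b_pm1.shape[0]
    return float(x @ x + c - 2.0 * (x @ b_pm1))
```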

3. Experiments

3.1. Datasets and Features

We test our proposed approach, bilinear projection-based binary codes (BPBC), on two large-scale image datasets and two feature types. The first one is the INRIA Holiday dataset with 1M Flickr distractors (Holiday+Flickr1M) [12]. There are 1,419 images in the Holiday dataset corresponding to 500 different scene instances, and each instance has three images on average. There is a set of 500 query images, and the remaining 919 images together with the 1M Flickr images are used as the database. We use the SIFT features of interest points provided by [14] and cluster them to 500 k-means centers. Then we represent each image by a 128×500 = 64,000-dimensional VLAD. The vectors are power-normalized (element-wise square-rooted) and L2-normalized as in [25].

The second dataset is the ILSVRC2010 subset of ImageNet [2], which contains 1.2M images and 1,000 classes. On this dataset, we use the publicly available SIFT features, which are densely extracted from the images at three different scales. We cluster the features into 200 centers and then aggregate them into VLAD vectors of 128×200 = 25,600 dimensions. These vectors are also power- and L2-normalized. In addition, we compute LLC features [32] on this dataset using a 5,000-dimensional visual codebook and a three-level spatial pyramid (21 spatial bins). The resulting features have 5,000×21 = 105,000 dimensions. Unlike VLAD descriptors, which are dense and have both positive and negative values, the LLC descriptors are nonnegative and sparse. For improved results, we zero-center and L2-normalize them.

3.2. Experimental Protocols

To learn binary codes using the methods of Sections 2.2 and 2.3, we randomly sample 20,000 training images from each dataset. We then set aside a number of query images that were not used for training and run nearest neighbor searches against all the other images in the dataset. For Holiday+Flickr1M, we sample the training images from the Flickr subset only and use the 500 predefined queries. For ILSVRC2010, we use 1,000 random queries. For each dataset, we evaluate two types of retrieval tasks:

1. Retrieval of ground-truth nearest neighbors from the original feature space. These are defined as the top ten Euclidean nearest neighbors for each query based on original descriptors. Our performance measure for this task is the recall of 10NN for different numbers of retrieved points [9, 13].

2. Retrieval of "relevant" or "semantically correct" neighbors. For Holiday+Flickr1M, these are defined as the images showing the same object instance (recall from Section 3.1 that each query has around three matches). The standard performance measure for this task on this dataset is mean average precision (mAP) [8, 12, 14]. For ILSVRC2010, we define the ground-truth "relevant" neighbors as the images sharing the same category label. In this case, there are very many matches for each query. Following [6, 18, 20], we report performance using precision at top k retrieved images (precision@k). A sketch of both evaluation measures follows this list.
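For reference, minimal implementations of the two measures might look as follows (function names and argument conventions are ours):

```python
import numpy as np

def recall_of_10nn(retrieved_ids, true_10nn_ids, k=50):
    """Task 1: fraction of the query's 10 ground-truth Euclidean
    NNs found among the top-k retrieved database points."""
    hits = set(retrieved_ids[:k]) & set(true_10nn_ids)
    return len(hits) / len(true_10nn_ids)

def precision_at_k(retrieved_labels, query_label, k=10):
    """Task 2: fraction of the top-k retrieved images that share
    the query's category label."""
    return float(np.mean(np.asarray(retrieved_labels[:k]) == query_label))
```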

In addition, in Section 3.8 we perform image classification experiments on the ILSVRC2010 dataset.

3.3. Baseline Methods

Our main baseline is the state-of-the-art Product Quantization (PQ) approach [13]. PQ groups the data dimensions in batches of size s and quantizes each group with k codebook centers. In our experiments, we use s = 8 and k = 256, following [14].

Feature dim.   LSH      PQ      RR+PQ    BPBC
128×10         0.12     2.8     2.92     0.08
128×100        9.35     26.5    35.85    0.54
128×200        29.14    47.3    76.44    0.86
128×500        186.22   122.3   308.52   3.06
128×1000       --       269.5   --       9.53

Table 1. Average time (ms) to encode a single descriptor for LSH, PQ, and BPBC. The VLAD feature dimension is l×k.

Feature dim.   LSH     PQ      RR+PQ   BPBC
128×10         6.25    1.25    7.50    0.06
128×100        625     12.5    637     0.10
128×200        2500    25.0    2525    0.22
128×500        15625   62.5    15687   1.02
128×1000       62500   125     62625   3.88

Table 2. Memory (MB) needed to store the projections (or codebooks), assuming each element is a float (32 bits).

At query time, PQ uses asymmetric distance to compare an uncompressed query point to quantized database points. Namely, the distances between the code centers and the corresponding dimensions of the query are first computed and stored in a lookup table. Then the distances between the query and database points are computed by table lookup and summation.
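A NumPy sketch of this lookup-table computation is below (array shapes are our assumptions; in the experiments s = 8 and k = 256, so each database code is one byte per group):

```python
import numpy as np

def pq_asymmetric_distances(query, codes, codebooks):
    """codebooks: (M, k, s) -- M groups of s dimensions, k centers each.
    codes: (N, M) integer array of database codes.
    Returns squared asymmetric distances to all N database points."""
    M, k, s = codebooks.shape
    q = query.reshape(M, s)
    # lookup[m, j]: squared distance between the m-th query chunk
    # and the j-th center of the m-th codebook.
    lookup = ((codebooks - q[:, None, :]) ** 2).sum(axis=2)    # (M, k)
    # Distance to each database point is a sum of M table lookups.
    return lookup[np.arange(M)[:, None], codes.T].sum(axis=0)  # (N,)
```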

For high-dimensional features with unbalanced variance, Jégou et al. [14] recommend randomly rotating the data prior to PQ.² This baseline will be referred to as RR+PQ. Whenever the descriptor dimensionality in our experiments is too high for us to perform the full random rotation, we perform a bilinear rotation instead (BR+PQ).³

As simpler baselines, we also consider LSH based on random projection [1] and the α = 0 binarization scheme proposed in [24], which simply takes the sign of each dimension. There exist many other methods aimed at lower-dimensional data and shorter codes, e.g., [6, 20, 30, 33], but on our data, they produce poor results for small code sizes and do not scale to larger code sizes (recall Figure 1).

3.4. Computation and Storage Requirements

First, we evaluate the scalability of our method compared to LSH, PQ, and RR+PQ for converting d-dimensional vectors to d-bit binary strings. For this test, we use the VLAD features. All running times are evaluated on a machine with 24GB RAM and a 6-core 2.6GHz CPU. Table 1 reports code generation time for different VLAD sizes and Table 2 reports the memory requirements for storing the projection matrix. It is clear that our bilinear formulation is orders of magnitude more efficient than LSH and both versions of PQ, both in terms of projection time and storage.

²The original PQ paper [13] states that a random rotation is not needed prior to PQ. However, the conclusions of [13] are mainly drawn from low-dimensional data like SIFT and GIST, whose variance already tends to be roughly balanced.

³As another efficient alternative to random rotation prior to PQ, we have also considered a random permutation, but found that on our data it has no effect.

Code size   SD (binary)   ASD (binary)   ASD (PQ)   Eucl. (est.)
128×100     0.33          4.48           4.59       ~120
128×200     0.60          11.29          11.28      ~241

Table 3. Retrieval time per query (seconds) on the ILSVRC2010 dataset with 1.2M images and two different code sizes. This is the time to perform exhaustive computation of distances from a single query to all the 1.2M images in the database. "SD" denotes symmetric (Hamming) and "ASD" asymmetric distance. For Euclidean distance, all the original descriptors do not fit in RAM, so the timing is extrapolated from a smaller subset. The actual timing is likely to be higher due to file I/O.

Table 3 compares the speed of Hamming distance (SD) vs. asymmetric distance (ASD) computation for two code sizes on ILSVRC2010. As one would expect, computing Hamming distance using XOR and popcount is extremely fast. The speed of ASD for PQ vs. our method is comparable, and much slower than SD. However, note that for binary codes with ASD, one can first use SD to find a short list and then do re-ranking with ASD, which will be much faster than exhaustive ASD computation.
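Such a two-stage scheme could be sketched as follows, reusing hamming() and asymmetric() from the Section 2.4 sketch (the shortlist size is an arbitrary illustrative choice, and we assume it is smaller than the database size):

```python
import numpy as np

def search(x, x_packed, db_packed, db_pm1, shortlist=1000, topk=50):
    """Fast SD pass to build a shortlist, then ASD re-ranking."""
    hd = np.array([hamming(x_packed, p) for p in db_packed])
    cand = np.argpartition(hd, shortlist)[:shortlist]  # SD shortlist
    asd = np.array([asymmetric(x, db_pm1[i]) for i in cand])
    return cand[np.argsort(asd)[:topk]]                # ASD re-rank
```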

3.5. Retrieval on Holiday+Flickr1M

Next, we evaluate retrieval performance on the Holiday+Flickr1M dataset using VLAD features with 500×128 = 64,000 dimensions. As explained in Section 3.1, we use the predefined 500 Holiday queries. For 64,000-dimensional features, evaluating RR+PQ is prohibitively expensive, so instead we try to combine the bilinear rotation with PQ (denoted as BR+PQ). For BPBC with dimensionality reduction (Section 2.3), we use bilinear projections R1 ∈ R^{500×400}, R2 ∈ R^{128×80}. This reduces the dimensionality in half.

Figure 3(a) shows the recall of 10NN from the original feature space for different numbers of retrieved images. PQ without rotation fails on this dataset; BR+PQ is slightly better, but is still disappointing. This is due to many Flickr images (e.g., sky and sea images) having few interest points, resulting in VLAD with entries that are mostly zero. Bilinear rotation appears to be insufficient to fully balance the variance in this case, and performing the full random rotation is too expensive.⁴ On the other hand, all versions of BPBC show good performance. For a code size of 32,000 (dimensionality reduction by a factor of two), learned rotation works much better than random, while for the full-dimensional BPBC, learned and random rotations perform similarly. Asymmetric distance (ASD) further improves the recall over symmetric distance (SD).

⁴Jégou et al. [14] report relatively strong performance for RR+PQ on Holiday+Flickr1M, but they use lower-dimensional VLAD (d = 2,048 and d = 8,192) followed by PCA compression to 32-128 dimensions. These parameter settings are motivated by the goal of [14] to produce extremely compact image codes. By contrast, our goal is to produce higher-dimensional codes that do not lose discriminative power. Indeed, by raising the dimensionality of the code, we are able to improve the retrieval accuracy in absolute terms: the mAP for our BPBC setup (Figure 3(b)) is about 0.4 vs. about 0.2 for the PQ setup of [14].

[Figure 3(a): recall of ground-truth 10NN vs. number of retrieved points for BPBC (learned/random, ASD/SD, full and 1/2 dimension), sign (SD), BR+PQ (ASD), and PQ (ASD).]

Method                             Rate   mAP
VLAD (float)                       1      39.0
Sign (SD)                          32     25.6
PQ (with bilinear rotation, ASD)   32     24.0
PQ (w/o rotation, ASD)             32     2.3
BPBC (learned, ASD)                32     40.1
BPBC (learned, SD)                 32     40.1
BPBC (random, SD)                  32     40.3
BPBC (learned, SD, 1/2)            64     38.8
BPBC (random, SD, 1/2)             64     38.6

Figure 3. Results on the Holiday+Flickr1M dataset with 64,000-dimensional VLAD. (a) Recall of ground-truth 10NN from the original feature space. (b) Instance-level retrieval results (mAP). "SD" (resp. "ASD") denotes symmetric (resp. asymmetric) distance. "Sign" refers to binarization by thresholding each dimension at zero. "Random" refers to the method of Section 2.1 and "learned" refers to the methods of Sections 2.2 and 2.3. "1/2" refers to reducing the code dimensionality in half with the method of Section 2.3. "Rate" is the factor by which storage is reduced compared to the original descriptors.

Next, Figure 3(b) reports instance-level image retrieval accuracy measured by mean average precision (mAP), or the area under the recall-precision curve. Both learned and random BPBC produce comparable results to the original descriptor. PQ without rotation works poorly, and BR+PQ is more reasonable, but still worse than our method. Note that for this task, unlike for the retrieval of 10NN, ASD does not give any further improvement over SD. This is good news, since SD computation is much faster (recall Table 3).

3.6. Retrieval on ILSVRC2010 with VLAD

As discussed in Section 3.1,our VLAD descriptors for the ILSVRC2010dataset have dimensionality 25,600.Ran-dom rotation for this descriptor size is still feasible,so we

20

406080100

00.2

0.4

0.60.8

number of retrieved points

R e c a l l

BPBC (learned, ASD)BPBC (learned, SD)BPBC (random, SD)BPBC (learned, SD, 1/2)BPBC (random, SD, 1/2)sign (SD)

RR + PQ (ASD)BR + PQ (ASD)PQ (ASD)

(a)Recall of 10NN.

Method Rate P@10P@50VLAD (?oat)117.737.29Sign (SD)3213.15 3.87PQ (with full rotation,ASD)3218.067.41PQ (with bilinear rotation,ASD)3216.98 6.96PQ (w/o rotation,ASD)3211.32 3.14BPBC (learned,ASD)3218.01

7.49BPBC (learned,SD)

3218.077.42BPBC (random,SD)3218.17

7.60BPBC (learned,SD,1/2)

6417.807.25BPBC (random,SD,1/2)6416.85

6.78

(b)Categorical image retrieval results.

Figure 4.Results on ILSVRC2010with 25,600-dimensional VLAD.(a)Recall of ground-truth 10NN from the original feature space.(b)Semantic precision at 10and 50retrieved images.See caption of Figure 3for notation.

are able to evaluate RR+PQ.We randomly sample 1,000query images and use the rest as the database.For BPBC with dimensionality reduction,we construct bilinear pro-jections R 1∈R 200×160,R 2∈R 128×80,which reduces the dimensionality in half.Figure 4(a)compares the recall of 10NN from the original feature space with increasing num-ber of retrieved points.The basic PQ method on this dataset works much better than on Holiday+Flickr1M (in particular,unlike in Figure 3,it is now better than simple thresholding).This is because the images in ILSVRC2010are textured and contain prominent objects,which leads to VLAD with rea-sonably balanced variance.Furthermore,RR+PQ is feasi-ble for the VLAD dimensionality we use on ILSVRC2010.We can see from Figure 4(a)that the improvement from PQ to RR+PQ is remarkable,while BR+PQ is somewhat weaker than RR+PQ.Overall,RR+PQ is the strongest of all the baseline methods,and full-dimensional BPBC with asymmetric distance is able to match its performance while having a lower memory footprint and faster code generation time (recall Tables 1and 2).The relative performance of the other BPBC variants is the same as in Figure 3(a).

Next, we evaluate the semantic retrieval accuracy on this dataset. Figure 4(b) shows the average precision for top k images retrieved. We can observe that RR+PQ and most versions of BPBC have comparable precision to the original uncompressed features. As in Section 3.5, using ASD as opposed to SD does not give any gains in semantic precision for our method. Thus, our method has an important advantage over PQ at retrieval time, since unlike PQ, it can be used with the faster Hamming distance.

3.7. Retrieval on ILSVRC2010 with LLC

To demonstrate that the BPBC method is applicable to other high-dimensional descriptors besides VLAD, we also report retrieval results on the ILSVRC2010 dataset with LLC features. As discussed in Section 3.1, these features have the highest dimensionality yet in all our experiments: 5,000×21 = 105,000. To reduce dimensionality by a factor of two, we use bilinear projections R1 ∈ R^{5000×2500}, R2 ∈ R^{21×21} (note that the dimensionality of the second side, representing the number of spatial bins, is already low at 21, and we have found that trying to reduce it further can lead to unstable behavior for our learning algorithm).

Figure 5(a) reports the recall for 10NN. Most of the trends are similar to those of Section 3.6. Full-dimensional BPBC with ASD once again has the best performance, together with BR+PQ (evaluating RR+PQ is once again infeasible). Learned rotation works significantly better than the random one. Next, Figure 5(b) reports the semantic precision analogously to Figure 4(b). As in Figure 4(b), PQ with rotation and different versions of BPBC work similarly. By comparing Figures 4(b) and 5(b), we can see that LLC features have higher absolute precision than VLAD, which confirms that extremely high-dimensional features and codes, far from overfitting, are necessary to obtain better performance on very large-scale many-category datasets.

[Figure 5(a): recall of ground-truth 10NN vs. number of retrieved images for BPBC variants, sign (SD), BR+PQ (ASD), and PQ (ASD).]

Method                             Rate   P@10    P@50
LLC (float)                        1      21.50   10.06
Sign (SD)                          32     10.99   2.58
PQ (with bilinear rotation, ASD)   32     21.41   10.11
PQ (w/o rotation, ASD)             32     12.67   4.20
BPBC (learned, ASD)                32     21.78   10.11
BPBC (learned, SD)                 32     21.54   10.11
BPBC (random, SD)                  32     21.73   10.35
BPBC (learned, SD, 1/2)            64     21.22   9.96
BPBC (random, SD, 1/2)             64     21.27   9.88

Figure 5. Retrieval results on the ILSVRC2010 dataset with 105,000-dimensional LLC features. (a) Recall for 10NN from the original feature space. (b) Categorical retrieval precision at 10 and 50 retrieved images. See caption of Figure 3 for notation.

3.8. Image Classification

Finally, we demonstrate the effectiveness of our proposed codes for SVM classification on ILSVRC2010. We adopt the setting of Sánchez and Perronnin [28], where the classifier is trained on compressed or encoded descriptors but tested on the original ones. For PQ-compressed data, we perform decoding in order to train the SVM (decoding consists of looking up and concatenating the codewords corresponding to each subset of dimensions) and test on original descriptors (rotated if necessary). Note that unlike [28], we use batch training, and decoding all the training descriptors ahead of time does not actually save us on storage. Instead, our goal is simply to establish a baseline classification accuracy. For BPBC-compressed data, we train directly on the binarized vectors sgn(R̂^T vec(X)) but test on un-binarized vectors R̂^T vec(X).

Our classifier is LIBLINEAR SVM [4] and the feature is 25,600-dimensional VLAD. To limit the running time for SVM training and parameter tuning, and to make sure we can hold all the training data in memory, we randomly sample 100 classes from ILSVRC2010. We use five random splits into 50% for training, 25% for validation, and 25% for testing. To generate negative data, we sample 200 points per class, which does not sacrifice accuracy too much. For each method, we validate the SVM hyperparameter C on a grid of [2×10⁻⁵, 2×10²] with order-of-magnitude increments.

Classification results are reported in Table 4. The full-dimensional BPBC incurs very little loss of accuracy over the original features, and the learned and random rotations work comparably. When dimensionality is reduced in half, the classification accuracy drops, with the random method degrading more than the learned one. RR+PQ outperforms the best version of BPBC by about 0.3%, but it is not clear whether the difference is statistically significant. Interestingly, PQ without RR still produces reasonable classification results, while its performance on retrieval tasks is severely degraded. Evidently, the SVM training can compensate for the quantization error, whereas nearest-neighbor search without a supervised learning stage cannot. Similar reasoning applies to binarization by taking the sign, whose performance is also fairly good for this task.
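To make the train-on-codes, test-on-originals protocol concrete, here is a hedged sketch using scikit-learn's LinearSVC (which wraps LIBLINEAR) as a stand-in; the hyperparameters and helper names are illustrative, not the paper's exact setup:

```python
import numpy as np
from sklearn.svm import LinearSVC  # wraps LIBLINEAR

def classify_bpbc(X_tr, y_tr, X_te, y_te, R1, R2, C=1.0):
    """Train on binarized codes sgn(R1^T X R2), test on the
    un-binarized rotated descriptors, as described above."""
    rot = lambda Xs: np.einsum('ba,nbc,cd->nad', R1, Xs, R2)
    Z_tr = np.sign(rot(X_tr)).reshape(len(X_tr), -1)  # binarized
    Z_te = rot(X_te).reshape(len(X_te), -1)           # un-binarized
    clf = LinearSVC(C=C).fit(Z_tr, y_tr)
    return clf.score(Z_te, y_te)
```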

Method                Rate   Classification accuracy
VLAD (float)          1      44.87 ± 0.30
Sign                  32     41.10 ± 0.34
PQ                    32     44.05 ± 0.33
RR+PQ                 32     44.64 ± 0.13
BPBC (learned)        32     44.34 ± 0.21
BPBC (random)         32     44.27 ± 0.19
BPBC (learned, 1/2)   64     43.06 ± 0.20
BPBC (random, 1/2)    64     41.28 ± 0.20

Table 4. Image classification results on 100 classes randomly sampled from the ILSVRC2010 dataset. The set of classes is fixed but results are averaged over five different training/validation/test splits. The visual feature is 25,600-dimensional VLAD. "Rate" is the factor by which storage is reduced.

An interesting question is as follows: instead of starting with d-dimensional data and reducing the dimensionality to d/2 through binary coding, can we obtain the same accuracy if we start with d/2-dimensional features and maintain this dimensionality in the coding step? To answer this question, we have performed classification on 12,800-dimensional VLAD for the same 100 classes and obtained an average accuracy of 43.08%. This is comparable to the accuracy of our 12,800-dimensional binary descriptor learned from the 25,600-dimensional VLAD. On the other hand, classifying 12,800-dimensional codes computed from 12,800-dimensional VLAD gives an accuracy of 41.63%. Thus, starting with the highest possible dimensionality of the original features appears to be important for learning binary codes with the most discriminative power.

4. Discussion and Future Work

This paper has presented a novel bilinear rotation formulation for learning binary codes for high-dimensional feature vectors that exploits the natural two-dimensional structure of many existing descriptors. Our approach matches the accuracy of Product Quantization for retrieval and classification tasks while being much more efficient in terms of memory and computation. As recent progress on recognition shows that using descriptors with millions of dimensions can lead to even better performance [17, 28], it will be interesting to apply our approach to such data.

Acknowledgments. We thank Ruiqi Guo for helpful discussions. Gong and Lazebnik were supported by NSF grant IIS-1228082, DARPA Computer Science Study Group (D12AP00305), Microsoft Research Faculty Fellowship, and Xerox.

References

[1] M. S. Charikar. Similarity estimation techniques from rounding algorithms. STOC, 2002.
[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. CVPR, 2009.
[3] W. Dong, M. Charikar, and K. Li. Asymmetric distance estimation with sketches for similarity search in high-dimensional spaces. SIGIR, 2008.
[4] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 2008.
[5] Y. Gong, S. Kumar, V. Verma, and S. Lazebnik. Angular quantization based binary codes for fast similarity search. NIPS, 2012.
[6] Y. Gong and S. Lazebnik. Iterative quantization: A Procrustean approach to learning binary codes. CVPR, 2011.
[7] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. PAMI, 2012.
[8] A. Gordo and F. Perronnin. Asymmetric distances for binary embeddings. CVPR, 2011.
[9] J. He, R. Radhakrishnan, S.-F. Chang, and C. Bauer. Compact hashing with joint optimization of search accuracy and time. CVPR, 2011.
[10] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon. Spherical hashing. CVPR, 2012.
[11] P. Jain, B. Kulis, and K. Grauman. Fast image search for learned metrics. CVPR, 2008.
[12] H. Jégou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large-scale image search. ECCV, 2008.
[13] H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE TPAMI, 2011.
[14] H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. CVPR, 2010.
[15] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. ICCV, 2009.
[16] A. J. Laub. Matrix Analysis for Scientists and Engineers. SIAM.
[17] Y. Lin, L. Cao, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, and T. Huang. Large-scale image classification: Fast feature extraction and SVM training. CVPR, 2011.
[18] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. CVPR, 2012.
[19] W. Liu, J. Wang, Y. Mu, S. Kumar, and S.-F. Chang. Compact hyperplane hashing with bilinear functions. ICML, 2012.
[20] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. ICML, 2011.
[21] M. Norouzi, R. Salakhutdinov, and D. Fleet. Hamming distance metric learning. NIPS, 2012.
[22] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 2001.
[23] F. Perronnin and C. R. Dance. Fisher kernels on visual vocabularies for image categorization. CVPR, 2007.
[24] F. Perronnin, Y. Liu, J. Sánchez, and H. Poirier. Large-scale image retrieval with compressed Fisher vectors. CVPR, 2010.
[25] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. ECCV, 2010.
[26] H. Pirsiavash, D. Ramanan, and C. Fowlkes. Bilinear classifiers for visual recognition. NIPS, 2009.
[27] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. NIPS, 2009.
[28] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. CVPR, 2011.
[29] P. Schönemann. On two-sided orthogonal Procrustes problems. Psychometrika, 1968.
[30] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. CVPR, 2008.
[31] J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for scalable image retrieval. CVPR, 2010.
[32] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. CVPR, 2010.
[33] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. NIPS, 2008.
[34] J. Ye, R. Janardan, and Q. Li. Two-dimensional linear discriminant analysis. NIPS, 2004.
