Selected Outstanding Entries from the 2011 Mathematical Contest in Modeling (MCM/ICM)
Problems of the 2011 "Higher Education Press Cup" China Undergraduate Mathematical Contest in Modeling (please read the "CUMCM Paper Format Specification" first)

Problem A: Heavy-Metal Pollution of Urban Surface Soil

With the rapid development of urban economies and the continuous growth of urban populations, the impact of human activity on urban environmental quality has become increasingly prominent. Verifying anomalies in the urban soil geological environment, using the large volumes of data obtained from such surveys to evaluate urban environmental quality, and studying how the urban geological environment evolves under the influence of human activity have increasingly become focal points of attention. By function, an urban area can generally be divided into residential, industrial, mountain, main-road, and park/green-space districts, denoted Class 1 through Class 5; different districts are affected by human activity to different degrees.

A survey of the soil geological environment of a certain city's urban area is now being carried out. To this end, the urban area under study was divided into grid subregions at intervals of roughly 1 km; the surface soil (0-10 cm deep) was sampled and numbered at a rate of one sampling point per square kilometer, and the position of each sampling point was recorded by GPS. Instrumental analysis yielded concentration data for many chemical elements in each sample. Separately, samples were taken at 2 km intervals in natural areas far from human settlements and industrial activity, to serve as background values for the elements in the city's surface soil. Annex 1 lists the positions, elevations, and functional districts of the sampling points; Annex 2 lists the concentrations of 8 major heavy-metal elements at the sampling points; Annex 3 lists the background values of these 8 elements.

You are asked to complete the following tasks through mathematical modeling:
(1) Give the spatial distribution of the 8 major heavy-metal elements over the urban area, and analyze the degree of heavy-metal pollution in the different districts.
(2) Through data analysis, explain the main causes of the heavy-metal pollution.
(3) Analyze the propagation characteristics of the heavy-metal pollutants, build a model on this basis, and determine the locations of the pollution sources.
(4) Analyze the strengths and weaknesses of your model. To better study how the urban geological environment evolves, what additional information should be collected? With that information, how would you build a model to solve the problem?

Problem B: Location and Dispatch of Traffic and Patrol Police Service Platforms

"If you are in trouble, ask the police" is a household saying. The police shoulder four major functions: criminal law enforcement, public-security administration, traffic management, and public service. To carry out these functions more effectively, traffic and patrol police service platforms need to be set up at major traffic arteries and important locations in the urban area. Each service platform has essentially the same duties and police strength.
Chongqing University 2011-2012 "Striving for Excellence" Candidate Student Exemplars

Outstanding-student candidate Li Mengmeng, Law major, School of Law (counselor: Liu Liqiang): "Running water never goes stale, and a door hinge never gets worm-eaten." A member of the Communist Party of China, she has passed CET-4 (642 points), CET-6 (610 points), and the Computer Rank Examination Level 2; her grade-point average over the first three years is 3.79, ranking 5th in her major and 1st in the comprehensive ranking. She has always held to the attitude of "running water never goes stale, and a door hinge never gets worm-eaten," demanding high standards of herself and working steadily toward her dream. With excellent grades, she has repeatedly won the National Scholarship, the Chongqing University Comprehensive Scholarship, and the Xiangsheng Scholarship; she has actively taken part in academic competitions, winning the Special Prize of the National English Contest for College Students and a national first prize in the Jessup International Law Moot Court Competition, among other awards. In research, a project she took part in was rated an outstanding project of the 2nd SRTP; she successfully applied for the 5th National Undergraduate Innovative Experiment Program and serves as its project leader; and one of her papers has been accepted by the journal 《法制与经济》 (Legal System and Economy). She has served as a member of the Law School debate team, president of the English Debate Association, and class study officer, and has repeatedly taken part in volunteer activities such as caring for left-behind children and popularizing legal knowledge, contributing her share to society.

Awards, national level (7): May 2012, Special Prize, 2012 National English Contest for College Students; February 2012, first prize, 10th China Jessup International Law Moot Court national qualifying round, representing Chongqing University; May 2011, first prize, 2011 National English Contest for College Students; February 2011, second prize, 9th China Jessup national qualifying round, representing Chongqing University; 2010-2011, National Scholarship twice in a row; May 2010, second prize, National English Contest for College Students. University level (17): since 2009, the Chongqing University Comprehensive Scholarship six times in a row; 2010-2012, named an "Outstanding Student" three times in a row; December 2011, second prize as an outstanding project of the 2nd Chongqing University Student Research Training Program for "Causes, Prevention and Control of Violent Crime on University Campuses."

Plain yet deeply engaged, persistent and passionate: He Junyi, Mechanical Design, Manufacturing and Automation, School of Mechanical Engineering (counselor: Zhao Lin). A member of the Communist Party of China, he was recommended, with a comprehensive ranking of 2nd out of 404, for a combined master's-doctoral program at Shanghai Jiao Tong University. He has accumulated 5 national-level awards, 2 Chongqing municipal awards, 7 university-level awards, and 2 national utility-model patents.
1985 MCM Problems

1985 MCM A: Animal Populations

Choose a fish or mammal for which appropriate data are available to model it accurately. Model the animal's natural interactions with its environment by expressing population levels of different groups in terms of the significant parameters of the environment. Then adjust the model to account for harvesting in a form consistent with the actual method by which the animal is harvested. Include any outside constraints imposed by food or space limitations that are supported by the data. Consider the value of the various quantities involved, the number harvested, and the population size itself, in order to devise a numerical quantity that represents the overall value of the harvest. Find a harvesting policy in terms of population size and time that optimizes the value of the harvest over a long period of time. Check that the policy is optimal over a realistic range of environmental conditions.
1985 MCM B: Strategic Reserve Management

Cobalt, which is not produced in the US, is essential to many industries (defense accounted for 17% of cobalt production in 1979). Most cobalt comes from central Africa, a politically unstable region. The Strategic and Critical Materials Stockpiling Act of 1946 requires a cobalt reserve that would carry the US through a three-year war. The government built up a stockpile in the 1950s, sold most of it off in the early 1970s, and then decided to build it up again in the late 1970s, with a stockpile goal of 85.4 million pounds. About half of this stockpile had been acquired by 1982.

Build a mathematical model for managing a stockpile of the strategic metal cobalt. You will need to consider such questions as: How big should the stockpile be? At what rate should it be acquired? What is a reasonable price to pay for the metal? You will also want to consider such questions as: At what point should the stockpile be drawn down? At what rate should it be drawn down? At what price is it reasonable to sell the metal? How should it be allocated?

Useful information on cobalt: the government has projected a need of 25 million pounds of cobalt in 1985. The US has about 100 million pounds of proven cobalt deposits. Production becomes economically feasible when the price reaches $22/lb (as occurred in 1981). It takes four years to get a mining operation running, after which about 6 million pounds per year can be produced. In 1980, 1.2 million pounds of cobalt were recycled, 7% of total consumption.
1986 MCM Problems

1986 MCM A: Hydrographic Data

The table below gives the water depth Z (in feet) at surface points with rectangular coordinates X, Y (in yards); the table of 14 data points is omitted here. The depth measurements were taken at low tide. Your ship has a draft of five feet. Which regions should you avoid within the rectangle (75, 200) x (-50, 150)?

1986 MCM B: Emergency-Facilities Location

The township of Rio Rancho has hitherto not had its own emergency facilities.
Team Control Number: 7018 - Problem Chosen: C

Summary

This paper studies the potential impact of marine plastic debris on the marine ecosystem and on human beings, and how to deal with the substantial problems caused by the aggregation of marine waste.

In Task 1, we define the potential long-term and short-term impacts of marine plastic garbage. We regard the toxin-concentration effect caused by marine garbage as the long-term impact, and track and monitor it. We establish a composite indicator model, based on the density of plastic toxins and the content of toxin absorbed by plastic fragments in the ocean, to express the impact of marine garbage on the ecosystem, and we take the sea around Japan as an example to test the model.

In Task 2, we design an algorithm that uses the yearly marine-plastic density values at the discrete measuring points given in the references to plot the plastic density of the whole area at various locations. Based on the changes in marine plastic density across different years, we determine that the center of the plastic vortex lies roughly within 140°W-150°W (east-west) and 30°N-40°N (south-north). With this algorithm, a sea area can be monitored reasonably well through regular observation of only part of the specified measuring points.

In Task 3, we classify the plastic into three types (surface-layer plastic, deep-layer plastic, and the interlayer between the two) and analyze the degradation mechanism of the plastic in each layer. From this we obtain the reason why the plastic fragments converge to a similar size.

In Task 4, we classify the sources of marine plastic into three types (land-based sources accounting for 80%, fishing gear for 10%, and boating for 10%) and set up an optimization model under the dual objectives of emissions reduction and management, arriving at a more reasonable optimization strategy.

In Task 5, we first analyze the mechanism by which the Pacific Ocean trash vortex forms, and conclude that marine garbage gyres will also emerge in the South Pacific, the South Atlantic, and the Indian Ocean. Based on concentration-diffusion theory, we establish a differential prediction model of future marine garbage density, predict the density of garbage in the South Atlantic, and obtain the stable density at eight measuring points.

In Task 6, using data on the annual national consumption of polypropylene plastic packaging together with data fitting, we predict the environmental benefit generated by prohibiting polypropylene take-away food packaging over the next decade.
By means of this model and our prediction, each nation would reduce its release of plastic garbage by 1.31 million tons over the next decade. Finally, we submit a report to the expedition leader, summarizing our work and making some feasible suggestions to policy-makers.

Task 1:

Definitions:
- Potential short-term effects of the plastic: hazardous effects that appear in the short term.
- Potential long-term effects of the plastic: potential effects whose hazards are great but which appear only after a long time.

In our definition, the short-term and long-term effects of the plastic on the ocean environment are as follows.

Short-term effects:
1) The plastic is eaten by marine animals or birds.
2) Animals are entangled in plastics, such as fishing nets, which hurt or even kill them.
3) Plastic obstructs the passage of vessels.

Long-term effects:
1) Enrichment of toxins through the food chain: waste plastic in the ocean does not degrade naturally in the short term; it is first broken into tiny fragments by light, waves, and micro-organisms, while its molecular structure remains unchanged. These "plastic sands," which look very similar to the food of marine life and are easily eaten by plankton, fish, and others, cause the enrichment and transfer of toxins.
2) Acceleration of the greenhouse effect: after long-term accumulation and pollution by plastics, the water becomes turbid, which seriously affects the photosynthesis of marine plants (such as phytoplankton and algae). The death of large numbers of plankton would also lower the ocean's ability to absorb carbon dioxide, intensifying the greenhouse effect to some extent.

Monitoring the impact of plastic rubbish on the marine ecosystem:

According to the relevant literature, plastic resin pellets accumulate toxic chemicals, such as PCBs, DDE, and nonylphenols, and may serve as a transport medium and source of toxins for marine organisms that ingest them [2]. Because plastic garbage in the ocean is difficult to degrade completely in the short term, plastic resin pellets in the water will increase over time and absorb more toxins, causing enrichment of toxins and a serious impact on the marine ecosystem. We therefore track the concentrations of PCBs, DDE, and nonylphenols contained in plastic resin pellets in seawater as an indicator for comparing the extent of pollution in different sea regions, thereby reflecting the impact of plastic rubbish on the ecosystem.

Establishing the pollution-index evaluation model: for purposes of comparison, we unify the concentration indexes of PCBs, DDE, and nonylphenols into one comprehensive index.

Preparations:
1) Data standardization.
2) Determination of the index weights.

Because Japan has studied the contents of PCBs, DDE, and nonylphenols in plastic resin pellets, we use the survey conducted in Japanese waters by the University of Tokyo between 1997 and 1998 to standardize the concentration indexes of PCBs, DDE, and nonylphenols.
We take Kasai Seaside Park, Keihin Canal, Kugenuma Beach, and Shioda Beach in the survey as the first, second, third, and fourth regions, and PCBs, DDE, and nonylphenols as the first, second, and third indicators. The standardized model is

$$V_{ij}^{*} = \frac{V_{ij} - V_j^{\min}}{V_j^{\max} - V_j^{\min}}, \qquad i = 1,2,3,4;\ j = 1,2,3,$$

where $V_j^{\max}$ is the maximum and $V_j^{\min}$ the minimum of the measurements of indicator $j$ over the four regions, and $V_{ij}^{*}$ is the standardized value of indicator $j$ in region $i$.

According to the literature [2], the Japanese observational data are shown in Table 1 (PCBs, DDE, and nonylphenols contents in marine polypropylene resin pellets). Applying the standardized model to Table 1 yields Table 2. In Table 2, the three indicators of the Shioda Beach area are all 0 because the contents of PCBs, DDE, and nonylphenols in polypropylene plastic resin pellets are lowest there; 0 only relatively represents the smallest value. Similarly, 1 indicates that the value of an indicator in some area is the largest.

Determining the index weights of PCBs, DDE, and nonylphenols:

We use the Analytic Hierarchy Process (AHP) to determine the weights of the three indicators in the overall pollution indicator. AHP is an effective method that transforms semi-qualitative, semi-quantitative problems into quantitative calculations. It combines analysis and synthesis in decision-making and is well suited to multi-index comprehensive evaluation. The hierarchy is shown in Figure 1 (hierarchy of index factors).

To analyze the role of each concentration indicator, we establish a pairwise comparison matrix

$$P = \begin{bmatrix} 1 & P_{12} & P_{13} \\ P_{21} & 1 & P_{23} \\ P_{31} & P_{32} & 1 \end{bmatrix},$$

where $P_{mn}$ represents the relative importance of concentration indicators $B_m$ and $B_n$. We use 1, 2, ..., 9 and their reciprocals to represent different degrees of importance (the greater the number, the more important), and the relative importance of $B_n$ to $B_m$ is $1/P_{mn}$ ($m, n = 1, 2, 3$).

Let the maximum eigenvalue of $P$ be $\lambda_{\max}$. The consistency index is

$$CI = \frac{\lambda_{\max} - n}{n - 1},$$

and, with the average random consistency index $RI$, the consistency ratio is $CR = CI/RI$. For a matrix with $n \ge 3$, if $CR < 0.1$ the consistency is considered acceptable, and the principal eigenvector can be used as the weight vector.

According to the harm levels of PCBs, DDE, and nonylphenols and the EPA requirements on the maximum concentrations of the three toxins in seawater, we obtain the comparison matrix

$$P = \begin{bmatrix} 1 & 3 & 4 \\ 1/3 & 1 & 6 \\ 1/4 & 1/6 & 1 \end{bmatrix}.$$

MATLAB calculation gives the maximum eigenvalue $\lambda_{\max} = 3.0012$ with corresponding eigenvector $W = (0.9243, 0.2975, 0.2393)$, and

$$CR = \frac{CI}{RI} = \frac{0.047}{1.12} = 0.042 < 0.1,$$

so the degree of inconsistency of $P$ is within the permissible range. Taking the eigenvector of $P$ as the weight vector and normalizing, we obtain the final weight vector $W' = (0.6326, 0.2036, 0.1638)$.
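As a numerical cross-check of this AHP step, the following minimal sketch (Python with NumPy rather than the paper's MATLAB; the comparison matrix is the best-effort reconstruction shown above, not verified data) computes the principal eigenvalue, the consistency statistics, and the normalized weight vector:

```python
import numpy as np

# Pairwise comparison matrix for (PCBs, DDE, nonylphenols), as reconstructed
# above; the entries are Saaty-style judgments, not measured data.
P = np.array([[1.0, 3.0, 4.0],
              [1/3, 1.0, 6.0],
              [1/4, 1/6, 1.0]])

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)          # index of the principal eigenvalue
lam_max = eigvals[k].real
w = np.abs(eigvecs[:, k].real)       # principal eigenvector

n = P.shape[0]
CI = (lam_max - n) / (n - 1)         # consistency index (lambda_max - n)/(n - 1)
RI = 0.58                            # Saaty's random index for n = 3
                                     # (the paper instead divides by RI = 1.12)
CR = CI / RI                         # consistency ratio; acceptable if CR < 0.1

W_prime = w / w.sum()                # normalized weight vector W'
print(f"lambda_max = {lam_max:.4f}, CI = {CI:.4f}, CR = {CR:.4f}")
print("W' =", np.round(W_prime, 4))
```

With these weights in hand, each region's composite index is simply the weighted sum of its standardized indicator values, which is the model formalized next.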
Defining the overall pollution target of region $i$ as $Q_i$, with standardized indicator values $V_i = (V_{i1}, V_{i2}, V_{i3})$ and weight vector $W'$, the model for the overall marine-pollution assessment target is

$$Q_i = W' V_i^{T}, \qquad i = 1, 2, 3, 4.$$

With this model we obtain the values of the total pollution index for the four regions of the Japanese ocean in Table 3. In Table 3, the region whose total pollution index is highest has the highest concentration of toxins in its polypropylene plastic resin pellets, whereas the total pollution index of Shioda Beach is the lowest (we emphasize again that 0 is only a relative value and does not mean freedom from plastic pollution).

Through the assessment method above, we can monitor the concentrations of PCBs, DDE, and nonylphenols in plastic debris so as to reflect the influence on the ocean ecosystem: the higher the concentration of toxins, the bigger the influence on marine organisms, and the more dramatic the enrichment along the food chain. Above all, the variation of toxin concentrations simultaneously reflects the spatial distribution and time variation of marine litter. By regularly monitoring the content of these substances, we can predict the future development of marine litter, providing data for sea expeditions surveying marine litter and a reference for government departments making ocean-governance policies.

Task 2:

In the North Pacific, the clockwise circulation forms a never-ending maelstrom that rotates the plastic garbage. Over the years, the subtropical gyre of the North Pacific has gathered garbage from the coasts and from fleets, entrapped it in the whirlpool, and carried it toward the center under the action of centripetal force, forming an area of 3.43 million square kilometers (more than one-third the size of Europe). As time goes by, the garbage in the whirlpool tends to increase year by year in breadth, density, and distribution. In order to describe this variability over time and space clearly, we analyze the data in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre 1999-2008", exclude points with great dispersion, and retain those with concentrated distribution. The longitude of each sampled garbage location serves as the x-coordinate of a three-dimensional coordinate system, the latitude as the y-coordinate, and the plastic count per cubic meter of water at that position as the z-coordinate. We then establish an irregular grid in the xy-plane from the obtained data and draw grid lines through all the data points. Using an inverse-distance-squared method with a trend factor (which can not only estimate the plastic count per cubic meter of water at any position but also capture the trend of the plastic count between two original data points), we can approximate the values at the unknown grid points.
When the data at all the irregular grid points are known (or approximately known, or obtained from the original data), we can draw a three-dimensional image with MATLAB that fully reflects the variability of the garbage density over time and space.

Preparations:

First, we determine the coordinates of each year's sampled garbage. The distribution range of the garbage is roughly 120°W-170°W (east-west) and 18°N-41°N (south-north), as shown in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre 1999-2008"; we divide a square of the picture into 100 grid cells, as in Figure 1. According to the grid cell in which each measuring point's center lies, we identify the latitude and longitude of each point, which serve respectively as the x- and y-coordinates of the three-dimensional coordinates.

Second, we determine the plastic count per cubic meter of water. Because the plastic counts provided by the reference are given as five density intervals, to identify exact values of the garbage density at each year's measuring points we assume that within each interval the density is a random variable obeying the uniform distribution

$$f(x) = \begin{cases} \dfrac{1}{b-a}, & x \in (a, b), \\[4pt] 0, & \text{otherwise}. \end{cases}$$

We use the uniform random-number generator in MATLAB to draw continuous uniformly distributed random numbers in each interval, which serve approximately as the exact values of the garbage density, i.e., the z-coordinates of that year's measuring points.

Assumptions:
(1) The data we use are accurate and reasonable.
(2) The plastic count per cubic meter of water changes continuously over the ocean area.
(3) The density of plastic in the gyre varies by region. The density of plastic in the gyre and in its surrounding area are interdependent, but this dependence decreases with increasing distance. For our problem, each data point influences every unknown point around it, and each unknown point is influenced by every given data point: the nearer a given data point is to the unknown point, the larger its role.

Establishing the model:

Following the method described above, we record the garbage-density distributions from "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre 1999-2008" as coordinates (x, y, z), as in Table 1. Through analysis and comparison, we exclude a number of data points with very large dispersion and retain those with a more concentrated distribution (Table 2), which helps us obtain a more accurate density-distribution map. We then use the x- and y-coordinates of the n known data points, arranged from small to large in the X and Y directions, to form a non-equidistant partition of the plane with n nodes.
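Before turning to the interpolation, here is a minimal sketch of the interval-to-value step just described (Python with NumPy rather than the paper's MATLAB; the five interval bounds are illustrative placeholders, not the reference's actual density classes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical density classes (pieces per cubic meter): each measuring point
# in the source data is reported only as one of these intervals.
intervals = [(0.0, 0.1), (0.1, 0.5), (0.5, 1.0), (1.0, 2.0), (2.0, 5.0)]

def sample_density(class_index: int) -> float:
    """Draw an exact density value uniformly from the reported interval."""
    a, b = intervals[class_index]
    return rng.uniform(a, b)

# Example: a point reported in class 3 gets a concrete z-coordinate.
z = sample_density(3)
print(f"sampled density: {z:.3f} per cubic meter")
```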
For the partition obtained above, we know the plastic density only at the n known nodes, so we must estimate the plastic-garbage density at the remaining nodes. Since we only have a sampling survey of garbage density in the North Pacific gyre, it is reasonable to suppose that each known data point affects every unknown node to some extent, and that nearby known points have a higher impact on the density estimate than distant ones. We therefore use a weighted-average format, with weights inversely proportional to the squared distance, to express the stronger effect of close known points.

Suppose there are two known points $Q_1$ and $Q_2$ on a line, that is, we already know the plastic-litter densities $Z_{Q_1}$ and $Z_{Q_2}$, and we want to estimate the density at a point $G$ on the segment connecting $Q_1$ and $Q_2$. The weighted-average estimate is

$$Z_G = \frac{Z_{Q_1}\cdot\dfrac{1}{GQ_1^2} + Z_{Q_2}\cdot\dfrac{1}{GQ_2^2}}{\dfrac{1}{GQ_1^2} + \dfrac{1}{GQ_2^2}},$$

where $GQ$ denotes the distance between the points $G$ and $Q$.

A weighted average of nearby known points alone cannot reflect the trend between the known points, so we assume that the change in plastic density between any two given points also affects the density at an unknown point, and that it does so through a linear trend. We therefore introduce trend terms into the weighted-average formula; and because close points have greater impact, the trend between close points is also stronger. For the one-dimensional case, the formula for $Z_G$ in the previous example is modified to

$$Z_G = \frac{Z_{Q_1}\cdot\dfrac{1}{GQ_1^2} + Z_{Q_2}\cdot\dfrac{1}{GQ_2^2} + Z_{Q_1Q_2}\cdot\dfrac{1}{GQ_1^2 + GQ_2^2 + \overline{Q_1Q_2}^2}}{\dfrac{1}{GQ_1^2} + \dfrac{1}{GQ_2^2} + \dfrac{1}{GQ_1^2 + GQ_2^2 + \overline{Q_1Q_2}^2}},$$

where $\overline{Q_1Q_2}$ is the separation distance between the known points and $Z_{Q_1Q_2}$ is the density at $G$ given by the linear trend between $Q_1$ and $Q_2$.

For a two-dimensional area, the point $G$ is not on the line $Q_1Q_2$, so we drop a perpendicular from $G$ to the line connecting $Q_1$ and $Q_2$, meeting it at a point $P$. The influence of $P$ relative to $Q_1$ and $Q_2$ is as in the one-dimensional case, and the farther $G$ is from $P$, the smaller the impact, so the weighting factor should also reflect $GP$ in an inversely proportional way. We therefore adopt

$$Z_G = \frac{Z_{Q_1}\cdot\dfrac{1}{GQ_1^2} + Z_{Q_2}\cdot\dfrac{1}{GQ_2^2} + Z^{P}_{Q_1Q_2}\cdot\dfrac{1}{GP^2 + GQ_1^2 + GQ_2^2 + \overline{Q_1Q_2}^2}}{\dfrac{1}{GQ_1^2} + \dfrac{1}{GQ_2^2} + \dfrac{1}{GP^2 + GQ_1^2 + GQ_2^2 + \overline{Q_1Q_2}^2}}.$$

Taken together, we postulate the following:
(1) each known data point influences the density of plastic garbage at each unknown point in inverse proportion to the square of the distance;
(2) the change in density of plastic garbage between any two known data points affects every unknown point, and its influence diffuses along the straight line through the two known points;
(3) the influence of the density change between two known data points on a specific unknown point depends on three distances: (a) the perpendicular distance from the unknown point to the straight line through the two known points; (b) the distance from the nearer known point to the unknown point; and (c) the separation distance between the two known data points.
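As a concrete illustration, the sketch below (Python with NumPy; the coordinates and densities are synthetic placeholders, not the paper's data) implements the base inverse-distance-squared scheme above, without the pairwise trend terms:

```python
import numpy as np

def idw(known_xy: np.ndarray, known_z: np.ndarray,
        grid_xy: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Inverse-distance-squared interpolation (no trend terms).

    known_xy: (n, 2) lon/lat of measuring points
    known_z:  (n,)   plastic counts per cubic meter
    grid_xy:  (m, 2) points where density is to be estimated
    """
    # Squared distances from every grid point to every known point: (m, n)
    d2 = ((grid_xy[:, None, :] - known_xy[None, :, :]) ** 2).sum(axis=2)
    w = 1.0 / (d2 + eps)                 # inverse-distance-squared weights
    return (w * known_z).sum(axis=1) / w.sum(axis=1)

# Synthetic example: 4 known measuring points and a small evaluation grid.
pts = np.array([[140.0, 30.0], [145.0, 35.0], [150.0, 32.0], [148.0, 38.0]])
z = np.array([0.5, 2.0, 1.2, 0.8])
gx, gy = np.meshgrid(np.linspace(140, 150, 5), np.linspace(30, 38, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])

z_est = idw(pts, z, grid).reshape(gx.shape)
print(np.round(z_est, 2))
```

The full scheme additionally mixes in the pairwise trend contributions derived above; the sketch keeps only the base weighting for brevity.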
If we mark $Q_1, Q_2, \ldots, Q_N$ as the locations of the known data points and $G$ as an unknown node, let $P_{ijG}$ denote the foot of the perpendicular from $G$ to the line connecting $Q_i$ and $Q_j$, let $Z(Q_i, Q_j, G)$ be the linear-trend density of $Q_i, Q_j$ at the point $G$, and prescribe that $Z(Q_i, Q_i, G)$ is the measured density of plastic garbage at $Q_i$. The general calculation formula is then

$$Z_G = \frac{\displaystyle\sum_{i=1}^{N}\sum_{j=i}^{N} Z(Q_i, Q_j, G)\cdot\frac{1}{GP_{ijG}^2 + GQ_i^2 + GQ_j^2 + \overline{Q_iQ_j}^2}}{\displaystyle\sum_{i=1}^{N}\sum_{j=i}^{N} \frac{1}{GP_{ijG}^2 + GQ_i^2 + GQ_j^2 + \overline{Q_iQ_j}^2}}.$$

Plugging each year's observational data from Schedule 1 into our model, we draw the three-dimensional images of the spatial distribution of the marine garbage density with MATLAB (Figure 2, with panels for 1999, 2000, 2002, 2005, 2006, and 2007-2008).

It is observed from the images that, from 1999 to 2008, the density of plastic garbage increased year by year, and significantly so in the region 140°W-150°W, 30°N-40°N. We can therefore be confident that this region is probably the center of the marine-litter whirlpool. The gathering process should be as follows: dispersed garbage floating on the ocean moves with the currents and gradually approaches the whirlpool region; at first, the area close to the vortex shows an obvious increase in plastic-litter density; under the centripetal motion the garbage keeps moving toward the center of the vortex; and as time accumulates, the garbage density at the center grows bigger and bigger, until it finally becomes the Pacific garbage island we see today.

Our algorithm shows that, as long as the density can be detected at a number of discrete measuring points in an area, tracking the density changes at those points allows the density over all the waters to be evaluated through our model. This will significantly reduce the workload of a marine expedition team monitoring marine pollution, and will also save costs.

Task 3:

The degradation mechanism of marine plastics. We know that light, mechanical force, heat, oxygen, water, microbes, chemicals, and so on can all cause the degradation of plastics. Mechanistically, the factors responsible for the degradation can be summarized as optical, biological, and chemical.
Introduction to the Mathematical Contest in Modeling / Interdisciplinary Contest in Modeling (MCM/ICM). Compiled by the Office of Innovation Engineering. Issue No. 13 overall, No. 1 of 2011, May 3, 2011.

Our University Wins Awards (Certificates) in the 2011 Mathematical Contest in Modeling (MCM/ICM)

Mathematical modeling contests are an effective means of training undergraduates to analyze and solve complex practical problems, and they play a marked role in cultivating students' practical ability, creativity, team awareness, cooperative spirit, tenacious will, and overall quality.
In recent years, our university has attached special importance to organizing and guiding this contest. University leaders have personally followed and guided the work, while the Office of Academic Affairs, the Innovation Office, the School of Science, and other relevant schools and units have planned scientifically, organized carefully, and cooperated actively in organizing, guiding, publicizing, and mobilizing for the contest. At the same time, we have learned from the advanced experience and practices of universities inside and outside the province, strengthened the training and guidance of participating students, and taken an active part in various modeling contests, continuously raising students' modeling ability.

In 2011, our university for the first time selected two teams to take part in the Mathematical Contest in Modeling. Under the careful guidance of their advisors, the team members overcame many difficulties, persevered, and fought tenaciously together, finally standing out among more than 3,000 teams from around the world, with both teams winning awards. The team of Song Junyi (Automation class 0802, School of Information and Control), Song Yapeng (Computer Science class 0802), and Yao Qing (Environmental Engineering class 0901, School of Environment), advised by Shi Jiarong of the School of Science, won a second prize; the team of Lu Junfan (Civil Engineering class 0908), Sun Hongyi (class 0906), and Liu Min (class 0907) of the School of Civil Engineering, advised by Wang Yuying of the School of Science, won an encouragement award. These are quite excellent results.

Taking these awards as an opportunity, we will adopt effective measures, strengthen training, and continuously expand the influence of mathematical modeling among students, encouraging them to apply, savor, understand, and love mathematics, so that, as students grow rapidly in knowledge, ability, and overall quality, our university's modeling-contest results achieve new breakthroughs.

Attachment: Introduction to the MCM/ICM. Keywords: mathematical modeling; contest; awards; contest organization and guidance. Distribution: the university leaders of Xi'an University of Architecture and Technology, the President's Office, all schools (departments), relevant offices, and the university Teaching Supervision Group. 100 copies printed. Handling office: Office of Innovation Engineering, tel. 82205351.

Attachment: Introduction to the MCM/ICM. The Mathematical Contest in Modeling / Interdisciplinary Contest in Modeling (MCM/ICM) is an international-level contest in the field of mathematics.
MCM/ICM problems by year:
1985: A: Animal populations (harvesting management); B: Managing a strategic reserve
1986: A: Hydrographic (seafloor) data; B: Optimal location of emergency facilities
1987: A: The salt-storage problem (stability of a salt pile); B: Parking-lot arrangement
1988: A: Locating a drug-smuggling ship; B: Optimal loading of railroad flatcars
1989: A: Midge identification: optimal classification and discrimination; B: Aircraft queueing model
1990: A: The distribution of dopamine in the brain; B: Snowplow routing and efficiency
1991: A: Estimating the water flow of a water tower; B: Communication-network costs
1992: A: Power and design of an air-traffic radar system; B: Development of an emergency repair system
1993: A: Optimal composting; B: Optimal operation of a coal loading and unloading yard
1994: A: Insulated-house design; B: Minimum connection time of a computer network
1996: A: Detection of large underwater objects; B: Rapidly selecting contest winners
1997: A: The dinosaur (Velociraptor) predation problem; B: Mixing participants at meetings
1998: A: MRI image processing; B: Grade inflation
1999: A: A small asteroid striking the Earth; B: Lawful capacity of public facilities; C: Determining the substance, location, amount, and time of environmental pollution
2000: A: Air-traffic control; B: Radio-channel assignment; C: The rise and fall of an elephant population
2001: A: Choosing a bicycle wheel; B: Escaping a hurricane's wrath; C: Our waterways, an uncertain future
2002: A: Wind and waterspray; B: Airline overbooking; C: If we scour our land too much, we will lose all kinds of lizards
2003: A: The stunt person; B: Gamma-knife treatment planning; C: Airline baggage screening strategy
2004: A: Are fingerprints unique?; B: A faster QuickPass system; C: Safe or not?
2005: A: Flood planning; B: Tollbooths; C: Nonrenewable Resources
2006: A: Positioning and Moving Sprinkler Systems for Irrigation; B: Wheel Chair Access at Airports; C: Trade-offs in the fight against HIV/AIDS
2007: A: Gerrymandering; B: The Airplane Seating Problem; C: Organ Transplant: The Kidney Exchange Problem
2008: A: Take a Bath; B: Creating Sudoku Puzzles; C: Finding the Good in Health Care Systems
2009: A: Designing a Traffic Circle; B: Energy and the Cell Phone; C: Creating Food Systems: Re-Balancing Human-Influenced Ecosystems
Our University Achieves New Success in the 2013 Mathematical Contest in Modeling

Recently, the results of the 2013 Mathematical Contest in Modeling (MCM) and Interdisciplinary Contest in Modeling (ICM) were announced, and the 10 teams from Nanjing University of Science and Technology performed outstandingly: 4 teams won Meritorious Winner awards and 3 teams won Honorable Mention awards. Our university set new records in this contest in award level, number of awards, and proportion of awards.

The MCM and ICM are international undergraduate mathematical modeling contests jointly organized by the American Mathematical Society, the Society for Industrial and Applied Mathematics, and the National Security Agency. The two contests began in 1985 and 1999 respectively, are held every February over four days, and, after strict rounds of judging, the results are announced at the beginning of April. Each participating team of three students is required to research and model a practical problem and submit its results. This year's contestants came from 14 countries, including the United States, China, the United Kingdom, Canada, and Singapore, with 6,593 teams in total; students from well-known universities such as Harvard, MIT, and, in China, Peking University and Tsinghua University took part in the competition.

Our university's participation was completed with the strong support of the Office of Academic Affairs and the School of Science, and with the training and guidance of teachers from the Department of Applied Mathematics, the Department of Information and Computing Science, and the Department of Statistics and Financial Mathematics of the School of Science. The teams advised by Li Baocheng, Zhang Jun, Xu Yuan, and Zhu Yuanguo each won a Meritorious Winner award; the teams advised by Xu Chungen, Liu Liwei, and Dou Bennian each won an Honorable Mention award.

In recent years, our undergraduates' mathematical modeling results have improved steadily. This result is another breakthrough following the excellent results of 2011 (2 Meritorious Winner and 4 Honorable Mention awards in the MCM) and 2012 (2 first prizes and 5 second prizes in the CUMCM). The list of winners is as follows:
Abstract

This paper presents a case study illustrating how probability distributions, a genetic algorithm, and geographical analysis of serial crime conducted within a geographic information system can assist crime investigation. Techniques are illustrated for predicting the location of future crimes and for determining the possible residence of offenders, based on the geographical pattern of the existing crimes and a quantitative method, PSO. Such methods are relatively easy to implement within GIS given appropriate data, but they rely on many assumptions regarding offenders' behaviour. While some success has been achieved in applying the techniques, we conclude that the methods are essentially theory-less and lack evaluation. Future research into the evaluation of such methods and into the geographic behaviour of serial offenders is required before such methods can be applied to investigations with confidence in their reliability.

1. Introduction

This series of armed robberies occurred in Phoenix, Arizona between 13 September and 5 December 1999 and included 35 robberies of fast-food restaurants, hotels, and retail businesses. The offenders were named the "Supersonics" by the Phoenix Police Department Robbery Detail, as the first two robberies were of Sonic Drive-In restaurants. After the 35th robbery the offenders appear to have desisted from their activity, and at present the case remains unsolved. The MO was for the offenders to target businesses where they could easily gain entry, pull on a ski mask or bandanna, confront employees with a weapon, order them to the ground, empty the cash from a safe or cash register into a bag, and flee on foot, most likely to a vehicle waiting nearby. While it appears that the offenders occasionally worked alone or in pairs, the MO, weapons, and witness descriptions tend to suggest a group of at least three offenders.
The objective of the analysis was to use the geographic distribution of the crimes to predict the location of the next crime within an area small enough for the Robbery Detail to conduct stakeouts and surveillance. After working with a popular crime-analysis manual (Gottleib, Arenberg and Singh, 1994), it was found that the prescribed method produced target areas so large that they were not operationally useful. However, the approach was attractive, as it required only basic information and relied on simple statistical analysis. To identify areas more useful to the Robbery Detail, it was decided to use a similar approach combined with other measurable aspects of the spatial distribution of the crimes. As this was a "live" case, new crimes and information were integrated into the analysis as they came to hand.

2. Assumptions

To modify the existing model, we apply a series of new assumptions so that our rectified model is much more practical:

1. Criminals prefer something about the locations where previous crimes were committed. We suppose the criminals have a greater opportunity to flee if they offend at a site they are familiar with. In addition, criminals probably choose previous crime sites where their potential victims live and work.
2. Offenders regard a previous crime site as safer to return to as time goes by. A site is heavily monitored by police shortly after a crime there, so the criminal runs a risk of arrest at that site; as noted, the police reduce the frequency of checking previous crime sites as time passes.
3. Criminals are likely to choose sites at an optimal distance. This is reasonable: offending far away is probably insecure, costs energy to escape from, and raises the chance of arrest in unfamiliar terrain, while offending very close by increases the probability of being recognized or trapped. We can therefore measure an optimal distance from a series of offenses.
4. Crimes are committed by an individual. We assume all the cases in the model are committed by individuals rather than organized groups, so the criminal is subject to the assumptions above owing to his limited preparation.
5. Criminals' movements are unconstrained. Because of the difficulty of finding real-world distance data, we invoke the "Manhattan assumption": there are enough streets and sidewalks in a sufficiently grid-like pattern that movement along real-world routes is equivalent to straight-line movement in a space discretized into city blocks. It has been demonstrated that, across several types of serial crime, Euclidean and Manhattan distances are essentially interchangeable in predicting anchor points.

3. The prediction of the next crime site

3.1 The measure of the optimal distance

Because the criminal's mental optimal distance depends on, among other things, how careful a person he is, it cannot be a fixed constant, and it changes from moment to moment. It should, however, be reflected in the distances among the previous crime sites. Suppose the coordinates of the $n$ crime sites are $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, and define the distance between the $i$th and $j$th crime sites as $D_{i,j}$.
We first take this distance to be the Euclidean distance

$$D_{i,j} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}.$$

With this, we can measure the distance between the $n$th crime site and each of the first $n-1$ sites. By Assumption 2, the criminal believes that the earlier crime sites have become safer places to offend again, so we define his mental optimal distance by giving the sites weights that grow with how recently the offenses happened:

$$SD = \sum_{i=1}^{n-1} w_i D_{i,n},$$

with $w_1 < w_2 < \cdots < w_{n-1}$ and $\sum_{i=1}^{n-1} w_i = 1$. If the $i$th crime happens at time $t_i$, measured in weeks, we take

$$w_k = \frac{t_k}{\sum_{i=1}^{n-1} t_i}.$$

$SD$ reflects the criminal's mental state to some extent, so we use it to predict the criminal's mental optimal distance in the $(n+1)$th case. Measured from the $n$th crime site, the criminal can use $SD$ to estimate the optimal distance for the next crime, while for the earlier crime sites the optimal distance shrinks as we go back in time. The optimal-security distance attached to the $i$th crime site is therefore

$$SD_i = \frac{t_i}{t_n}\, SD.$$

3.2 The measure of the probability distribution

Given the crime sites and locations, we can tentatively estimate the probability density of future crimes: we add a small normal distribution around every crime scene to produce a probability-density estimate. Each small normal distribution uses the $SD_i$ above as its mean:

$$f(x, y) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(r_i - SD_i)^2}{2\sigma^2}\right),$$

where $r_i$ is the Euclidean distance from the point $(x, y)$ to the $i$th crime site, and $\sigma$ is the standard deviation of the deviation of the criminal's mental optimal distance. It reflects the uncertainty of that deviation, involves the impact of many factors, and cannot be measured directly. Its determination is discussed next.

3.3 The quantization of the standard deviation

The standard deviation is identified according to the following goal: every prediction of the next crime site from the previously committed crime sites should have the highest possible success rate. Such an optimization objective cannot be handled by direct analysis or exhaustive search, so we use a heuristic optimization method, the genetic algorithm.

Figure 1: the distribution of the population of the last generation. According to the figure, the population of the last generation is mostly concentrated near 80, which we take as the standard deviation and substitute into the formula above. With that formula, we can predict the probability density of each zone being the next crime site.

Case analysis: Figure 2 shows the prediction of the 5th crime site from the 4 previous ones; Figures 3 and 4 show the prediction of the 6th crime site from the 5 previous ones. These predictions are relatively accurate and can serve as references for criminal investigation to some extent. However, as the number of such crimes increases, the predicted outputs deviate from the actual sites more and more, as in the prediction of the 23rd crime site from the 22 previous ones (Figure 5).
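To make Sections 3.1-3.2 concrete, here is a minimal sketch (Python with NumPy; the site coordinates and times are invented placeholders, and the spread parameter is a toy value, since the paper's GA result of about 80 is in its own map units):

```python
import numpy as np

# Placeholder crime data: coordinates (x_i, y_i) and times t_i in weeks.
xy = np.array([[1.0, 2.0], [3.0, 1.5], [2.5, 4.0], [4.2, 3.1], [0.8, 3.6]])
t = np.array([1.0, 3.0, 6.0, 9.0, 12.0])
n = len(t)

# Toy spread; the paper's genetic algorithm found about 80 in its map units,
# which do not apply to these synthetic coordinates.
sigma = 1.0

# Recency weights over the first n-1 sites: w_k = t_k / sum(t_1..t_{n-1}),
# so earlier sites get smaller weights (w_1 < ... < w_{n-1}).
w = t[:-1] / t[:-1].sum()

# SD: weighted mean distance from the latest (nth) site to the earlier ones.
d_to_last = np.sqrt(((xy[:-1] - xy[-1]) ** 2).sum(axis=1))
SD = (w * d_to_last).sum()

# Optimal-security distance attached to each site: SD_i = (t_i / t_n) * SD.
SD_i = t / t[-1] * SD

def f(px, py):
    """Estimated probability density that the next crime occurs at (px, py)."""
    r = np.sqrt((xy[:, 0] - px) ** 2 + (xy[:, 1] - py) ** 2)
    g = np.exp(-((r - SD_i) ** 2) / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return g.sum() / n

# Evaluate on a grid and report the most likely location.
gx, gy = np.meshgrid(np.linspace(0, 5, 51), np.linspace(0, 5, 51))
surface = np.vectorize(f)(gx, gy)
i, j = np.unravel_index(surface.argmax(), surface.shape)
print(f"most likely next site near ({gx[i, j]:.1f}, {gy[i, j]:.1f})")
```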
Conclusion of the analysis: it may not be possible to predict the next crime site accurately using Euclidean distance to measure the probability directly, so the actual local conditions should be brought into the analysis. For example, we can weigh the convenience of escape by considering the traffic network comprehensively, such as expressway networks and tunnels; for the concealment of an offense, we should consider the population of the area and the distance from the police department. Thus more weight should be given to commute convenience, concealment, and low population. In addition, as the number of offenses grows, the accuracy of the model may decrease, because a more experienced criminal chooses his next crime sites more randomly.

4. Problems and further improvements

With 23 crimes in the series, the predictions tended to provide large areas that included the target crime but were too large to be useful given the limited resources the police had at their disposal. At this stage, a more detailed look was taken at the directionality of and distances between crimes. No significant trends could be found in the sequential distance between crimes, so an attempt was made to better quantify the relationship between crimes in terms of directionality. The methodology began by calculating the geographic center of the existing crimes; the geographic center is a derived point that minimizes the total distance to the crimes, and it has established applications in crime analysis. Once it was constructed, the angle of each crime from the north point of the geographic center was calculated, from which the change in direction of the sequential crimes could be computed. It was found that the offenders tended to pattern their crimes by switching direction away from the last crime. It appears that the offenders were trying to create a random pattern to avoid detection but unwittingly created a uniform pattern through their choice of locations. This relationship was quantified, and a simple linear regression was used to predict the next direction. The analysis was once again applied to the data. While the area identified was smaller than in previous versions and was prioritized into sub-segments, the problem remained that the predicted areas were still too large to be used as more than a general guide to resource deployment.

A major improvement to the methodology was to include individual targets. By this stage of the series, hotels and auto-parts retailers had become the targets of choice. A geocoded data set became available that allowed hotels and retail outlets to be plotted and compared with the predicted target areas. Ideally, businesses falling within the target areas could be prioritized as more likely targets. However, in some cases the distribution of the likely businesses appeared to contradict the predicted area; for example, few target hotels appeared in the target zone identified by the geographic analysis. In such cases, more reliance was placed upon the location of individual targets.
From this analysis it was possible to identify a prioritized list of individual commercial targets, which was of more use operationally. Maps were also provided to give an indication of target areas; Figure 6 demonstrates a map created using this methodology.

It is apparent from the above discussion that the target areas identified were often too large to be used as more than a general guide by the Robbery Detail. However, by including the individual targets it was possible to restrict the possible target areas to smaller, more useful areas and a few prioritized targets. Such an approach risks being overly restrictive: the purpose of the analysis is not to restrict police operations but to suggest priorities. This problem was partly dealt with by involving investigators in the analysis and presenting the results objectively, so that investigators could make their own judgments about them.

To be more confident in using this kind of analysis, a stronger theoretical background to the methods is required. What has been done here is simply to exploit the spatial relationships in the available information without considering the connection to the actual behaviour of the offenders. For example, what is the reason behind a particular trend observed in the distance between crimes? Why would such a trend be expected between crimes that occur on different days and possibly involve different individuals? While some consideration was given to the reason behind the pattern of directionality, and while it seems reasonable to expect offenders to look for freeway access, such reasoning has tended to follow the analysis rather than substantiate it. Without a theoretical background, the analysis rests only on untested statistical relationships that do not answer the basic question: why this pattern? We therefore next apply a quantitative method with a theoretical background, PSO, to locate the criminal's residence.

5. The prediction of the residence

Particle Swarm Optimization (PSO) is an evolutionary computation method invented by Dr. Eberhart and Dr. Kennedy. It is an iterative optimization tool that grew out of research on the behavior of birds in predation. Initialized with a series of random numbers, PSO finds the optimum through iteration. In our residence-search problem, the criminal's 23 serial crime sites are abstracted into 23 particles without volume or weight, extended into 2-D space. Like a bird, the criminal is presumed to go directly home after committing a crime; so there are 23 "criminals" who commit the crimes at the 23 sites mentioned above and then go home directly. Each criminal is described by a position vector and a velocity vector. All criminals have a fitness decided by the objective function, and each has a velocity that decides the direction and distance of its movement. Every criminal knows the best position it has discovered so far (pbest, the residence known by the individual) and its current position; every criminal also knows the best position found by the whole group (gbest, the residence known by the group), which can be regarded as the experience of the other criminals. The criminals locate the residence using both their own experience and that of the whole group. The PSO computation initializes the 23 criminals, and the particles then pursue the current optimum to search the space; in other words, they find the optimal solution by iteration.
Suppose that in 2-D space the position and velocity of the $i$th particle are $X_i = (x_{i,1}, x_{i,2})$ and $V_i = (v_{i,1}, v_{i,2})$. In every iteration, each particle updates itself by pursuing two best positions: the individual best (pbest) $P_i = (p_{i,1}, p_{i,2})$ found by the particle itself, and the group best (gbest) $P_g$ found by the whole swarm so far. Having found these two optima, each particle updates its velocity and position by

$$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_1\,[p_{i,j} - x_{i,j}(t)] + c_2 r_2\,[p_{g,j} - x_{i,j}(t)],$$
$$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1), \qquad j = 1, 2,$$

where $w$ is the inertia weight, $c_1$ and $c_2$ are positive learning factors, and $r_1$ and $r_2$ are random numbers uniformly distributed between 0 and 1. The learning factors give the particles the ability to learn from themselves and from others; we set both to 2, as is usual in PSO.

The inertia weight $w$ decides how much of the current velocity is inherited; an appropriate choice balances searching and exploring ability. To balance the global search ability and the local refinement ability of the algorithm, we adopt a self-adaptive method, the non-linear dynamic inertia weight coefficient:

$$w = \begin{cases} w_{\min} - \dfrac{(w_{\max} - w_{\min})\,(f - f_{\min})}{f_{avg} - f_{\min}}, & f \le f_{avg}, \\[6pt] w_{\max}, & f > f_{avg}, \end{cases}$$

where $w_{\max}$ and $w_{\min}$ are the maximum and minimum of $w$, $f$ is the particle's current objective value, and $f_{avg}$ and $f_{\min}$ are the average and minimum objective values of all current particles. The inertia weight changes automatically with the objective value, hence the name "self-adaptive": when the objective values become consistent, the inertia weight increases, and when they become sparser, it decreases. Meanwhile, a particle whose objective value is worse than the average gets a smaller inertia weight, which protects the crime site, while a particle whose value is better than the average gets a bigger inertia weight, which moves it nearer to the search zone.

With the PSO using the non-linear dynamic inertia weight coefficient, we minimize

$$\sum_{j=1}^{23} R_j, \qquad R_j = \sqrt{(x - x_j)^2 + (y - y_j)^2},$$

where $(x, y)$ is the candidate residence of the criminal and $R_j$ is its distance to the $j$th crime site. The output is

$$(x, y) = (2.368260870656715,\ 3.031739124610613),$$

and the residence is shown on the map in Figure 7.
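A compact sketch of this residence search follows (Python with NumPy; the crime-site coordinates are random placeholders, the objective is the total-distance function above, and a fixed inertia weight is used for brevity in place of the paper's self-adaptive scheme):

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder crime-site coordinates; the paper uses its 23 mapped sites.
sites = rng.uniform(0, 5, size=(23, 2))

def total_distance(p: np.ndarray) -> float:
    """Objective: sum of Euclidean distances from point p to all crime sites."""
    return np.sqrt(((sites - p) ** 2).sum(axis=1)).sum()

n_particles, n_iter = 23, 200
w, c1, c2 = 0.7, 2.0, 2.0      # fixed inertia weight; c1 = c2 = 2 as in the paper

x = rng.uniform(0, 5, size=(n_particles, 2))   # particle positions
v = rng.uniform(-1, 1, size=(n_particles, 2))  # particle velocities
pbest = x.copy()
pbest_val = np.array([total_distance(p) for p in x])
g = pbest[pbest_val.argmin()].copy()           # global best position

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    vals = np.array([total_distance(p) for p in x])
    better = vals < pbest_val                  # update individual bests
    pbest[better], pbest_val[better] = x[better], vals[better]
    g = pbest[pbest_val.argmin()].copy()       # update global best

print("estimated residence:", np.round(g, 4))
```

The minimizer of the total distance to the crime sites is the geographic center discussed in Section 4, so this search can also be read as a swarm-based way of computing that center.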
6. Conclusion

This paper has presented a case study illustrating how probability distributions and geographical analysis of serial crime can assist crime investigation. Unfortunately, in the Supersonic armed-robbery investigation the areas identified were too large to have been of much use to investigators. Further, because of the number of assumptions applied, the method does not inspire enough confidence to dedicate resources to comparing its results with the enormous amount of suspect data collected on the case. While the predicted target areas tended to be large, the mapping of individual commercial targets appears to offer a significant improvement to the method. However, as they stand, these methods lack a theoretical basis that would allow the results to be judged and applied in investigations. Limitations such as these can be offset to some degree by involving investigators in the analysis. In the end, we used a quantitative method to locate the criminal's residence and thus shrink the identified areas. Given the advantages and drawbacks of the methods above, we suggest combining different methods to help fight crime comprehensively.