PROACTIVE RELIABLE BULK DATA DISSEMINATION IN SENSOR NETWORKS
Package 'hdImpute'                                                August 7, 2023

Type: Package
Title: A Batch Process for High Dimensional Imputation
Version: 0.2.1
BugReports: https:///pdwaggoner/hdImpute/issues
Maintainer: Philip Waggoner <*************************>
Description: A correlation-based batch process for fast, accurate imputation for high dimensional missing data problems via chained random forests. See Waggoner (2023) <doi:10.1007/s00180-023-01325-9> for more on 'hdImpute', Stekhoven and Bühlmann (2012) <doi:10.1093/bioinformatics/btr597> for more on 'missForest', and Mayer (2022) <https:///mayer79/missRanger> for more on 'missRanger'.
License: MIT + file LICENSE
Encoding: UTF-8
Imports: missRanger, plyr, purrr, magrittr, tibble, dplyr, tidyselect, tidyr, cli
Suggests: testthat (>= 3.0.0), knitr, rmarkdown, usethis, missForest, tidyverse
VignetteBuilder: knitr
RoxygenNote: 7.2.3
Config/testthat/edition: 3
URL: https:///pdwaggoner/hdImpute
NeedsCompilation: no
Author: Philip Waggoner [aut, cre]
Repository: CRAN
Date/Publication: 2023-08-07 21:20:02 UTC

R topics documented: check_feature_na, check_row_na, feature_cor, flatten_mat, hdImpute, impute_batches, mad

check_feature_na        Find features with (specified amount of) missingness

Description:
Find features with (specified amount of) missingness.

Usage:
check_feature_na(data, threshold)

Arguments:
data        A data frame or tibble.
threshold   Missingness threshold in a given column/feature as a proportion bounded between 0 and 1. Default set to a sensitive level at 1e-04.

Value:
A vector of column/feature names that contain missingness greater than threshold.

Examples:
## Not run:
check_feature_na(data = any_data_frame, threshold = 1e-04)
## End(Not run)

check_row_na            Find number of and which rows contain any missingness

Description:
Find number of and which rows contain any missingness.

Usage:
check_row_na(data, which)

Arguments:
data    A data frame or tibble.
which   Logical. Should a list be returned with the row numbers corresponding to each row with missingness? Default set to FALSE.

Value:
Either an integer value corresponding to the number of rows in data with any missingness (if which = FALSE), or a tibble containing: 1) the number of rows in data with any missingness, and 2) a list of which rows/row numbers contain missingness (if which = TRUE).

Examples:
## Not run:
check_row_na(data = any_data_frame, which = FALSE)
## End(Not run)

feature_cor             High dimensional imputation via batch processed chained random forests: Build correlation matrix

Description:
High dimensional imputation via batch processed chained random forests: build the correlation matrix.

Usage:
feature_cor(data, return_cor)

Arguments:
data        A data frame or tibble.
return_cor  Logical. Should the correlation matrix be printed? Default set to FALSE.

Value:
A cross-feature correlation matrix.

References:
Waggoner, P. D. (2023). A batch process for high dimensional imputation. Computational Statistics, 1-22. doi:10.1007/s00180-023-01325-9
van Buuren, S., & Groothuis-Oudshoorn, K. (2011). "mice: Multivariate Imputation by Chained Equations in R." Journal of Statistical Software, 45(3), 1-67. doi:10.18637/jss.v045.i03

Examples:
## Not run:
feature_cor(data = data, return_cor = FALSE)
## End(Not run)

flatten_mat             Flatten and arrange cor matrix to be df

Description:
Flatten and arrange the correlation matrix to be a data frame.

Usage:
flatten_mat(cor_mat, return_mat)

Arguments:
cor_mat     A correlation matrix output from running feature_cor().
return_mat  Logical. Should the flattened matrix be printed? Default set to FALSE.

Value:
A vector of correlation-based ranked features.

Examples:
## Not run:
flatten_mat(cor_mat = cor_mat, return_mat = FALSE)
## End(Not run)

hdImpute                Complete hdImpute process: correlation matrix, flatten, rank, create batches, impute, join

Description:
Complete hdImpute process: correlation matrix, flatten, rank, create batches, impute, join.

Usage:
hdImpute(data, batch, pmm_k, n_trees, seed, save)

Arguments:
data     Original data frame or tibble (with missing values).
batch    Numeric. Batch size.
pmm_k    Integer. Number of neighbors considered in imputation. Default set at 5.
n_trees  Integer. Number of trees used in imputation. Default set at 15.
seed     Integer. Seed to be set for reproducibility.
save     Should the list of individual imputed batches be saved as an .rds file to the working directory? Default set to FALSE.

Details:
Step 1. Group data by dividing the row_number() by batch size (batch, number of batches set by user) using integer division. Step 2. Pass through group_split() to return a list. Step 3. Impute each batch individually and time it. Step 4. Generate the completed (unlisted/joined) imputed data frame.

Value:
A completed, imputed data set.

References:
Waggoner, P. D. (2023). A batch process for high dimensional imputation. Computational Statistics, 1-22. doi:10.1007/s00180-023-01325-9
Stekhoven, D. J., & Bühlmann, P. (2012). MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1), 112-118. doi:10.1093/bioinformatics/btr597

Examples:
## Not run:
hdImpute(data = data, batch = 2, pmm_k = 5, n_trees = 15, seed = 123, save = FALSE)
## End(Not run)

impute_batches          Impute batches and return completed data frame

Description:
Impute batches and return completed data frame.

Usage:
impute_batches(data, features, batch, pmm_k, n_trees, seed, save)

Arguments:
data      Original data frame or tibble (with missing values).
features  Correlation-based vector of ranked features output from running flatten_mat().
batch     Numeric. Batch size.
pmm_k     Integer. Number of neighbors considered in imputation. Default at 5.
n_trees   Integer. Number of trees used in imputation. Default at 15.
seed      Integer. Seed to be set for reproducibility.
save      Should the list of individual imputed batches be saved as an .rds file to the working directory? Default set to FALSE.

Details:
Step 1. Group data by dividing the row_number() by batch size (batch, number of batches set by user) using integer division. Step 2. Pass through group_split() to return a list. Step 3. Impute each batch individually and time it. Step 4. Generate the completed (unlisted/joined) imputed data frame.

Value:
A completed, imputed data set.

References:
Waggoner, P. D. (2023). A batch process for high dimensional imputation. Computational Statistics, 1-22. doi:10.1007/s00180-023-01325-9
Stekhoven, D. J., & Bühlmann, P. (2012). MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1), 112-118. doi:10.1093/bioinformatics/btr597

Examples:
## Not run:
impute_batches(data = data, features = flat_mat, batch = 2, pmm_k = 5, n_trees = 15, seed = 123, save = FALSE)
## End(Not run)

mad                     Compute variable-wise mean absolute differences (MAD) between original and imputed dataframes

Description:
Compute variable-wise mean absolute differences (MAD) between original and imputed dataframes.

Usage:
mad(original, imputed, round)

Arguments:
original  A data frame or tibble with original values.
imputed   A data frame or tibble that has been imputed/completed.
round     Integer. Number of places to round MAD scores. Default set to 3.

Value:
'mad_scores' as a 'p' x 2 tibble, with one row for each variable in original, from 1 to 'p'. Two columns: the first is the variable names ('var') and the second is the associated MAD score ('mad') as a percentage for each variable.

Examples:
## Not run:
mad(original = original_data, imputed = imputed_data, round = 3)
## End(Not run)

Index: check_feature_na, check_row_na, feature_cor, flatten_mat, hdImpute, impute_batches, mad
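The batch process described in the Details sections above (group rows by integer division of the row number by batch size, split into a list, impute each batch individually, then join) can be sketched in plain Python. This is an illustrative toy, not the package's implementation: a list of dicts stands in for a tibble, and column-mean fill stands in for missRanger's chained random forests with predictive mean matching.

```python
def impute_batches(rows, batch_size):
    """Toy version of the batch process in the Details sections.
    Steps 1-2: group rows by integer division of row number by batch size.
    Step 3: impute each batch individually (column-mean fill here, as a
    stand-in for chained random forests). Step 4: join the completed batches.
    None marks a missing value."""
    batches = {}
    for i, row in enumerate(rows):
        batches.setdefault(i // batch_size, []).append(row)
    completed = []
    for key in sorted(batches):
        batch = batches[key]
        cols = list(batch[0].keys())
        means = {}
        for c in cols:
            observed = [r[c] for r in batch if r[c] is not None]
            means[c] = sum(observed) / len(observed) if observed else 0.0
        for r in batch:
            completed.append({c: means[c] if r[c] is None else r[c] for c in cols})
    return completed


def mad(original, imputed, ndigits=3):
    """Variable-wise mean absolute difference between a complete reference
    data set and an imputed one, mirroring the mad() helper described above."""
    cols = original[0].keys()
    n = len(original)
    return {c: round(sum(abs(o[c] - m[c]) for o, m in zip(original, imputed)) / n,
                     ndigits)
            for c in cols}
```

With batch_size = 2, a four-row data set is imputed as two independent batches; each batch only ever sees its own values, which is what keeps the real package's memory footprint manageable on high dimensional data.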
Patent title: Agents and methods related to reducing resistance to apoptosis-inducing death receptor agonists
Patent type: Invention patent
Inventors: T. Zhou, R. P. Kimberly
Application No.: CN200680010114.8
Filing date: 2006-01-31
Publication No.: CN101495646A
Publication date: 2009-07-29
Patent content provided by the Intellectual Property Publishing House
Abstract: Provided herein are methods of reversing or preventing the resistance of target cells to death receptor agonists. Also provided are methods of screening for biomarkers of resistance to death receptor agonists, and methods of monitoring resistance to death receptor agonists. Further provided are methods of selectively inducing apoptosis in target cells and of treating a subject with cancer, an autoimmune disease, or an inflammatory disease, comprising administering a composition provided herein. Compositions comprising agents that modulate CARD-containing proteins are also provided.
Applicant: The UAB Research Foundation
Address: Alabama, USA
Nationality: US
Agent: China Patent Agent (H.K.) Ltd.
AccuMelt™ HRM SuperMix

Cat. No.     Size                                      Storage
95103-250    250 x 20-µL reactions (2 x 1.25 mL)       Store at -25°C to -15°C,
95103-012    1250 x 20-µL reactions (10 x 1.25 mL)     protected from light

Description
AccuMelt HRM SuperMix is a 2X concentrated, ready-to-use reaction cocktail for the detection of genetic variations using high resolution melting (HRM) analysis. It includes all required components except primers and DNA template. HRM is a closed-tube, rapid and cost-effective procedure for characterizing sequence differences immediately following PCR amplification. It is based on the melting (dissociation) behavior of a PCR product as it transitions from double-stranded to single-stranded DNA in the presence of a fluorescent dsDNA-binding dye. The melting properties of a given PCR product depend on its base composition, length, and strand base-pairing. HRM analysis tools exploit differences in melt curve shapes and DNA melting temperature (Tm) to discriminate sequence differences between samples.

AccuMelt HRM SuperMix contains the green-fluorescent dye SYTO® 9 in a stabilized master mix that eliminates the need for time-consuming optimization of critical PCR components. This dye provides a strong fluorescent signal upon binding to dsDNA at saturating concentrations without inhibiting PCR. The unique chemical composition of this SuperMix further enhances and maximizes the impact of sequence variations on melt curve behavior. This facilitates discrimination of all sequence variations, including difficult-to-resolve base-neutral transversions such as A>T class 4 SNPs. The dNTP mix in AccuMelt HRM SuperMix includes an optimized blend of dTTP and dUTP. This feature supports the optional use of uracil-DNA glycosylase (UNG) to prevent amplification of carry-over contamination, while providing high product yield and reliable PCR performance. Highly specific amplification with high product yield from complex genomic DNA template is critical for successful HRM studies.
A key component of this SuperMix is AccuStart™ Taq DNA polymerase, which contains monoclonal antibodies that bind to the polymerase and keep it inactive prior to the initial PCR denaturation step. Upon heat activation at 95°C, the antibodies denature irreversibly, releasing fully active, unmodified Taq DNA polymerase. This enables specific and efficient primer extension with the convenience of room-temperature reaction assembly. AccuMelt HRM SuperMix can be used with all currently available HRM analysis systems. For HRM applications and more detailed product information, please visit our web site.

Components
AccuMelt HRM SuperMix (2X): 2X reaction buffer containing optimized concentrations of MgCl2, dNTPs (including dUTP), AccuStart Taq DNA Polymerase, SYTO 9 green-fluorescent dye, and stabilizers. [free Mg++] = 0.8 mM at 1X final concentration.

Storage and Stability
Store components in a constant-temperature freezer at -25°C to -15°C, protected from light, upon receipt. For the lot-specific expiry date, refer to the package label, Certificate of Analysis or Product Specification Form.

Guidelines for PCR amplification and HRM analysis
The design of highly specific primers is the single most important parameter for successful PCR amplification and HRM analysis. The use of computer-aided primer design programs is encouraged in order to minimize the potential for internal secondary structure and complementation at the 3'-ends within each primer and the primer pair. Primer Tm should be between 56 and 63°C, and the Tm difference between the forward and reverse primers should be less than 2°C. Amplicon size should be less than 250 bp. Smaller amplicons (60 to 100 bp) generally facilitate HRM discrimination of homozygote samples. We recommend designing and evaluating multiple primer set designs for any given application. Optimal primer concentration may vary between 100 and 500 nM.
A final concentration of 300 nM for each primer is effective for most applications. Some primer set designs may require asymmetric primer concentrations.

Always include a negative or no-template control to evaluate the specificity of a given primer set and amplification protocol. Sequence specificity of the PCR should be confirmed by a method other than generation of a single melt peak, such as confirmation of PCR product size, a diagnostic restriction endonuclease fragment pattern, or sequencing.

Preparation of a reaction cocktail is recommended to reduce pipetting errors and obtain reproducible HRM results. Assemble the reaction cocktail with all required components, except template DNA, and dispense equal aliquots into each reaction tube. Add DNA template as a final step. A minimum of 3 technical replicates for each DNA sample is recommended. Include appropriate positive controls for each sequence variant.

Use approximately 10,000 copies of template DNA. A suggested input quantity for human genomic DNA is 10 to 30 ng. After sealing each reaction, vortex gently to mix the contents. Centrifuge briefly to collect components at the bottom of the reaction tube.

PCR amplification can be carried out in a conventional or real-time thermal cycler. Monitoring the reaction in real time allows one to assess the quality and PCR performance of a sample before HRM. All samples should produce comparable Cqs and fluorescence signal. Samples with delayed Cq (>30) or aberrant fluorescence signal should be excluded from the HRM analysis. Optimal cycling conditions will depend on the properties of your primers. Hold assembled reactions on ice, protected from light, if not proceeding immediately to PCR.

Reaction Assembly
Component                     Volume for 20-µL rxn.   Final Concentration
AccuMelt HRM SuperMix (2X)    10.0 µL                 1X
Forward primer                Variable                100 - 500 nM
Reverse primer                Variable                100 - 500 nM
Nuclease-free water           Variable
DNA Template                  2-5 µL                  ~10,000 copies
Final Volume                  20 µL

PCR Cycling Protocol
Step                           2-Step Cycling Protocol   3-Step Cycling Protocol
Initial Denaturation           95°C, 5 min*              95°C, 5 min*
PCR cycling (40 to 45 cycles):
  Denaturation                 95°C, 5 to 10 s           95°C, 5 to 10 s
  Annealing                    60°C, 30 s†               55 to 65°C, 15 s
  Extension                    -                         70°C, 10 to 30 s†
HRM analysis§                  Consult the instructions for your instrument

* Full activation of AccuStart Taq DNA polymerase occurs within 30 s at 95°C; however, optimal initial denaturation time is template dependent and will affect PCR efficiency and sensitivity. Amplification of genomic DNA or supercoiled DNA targets may require 5 to 10 min at 95°C to fully denature the template.
† If monitoring PCR in real time, collect and analyze kinetic PCR data at the end of the extension step. Extension time depends on amplicon length and the minimal data collection time requirement for your qPCR instrument. Some primer sets may require a 3-step cycling protocol for optimal performance. Optimal annealing temperature and time may need to be determined empirically for any given primer set.
§ High Resolution Melting analysis should be carried out immediately following PCR amplification. Please consult the instructions for your HRM instrument for procedural details. If not proceeding immediately to HRM, store plates at +4°C protected from light. Mix and centrifuge samples immediately before HRM.

Quality Control
Kit components are free of contaminating DNase and RNase.
AccuMelt HRM SuperMix is functionally tested to amplify a single-copy gene in human genomic DNA and resolve homozygote samples for class 4 (A>T) and class 3 (C>G) SNPs in a model test system.

Limited Label Licenses
Use of this product signifies the agreement of any purchaser or user of the product to the following terms:
1. The product may be used solely in accordance with the protocols provided with the product and this manual, and for use with components contained in the kit only. QIAGEN Beverly, Inc. grants no license under any of its intellectual property to use or incorporate the enclosed components of this kit with any components not included within this kit, except as described in the protocols provided with the product, this manual, and additional protocols available on our web site. Some of these additional protocols have been provided by Quantabio product users. These protocols have not been thoroughly tested or optimized by QIAGEN Beverly, Inc. QIAGEN Beverly, Inc. neither guarantees them nor warrants that they do not infringe the rights of third parties.
2. Other than expressly stated licenses, QIAGEN Beverly, Inc. makes no warranty that this kit and/or its use(s) do not infringe the rights of third parties.
3. This kit and its components are licensed for one-time use and may not be reused, refurbished, or resold.
4. QIAGEN Beverly, Inc. specifically disclaims any other licenses, expressed or implied, other than those expressly stated.
5. The purchaser and user of the kit agree not to take or permit anyone else to take any steps that could lead to or facilitate any acts prohibited above. QIAGEN Beverly, Inc. may enforce the prohibitions of this Limited License Agreement in any Court, and shall recover all its investigative and Court costs, including attorney fees, in any action to enforce this Limited License Agreement or any of its intellectual property rights relating to the kit and/or its components.

The use of this product is covered by at least one claim of U.S. Patent No. 7,687,247 owned by Life Technologies Corporation. The purchase of this product conveys to the buyer the non-transferable right to use the purchased amount of the product and components of the product in research conducted by the buyer (whether the buyer is an academic or for-profit entity). The buyer cannot sell or otherwise transfer (a) this product, (b) its components, or (c) materials made by the employment of this product or its components to a third party, or otherwise use this product or its components or materials made by the employment of this product or its components for Commercial Purposes. Commercial Purposes means any activity for which a party receives or is due to receive consideration, and may include, but is not limited to: (1) use of the product or its components in manufacturing; (2) use of the product or its components to provide a service, information, or data; (3) use of the product or its components for therapeutic, diagnostic or prophylactic purposes; or (4) resale of the product or its components, whether or not such product or its components are resold for use in research. The buyer cannot use this product, or its components, or materials made using this product or its components, for therapeutic, diagnostic or prophylactic purposes. Further information on purchasing licenses under the above patents may be obtained by contacting the Licensing Department, Life Technologies Corporation, 5791 Van Allen Way, Carlsbad, CA 92008. Email: *************************

©2018 QIAGEN Beverly Inc., 100 Cummings Center Suite 407J, Beverly, MA 01915. Quantabio brand products are manufactured by QIAGEN, Beverly Inc. Intended for molecular biology applications. This product is not intended for the diagnosis, prevention or treatment of a disease. AccuMelt and AccuStart are trademarks of QIAGEN Beverly, Inc. SYTO is a registered trademark of Life Technologies Corporation (Molecular Probes Labeling and Detection Technologies).
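The reaction assembly described in this datasheet is straightforward C1V1 = C2V2 arithmetic. A minimal Python sketch of a volume calculator is shown below; only the 2X SuperMix, the 20 µL reaction volume, the 300 nM default primer concentration, and the 2-5 µL template volume come from the datasheet. The 10 µM primer working stock and the 10% master-mix overage are hypothetical assumptions for illustration.

```python
def hrm_reaction_volumes(n_reactions, primer_stock_uM=10.0,
                         primer_final_nM=300.0, template_uL=2.0,
                         rxn_uL=20.0, overage=1.1):
    """Per-well volumes plus a scaled master-mix table for HRM reactions.
    primer_stock_uM is a hypothetical working-stock concentration (not a
    datasheet value). Template is added separately to each well, so it is
    excluded from the master mix."""
    supermix = rxn_uL / 2.0  # 2X mix contributes half the final volume
    primer = primer_final_nM / (primer_stock_uM * 1000.0) * rxn_uL  # C1V1 = C2V2
    water = rxn_uL - supermix - 2.0 * primer - template_uL
    per_rxn = {"supermix_uL": supermix, "fwd_primer_uL": primer,
               "rev_primer_uL": primer, "water_uL": water,
               "template_uL": template_uL}
    scale = n_reactions * overage  # assemble extra to cover pipetting loss
    master_mix = {k: round(v * scale, 2)
                  for k, v in per_rxn.items() if k != "template_uL"}
    return per_rxn, master_mix
```

Under these assumptions, each 20 µL well takes 10 µL of 2X SuperMix, 0.6 µL of each 10 µM primer (300 nM final), 2 µL of template, and water to volume.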
particular stage collection plate, whilst smaller particles with insufficient inertia will remain entrained in the air stream and pass to the next impaction stage. By analysing the amount of active drug deposited on the various stages, it is then possible to calculate the Fine Particle Dose (FPD) and Fine Particle Fraction (FPF) and, following further manipulation, the Mass Median Aerodynamic Diameter (MMAD) and Geometric Standard Deviation (GSD) of the active drug particles collected.

IMPACTOR USE (DRY POWDER INHALERS)
The same impactor can be used for determining the particle size of Dry Powder Inhalers (DPIs). In this instance, however, a preseparator is interposed between the induction port and stage 0 of the impactor in order to collect the large mass of non-inhalable powder boluses typically emitted from a DPI prior to their entry into the impactor. In the case of DPIs, a number of additional factors must be taken into account when testing:
• The pressure drop generated by the air drawn through the inhaler during inspiration
• The appropriate flow rate, Q, to give a pressure drop of 4 kPa
• The duration of simulated inspiration to give a volume of 4 litres
• Flow rate stability in terms of critical (sonic) flow
These factors require the use of the "General Control Equipment" for DPIs specified in USP chapter <601> and the "Experimental Set Up" for testing DPIs in Ph.Eur. 2.9.18, which take all of these factors into account. These specifications form the basis of the Critical Flow Controllers (see Page 80), which incorporate all of the equipment required into a single integrated system.

[Figures: Schematic of the ACI for DPIs, complete with Preseparator, Critical Flow Controller and Pump; Andersen Cascade Impactor for DPIs (with Preseparator)]

In this version, stages 0, 6 and 7 are removed and replaced with three additional stages, -0, -1 and -2. Changes are also made to the configuration of the collection plates (with and without centre holes). This results in a set of cut-points as per the table below.

QUALITY
A number of papers published in the late 1990s highlighted concerns relating to the manufacture and performance of the ACI manufactured by Graseby-Andersen between 1992 and 1998. These focused on the choice of material used in their design, their construction, ease of use, accuracy, calibration and the ability to suitably qualify the impactors prior to use. Because of these criticisms, Copley commenced manufacturing the ACI using the latest state-of-the-art production techniques. These techniques ensure that 100% of the jets of every stage of every Copley impactor conform to the published critical dimensions for the ACI stated in USP Chapter <601> and Ph.Eur. Chapter 2.9.18. The validity of this data is guaranteed by dimensional verification using the very latest vision inspection technology, having a demonstrated optical reproducibility of 1 µm.

MATERIALS OF CONSTRUCTION
The ACI was originally designed for environmental air sampling and is traditionally constructed from aluminium.
However, its adoption by the pharmaceutical industry has placed far harsher demands on the material because of the use of organic solvents in the drug recovery process. Recent advances in automated, high-precision machining techniques now mean that the ACI can be manufactured from 316 stainless steel (the pharmaceutical industry's preferred material) and also titanium. The main advantage of 316 stainless steel is its superior corrosion resistance and durability, meaning that 316 stainless steel impactors manufactured by Copley are not only very competitively priced but also highly cost effective, helping to maintain accuracy and extend impactor life by reducing mechanical and chemical wear. Electrically conductive, stainless steel can also help reduce the unwanted effects of electrostatics in the impactor. Where the weight of 316 stainless steel is a concern, Copley can also offer titanium, which has the durability of 316 stainless steel but with a 40% reduction in weight. Copley continues to offer aluminium ACIs for those users who prefer a lower-cost option, or for those cases where their methods are such that corrosion resistance and durability are not an issue.

Leak-free inter-stage sealing is achieved through the use of high-quality, FDA-approved silicone rubber O-rings. Preseparators feature a one-piece seamless construction and, together with the induction ports, come with mensuration certificates as standard. All collection plates are manufactured from 316 stainless steel. They are individually inspected for surface roughness and laser etched on the underside with a batch number for traceability. Also available as options are a one-piece 316 stainless steel induction port and specially modified 'O-ring free' 316 stainless steel inlet cone and preseparator lids for accepting the NGI-style induction port.

EASE OF USE
The "Quick Clamp" is an optional accessory for use with the ACI which can also be retrofitted to existing impactors. Constructed from stainless steel, the "Quick Clamp" provides a quick and easy means of assembling, clamping and dis-assembling all or part of the impactor stack (for example, less stages 6 and 7) during routine operation. Once the assembled stack is in position, the clamping action is achieved very simply by turning a small knob through 90 degrees.

[Figure: ACI System for testing DPIs]

Andersen Cascade Impactor (ACI) - Standard 28.3 L/min Configuration
Stage   Number of Nozzles   Ph.Eur. Nozzle Diameter (mm)
0       96                  2.55 +/- 0.025
1       96                  1.89 +/- 0.025
2       400                 0.914 +/- 0.0127*
3       400                 0.711 +/- 0.0127*
4       400                 0.533 +/- 0.0127*
5       400                 0.343 +/- 0.0127*
6       400                 0.254 +/- 0.0127*
* Rounded to 0.013 in the case of USP

All ACIs (including induction ports and preseparators) manufactured by Copley are checked at every stage of manufacture using the very latest metrology equipment and are certified prior to release. ACIs manufactured by Copley are:
• Available in aluminium, 316 stainless steel and titanium
• Capable of operation at 28.3, 60 or 90 L/min
• Manufactured to USP and Ph.Eur. specifications
• Supplied with a full stage mensuration certificate, certificate of conformity to USP/Ph.Eur. and leak test certificate

ANCILLARIES
The following ancillaries are required in addition to the ACI to complete a fully operating test system for determining the APSD of MDIs:
• Mouthpiece Adapter (see Page 90)
• Induction Port (see Page 49)
• Vacuum Pump (see Page 91)
• Flow Meter (see Page 88)
• Data Analysis Software (see Page 94)
Additionally, to test DPIs:
• Preseparator (see Page 49)
• Critical Flow Controller (see Page 80)
Options:
• Automation (see Page 113)

[Product images: ACI Carrying / Wash Rack; Andersen Cascade Impactor (aluminium, 316 stainless steel and titanium); modified 28.3 and 60 L/min Preseparator Lids (Cat. No. 8421/8422) and Inlet Cone (Cat. No. 8366) for use with the NGI Induction Port (Cat. No. 5203); Quick Clamp; ACI Collection Plate Rack; ACI Collection Plates (featuring batch numbering); Preseparator; 60 L/min Conversion Kit and Collection Plates]
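The stage-deposition arithmetic mentioned at the start of this section (cumulative drug mass per stage yielding MMAD and GSD) can be illustrated with a short Python sketch. This is a simplified illustration, not a pharmacopoeial method: it linearly interpolates the cumulative mass-undersize curve against log diameter (compendial practice uses a probit/log-normal fit), and the cut-off diameters and masses in the example are invented numbers, not ACI calibration values.

```python
import math

def mmad_gsd(cutoffs_um, masses):
    """cutoffs_um: stage cut-off diameters in micrometres, largest first (length n).
    masses: drug mass recovered per stage in the same order, plus one final
    entry for the backup filter (length n + 1).
    Returns (MMAD, GSD) via linear interpolation on a log-diameter scale."""
    total = sum(masses)
    # % of total mass in particles smaller than each cut-off: everything
    # collected below stage i passed through stage i's cut-off diameter.
    cum_pct = [100.0 * sum(masses[i + 1:]) / total for i in range(len(cutoffs_um))]

    def d_at(pct):
        pts = sorted(zip(cum_pct, cutoffs_um))  # ascending % undersize
        for (p1, d1), (p2, d2) in zip(pts, pts[1:]):
            if p1 <= pct <= p2:
                if p2 == p1:
                    return d1
                f = (pct - p1) / (p2 - p1)
                return math.exp(math.log(d1) + f * (math.log(d2) - math.log(d1)))
        raise ValueError("percentile outside the measured range")

    mmad = d_at(50.0)                             # 50% of mass below this diameter
    gsd = math.sqrt(d_at(84.13) / d_at(15.87))    # GSD = sqrt(d84 / d16)
    return mmad, gsd
```

For example, with cut-offs of [5.8, 4.7, 3.3, 2.1, 1.1] µm and stage masses of [5, 10, 20, 30, 20] plus 15 on the backup filter, the 50% undersize point falls between the 3.3 µm and 2.1 µm cut-offs, giving an MMAD of roughly 2.6 µm.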
2025 National College English Test Band 4 (CET-4) Mock Examination Paper with Answers and Guidance

I. Writing (30 points)

Part A (10 points)
Directions: For this part, you are allowed 30 minutes to write an essay on the topic "The Impact of Technology on Education". You should write at least 120 words and base your essay on the outline given below:
1. Briefly describe the role of technology in modern education.
2. Discuss the positive effects of technology on education.
3. Present some challenges faced by technology in education.
4. Give your own opinion on how to effectively integrate technology into education.

Example:
The Impact of Technology on Education
In the 21st century, technology has become an indispensable part of our lives, and its influence on education is no exception. Technology has transformed the way we learn and teach, bringing both benefits and challenges.
Firstly, technology has significantly enhanced the role of education. With the advent of the internet, students can access a vast amount of information from all over the world, which broadens their horizons and deepens their understanding of various subjects. Moreover, educational technology tools, such as online learning platforms, virtual classrooms, and interactive software, have made learning more engaging and personalized.
The positive effects of technology on education are numerous. For one, it allows for flexibility in learning, as students can study at their own pace and schedule. Additionally, technology can help students with special needs, such as those with disabilities, by providing customized learning materials and resources.
However, technology in education also poses challenges. One major concern is the digital divide, where students from low-income families may not have access to the necessary technology.
Another challenge is the potential for technology to distract students from their studies, leading to decreased focus and productivity.
In my opinion, to effectively integrate technology into education, schools should ensure that all students have equal access to technology resources. Moreover, teachers should be trained to use technology appropriately to enhance learning outcomes. Additionally, parents and students should be educated on the responsible use of technology to avoid its negative consequences.

Part B (20 points)
Directions: For this part, you are allowed 30 minutes to write a letter. Suppose you are Zhang Wei, a student of English at a university. You have just won a scholarship to study in the UK for one year. Write a letter to your friend Li Hua, who is planning to apply for the same scholarship. In your letter, you should:
1. Congratulate Li Hua on his success in the application.
2. Share your experiences and advice for applying for the scholarship.
3. Express your hopes for Li Hua's success in the future.

Example:
Dear Li Hua,
I hope this letter finds you well. I am writing to share some exciting news with you. I have just won a scholarship to study in the UK for one year, and I couldn't be more thrilled!
I want to start by congratulating you on your success in the application process. It's fantastic to see that you have achieved such a commendable goal. I am sure that your hard work and dedication have paid off.
Now, I would like to share some of my experiences and advice for applying for the scholarship. Firstly, it's essential to thoroughly research the scholarship program and ensure that your application meets all the requirements. Secondly, make sure to highlight your achievements, skills, and experiences that are relevant to the scholarship. Thirdly, be prepared for the interview process, as it is often a crucial step in securing the scholarship.
I am confident that you will do exceptionally well in your application. Your passion for learning and your determination to excel make you a perfect candidate for this opportunity. I hope that you will follow in my footsteps and achieve great success.
Lastly, I wish you all the best in your future endeavors. I am looking forward to hearing about your success story soon.
Best regards,
Zhang Wei

II. Listening Comprehension - Short News Items (multiple choice, 7 points)

Question 1
News content:
A new study conducted by the National Institute of Health has found that regular exercise can significantly improve the cognitive function of elderly individuals. The study involved 1,500 participants aged 60 or over, who were divided into two groups. The first group was asked to engage in at least 30 minutes of moderate-intensity aerobic exercise, such as walking or cycling, three times a week. The second group was asked to maintain their current lifestyle with no additional exercise. After one year, the study found that the group participating in regular exercise showed a 30% improvement in their cognitive scores, compared to the group that did not exercise. The researcher, Dr. John Smith, explained that the improvement was particularly noticeable in areas such as memory and problem-solving skills. He added that the benefits were consistent regardless of the type of exercise performed, as long as the participants adhered to a regular routine.

Questions:
1. What was the main finding of the study conducted by the National Institute of Health?
A) Regular exercise can improve the cognitive function of elderly individuals.
B) Walking and cycling have different effects on cognitive function.
C) The benefits of regular exercise are only seen in people under 60.
Answer: 1. A) Regular exercise can improve the cognitive function of elderly individuals.

Question 2
News content:
The popular cartoon characters Tom and Jerry might soon become major players in the movie industry. According to a recent report, a new live-action film adaptation of the classic cartoon series is in the works.
The movie is expected to be a blend of animated and live-action sequences, with well-known actors set to voice the iconic characters. The producers announced that they have s ecured a major deal with a top Hollywood studio to finance the film’s production. The film is scheduled for release in the fall of 2023.Questions:1、Who will voice the iconic characters in the upcoming live-action film adaptation of Tom and Jerry?A) Unknown actorsB) Well-known actorsC) Famous singersD) Rising stars2、What will the new live-action Tom and Jerry film be a blend of?A) Live-action and animated sequencesB) entirely live-action sequencesC) entirely animated sequencesD) live-action and silent sequences3、When is the movie set for release?A) winter of 2023B) summer of 2023C) fall of 2023D) spring of 2024Answers:1.B2.A3.C三、听力理解-长对话(选择题,共8分)第一题听力原文:M: Hi, Linda, how was your vacation in Beijing?W: It was fantastic! I visited the Forbidden City, the Great Wall, and the Summer Palace. The architecture was amazing.M: Really? I’ve heard the Great Wall is a must-see. Did you go there?W: Yes, I did. It was quite an experience. The wall is so long and the scenery along the way is stunning.M: Did you take any photos?W: Of course. I took a lot of photos, but the best one was the view of the wall from a distance.M: That sounds great. I hope to visit Beijing one day. It’s such a historic city.W: You sh ould definitely go. 
It's a place you won't forget.

Questions:
1. What is the main topic of the conversation?
A) The woman's vacation in Beijing
B) The woman's favorite place in Beijing
C) The man's plan to visit Beijing
D) The architecture of Beijing
2. Which place did the woman visit first during her vacation?
A) The Forbidden City
B) The Great Wall
C) The Summer Palace
D) The man's house
3. How did the woman feel about the Great Wall?
A) She was bored
B) She was disappointed
C) She was amazed
D) She was afraid
4. What does the woman suggest about the man's plan to visit Beijing?
A) He should wait until next year
B) He should bring a camera
C) He should go on a guided tour
D) He should not expect it to be as memorable as the woman's trip

Answers:
1. A  2. A  3. C  4. D

Item Two

Directions: In this section, you will hear six dialogues. Each dialogue will be spoken only once. After each dialogue, you will be asked a question about what was said. The dialogues and questions will be spoken two times. Choose the best answer from the four choices marked A, B, C and D.

Dialogue
Woman: Hi, Tom. Do you like the new restaurant we went to last night?
Man: Yes, I do. The food was great and the atmosphere was perfect.
Woman: Did you see the girl at the corner table with long curly hair?
Man: Yes, I did. She was very attractive, wasn't she?
Woman: Yes, what a nice dress she was wearing!
Man: And I think her date was a bit rough around the edges.
Woman: Poor guy. I heard he works in IT, but he seems to have a rough disposition.
Man: Hey, what time is it? I have to catch the last train back to our college.
Woman: It's a quarter to ten.
We have plenty of time, don't worry.

Questions:
1. What activity were they discussing?
A. A new store opening in the area.
B. A movie they watched together.
C. A meal they had at a restaurant.
D. A book they read recently.
2. What can we learn about the girl from the dialogue?
A. She came with a friend who had a difficult personality.
B. She arrived late and missed the train.
C. She worked in IT.
D. She preferred to sit at the corner table.
3. What is the man's concern?
A. They need to finish their homework.
B. They have limited time to meet their friends.
C. They need to get back to their college.
D. They need to buy something for a party.
4. What does the woman imply about the man?
A. He has a strong will.
B. He is quite friendly.
C. He is a bit rushed.
D. He is considerate.

Answers:
1. C  2. A  3. C  4. C

Section IV: Listening Comprehension - Passages (multiple choice, 20 points)

Item One

Title: The Story of the Great Wall of China

Introduction: The Great Wall of China is one of the most remarkable architectural achievements in human history. Stretching over 21,196 kilometers, it was built to protect the Chinese empire from invasions. Its construction began over 2,200 years ago and was completed over a period of several centuries.

Text: In the 7th century B.C., warlords built the initial wall to safeguard their kingdoms. However, it was Emperor Qin Shi Huang who initiated the expansion of the wall into the grand structure it is today. Over two million workers, including soldiers, convicts, and local people, contributed to its construction. The wall is made up of bricks, tamped earth, and wood, depending on the region. It is equipped with watchtowers, camps, and signal stations to allow for communication and quick military response. Despite its defensive purpose, the Great Wall has also been a symbol of strength and unity for China. Over the centuries, it has faced numerous challenges, including natural erosion, human vandalism, and relentless weathering by wind and rain.

Questions:
1. What is the primary purpose of the Great Wall of China?
A. It served as a toll road.
B. It was constructed for military protection.
C. It was built as a monument to the emperor.
D. It served as a trade corridor.
2. Who initiated the expansion of the wall into the grand structure it is today?
A. The warlords of the 7th century B.C.
B. Emperor Qin Shi Huang
C. Local people
D. Soldiers
3. According to the passage, what materials primarily composed the Great Wall?
A. Stone
B. Brick, tamped earth, and wood
C. Iron and steel
D. Wood and leather

Answers:
1. B. It was constructed for military protection.
2. B. Emperor Qin Shi Huang
3. B. Brick, tamped earth, and wood

Item Two

Passage One
You probably know that the Great Wall of China is the most famous ancient architectural wonder in the world. It is also one of the longest man-made structures ever built. The Great Wall was built over a period of more than 2,000 years. It was originally constructed to protect the Chinese Empire from invasions by various nomadic groups from the north. The construction of the Great Wall began in the 7th century BC, during the Warring States period. It was mainly built of earth and stone.
Over time, different dynasties added their own sections to the wall, which resulted in the various styles and designs we see today.

1. What was the primary purpose of building the Great Wall?
A) To serve as a tourist attraction.
B) To protect the Chinese Empire from invasions.
C) To expand the territory of the Chinese Empire.
D) To store food and water for the soldiers.
2. When did the construction of the Great Wall begin?
A) During the Han Dynasty.
B) During the Warring States period.
C) During the Tang Dynasty.
D) During the Qing Dynasty.
3. What materials were mainly used in the construction of the Great Wall?
A) Iron and wood.
B) Marble and glass.
C) Earth and stone.
D) Concrete and steel.

Answers:
1. B) To protect the Chinese Empire from invasions.
2. B) During the Warring States period.
3. C) Earth and stone.

Item Three

Listening Comprehension - Passage

Passage: During the early years of World War II, a British civilian named John Smith found himself stationed at a British base in the Middle East. He was assigned to a group tasked with providing support to soldiers. One day, he heard about an opportunity to provide intelligence support to Allied forces by secretly gathering and delivering intelligence to allied bases. Initially, John was skeptical about the proposal, but when he learned that the information could significantly impact the war effort, he decided to take a risk. John was given a cipher machine and instructed to deliver intelligence to a nearby allied camp located in a remote area. The camp was known to be under constant surveillance, making the mission dangerous.
Despite the risks, John felt a strong sense of duty and embarked on his mission.

1. Why did John Smith initially hesitate to take the opportunity to provide intelligence support?
A) He was unsure about the safety of the mission.
B) He thought the information was not useful.
C) He was concerned about the complexity of the cipher machine.
D) He was skeptical about the proposal.
Answer: D
2. What was the primary motivation for John Smith to accept the mission?
A) He wanted to prove his bravery.
B) He thought it would bring him fame.
C) He was afraid of being assigned to a menial task.
D) He felt a strong sense of duty.
Answer: D
3. What made John Smith's mission to the allied camp particularly dangerous?
A) The remote location of the camp.
B) The constant surveillance of the camp.
C) The high level of security at the British base.
D) The complexity of the cipher machine.
Answer: B

Section V: Reading Comprehension - Vocabulary (fill in the blanks, 5 points)

Item One

Reading Passage
The modern office environment is a product of the Industrial Revolution. With the advent of machines, employees were no longer required to perform manual labor. They were now expected to multitask, communicating with colleagues, managing emails, and using a variety of technologies. This shift in the nature of work required employees to develop new skills and adapt to a more dynamic work environment. As a result, companies began to emphasize the importance of education and training for their employees.
Today, the office environment is characterized by the presence of diverse technology, increasing workloads, and the need for continuous professional development.

Vocabulary Understanding
1. The shift in the nature of work required employees to _________ to a more dynamic work environment.
a. adhere to  b. adapt
2. The Industrial Revolution led to the _________ of machines, which changed the way employees worked.
a. manifestation  b. development
3. Employees in the modern office environment are expected to _________ multiple tasks, such as communicating and using technology.
a. reflect  b. multitask
4. One of the reasons companies began to emphasize education and training is because they wanted their employees to _________ the new skills required in the modern work environment.
a. acquire  b. maintain
5. The office environment today is characterized by the presence of _________ technology, diverse workloads, and the need for professional development.
a. varied  b. advanced

Item Two

The following passage is followed by some questions or unfinished statements, each with four suggested answers. Choose the one that fits best according to the passage.

The Internet has become an integral part of our daily lives. From shopping and banking to communication and entertainment, we rely on it for a variety of purposes. However, along with its benefits, the Internet also brings along some challenges that we need to be aware of.

1. ( ) The word "integral" in the first sentence can be best replaced by:
a) indispensable
b) occasional
c) occasional
d) occasional
2. ( ) The phrase "a variety of purposes" in the second sentence can be replaced by:
a) many different uses
b) limited uses
c) common uses
d) single use
3. ( ) The word "challenges" in the second paragraph can be defined as:
a) opportunities
b) problems
c) benefits
d) solutions
4. ( ) The sentence "However, along with its benefits, the Internet also brings along some challenges" suggests that:
a) the Internet has no negative aspects
b) the Internet is purely beneficial
c) the Internet has both positive and negative aspects
d) the Internet is a source of frustration
5. ( ) The word "aware" in the last sentence can be best replaced by:
a) knowledgeable
b) indifferent
c) unaware
d) uninterested

Answers:
1. a) indispensable
2. a) many different uses
3. b) problems
4. c) the Internet has both positive and negative aspects
5. a) knowledgeable

Section VI: Reading Comprehension - Long Passages (multiple choice, 10 points)

Item One

Reading Passage
Machine Learning: Tackling the Big Data Dilemma

With the rapid growth of data generation due to the increasing use of smartphones, the Internet of Things (IoT), and social media, industries face a major challenge in managing and analyzing this Big Data. Traditional data processing methods are no longer sufficient to handle the sheer volume of data being generated. Machine learning (ML) provides a solution by allowing computers to learn from data without explicit programming, enabling them to make predictions, recognize patterns, and improve their performance over time.

One of the most prevalent applications of ML is in recommendation systems, used by social media platforms and e-commerce websites to suggest content or products to users. This system analyzes user behavior and preferences, then recommends items that might be of interest. Another application is in healthcare, where ML can be used to predict patient outcomes and identify potential health issues before they become serious.

However, ML also has its challenges. One of the major issues is the need for large amounts of high-quality data, which can be time-consuming and expensive to gather. Additionally, ML models are often opaque, making it difficult for users to understand how their data is being used and what insights are being extracted from it.
Ethical concerns also arise, such as the potential for biased predictions based on flawed or biased training data.

1. Which of the following is the main idea of the passage?
A) The role and challenges of machine learning in data analysis
B) The importance of data quality in machine learning
C) The ethical concerns surrounding machine learning
D) The applications of machine learning in various industries
Answer: A
2. What is one of the major challenges of using machine learning in data analysis?
A) The need for high-quality data
B) Lack of transparency in the decision-making process
C) Ethical concerns
D) The cost of data storage
Answer: A
3. Which application of machine learning is mentioned in the passage?
A) Recommendation systems
B) Image recognition
C) Fraud detection
D) Speech recognition
Answer: A
4. What is a potential problem with machine learning models as described in the passage?
A) They require large amounts of data
B) They are difficult to develop
C) They are too transparent
D) They are ineffective in large datasets
Answer: A
5. What does the passage suggest as a key challenge for using machine learning in healthcare?
A) The need to predict patient outcomes
B) The potential for biased predictions
C) The difficulty in gathering patient data
D) The complexity of healthcare data
Answer: B

Item Two

Many factors contribute to the high rate of childhood obesity in the United States. One significant factor is the environment in which children live and grow. This passage discusses various aspects of the environment that contribute to childhood obesity and proposes some solutions.

Structured neighborhoods without sidewalks, playgrounds, or safe routes to school discourage physical activity and increase the likelihood of obesity. Children spend more time sitting in front of screens, playing video games or watching television, rather than engaging in active play. Access to fast food restaurants is abundant, making it easy for families to choose high-calorie, low-nutrition meals.
Finally, parental involvement in children's activities has decreased, leading to a lack of guidance and supervision in healthy lifestyles.

Solutions to address childhood obesity involve a multi-faceted approach. For example, communities could redesign their neighborhoods to include more parks and playgrounds, sidewalks, and safe walking routes to schools. School districts could promote physical education and after-school sports programs to encourage children to be active. Additionally, parents can be involved in creating healthy eating environments by planning family meals, setting a healthy menu, and limiting screen time.

Reading the passage, answer the questions below:
1. What is one of the factors contributing to childhood obesity according to the passage?
A. Lack of physical activity
B. Excessive screen time
C. Parental involvement
D. High-calorie fast food
2. How does the environment in which children live contribute to obesity?
A. It encourages physical activity and leads to healthier lifestyles.
B. It discourages physical activity and increases the likelihood of obesity.
C. It provides access to healthy food and exercise facilities.
D. It promotes healthy eating and physical exercise through community programs.
3. What is one solution proposed to address childhood obesity?
A. Designing neighborhoods with more parks and playgrounds.
B. Reducing the number of fast food restaurants.
C. Increasing parental involvement in children's activities.
D. Strengthening physical education programs in schools.
4. What is the author's view on the role of parents in their children's healthy lifestyles?
A. Parents have no influence on their children's lifestyle choices.
B. Parents should strictly regulate their children's screen time.
C. Parents play a crucial role in creating and maintaining a healthy home environment.
D. Parents should prioritize physical education over other extracurricular activities.
5. Which of the following statements is NOT mentioned in the passage as a factor contributing to childhood obesity?
A. Lack of physical activity
B. Increased screen time
C. Healthy school meal programs
D. Reduced parental involvement

Answer Key:
1. A  2. B  3. A  4. C  5. C

Section VII: Reading Comprehension - Careful Reading (multiple choice, 20 points)

Item One

Reading Passage 1
Questions 1 to 5 are based on the following passage.

In the United States, the four-year college degree is the most common form of higher education. However, in recent years, there has been a growing interest in alternative forms of higher education. One of these alternatives is the two-year community college, which provides a less expensive and more flexible option for students.

Community colleges offer a variety of courses, from basic academic subjects to vocational training. Many students choose to attend community colleges because they are less expensive than four-year institutions. Additionally, community colleges often have more flexible schedules, which allow students to work or take care of family responsibilities while pursuing their education.

Despite the benefits of community colleges, there are some challenges associated with them. One of the main challenges is the lack of resources compared to four-year colleges. For example, community colleges may have fewer faculty members, smaller libraries, and less advanced technology. This can make it difficult for students to receive the level of education they desire.

Another challenge is the perception that community colleges are less prestigious than four-year colleges. This perception can make it difficult for students to transfer to four-year institutions after completing their two-year programs. However, many community colleges have agreements with four-year colleges that allow students to transfer easily and continue their education.

The following questions are based on the above passage.
1. What is the main topic of the passage?
A. The importance of a four-year college degree
B. The growing interest in alternative forms of higher education
C. The challenges faced by students attending community colleges
D. The benefits of attending a community college
2. Why do many students choose to attend community colleges?
A. They offer advanced technology
B. They provide a less expensive and more flexible option
C. They have prestigious faculty members
D. They have larger libraries
3. Which of the following is a challenge associated with community colleges?
A. They have more faculty members than four-year colleges
B. They offer vocational training
C. They have fewer resources than four-year colleges
D. They have more flexible schedules
4. What is one way community colleges are trying to overcome the perception of being less prestigious?
A. They are increasing their tuition fees
B. They are improving their technology
C. They are entering into agreements with four-year colleges
D. They are offering more academic courses
5. What can be inferred about the future of community colleges from the passage?
A. They will become more expensive and less flexible
B. They will become less common and more prestigious
C. They will continue to grow in popularity and importance
D. They will merge with four-year colleges

Answers:
1. B  2. B  3. C  4. C  5. C

Item Two

Passage
The world of technology is rapidly evolving, and artificial intelligence (AI) is at the forefront of this change. AI has a wide range of applications in different fields, including healthcare, finance, manufacturing, and transportation. One of the most significant areas of AI development is natural language processing (NLP), which allows machines to understand and process human language in a more sophisticated and nuanced way. This has led to the creation of virtual assistants, chatbots, and language translators that can assist businesses and individuals in diverse ways.
However, with the rapid development of AI, concerns about ethics and privacy have also risen.

1. Which of the following fields is NOT mentioned as an application area of AI in the passage?
A. Healthcare
B. Finance
C. Manufacturing
D. Education
Answer: D
2. What does NLP allow machines to do?
A. Understand and process human language in a sophisticated and nuanced way.
B. Create visual images.
C. Perform physical tasks.
D. Drive autonomous vehicles.
Answer: A
3. What kind of assistance can virtual assistants and chatbots provide?
A. Technical support for computer problems.
B. Assistance in diverse ways for businesses and individuals.
C. Financial management.
D. Medical diagnosis.
Answer: B
U.S. FDA Guideline for Submitting Documentation for the Stability of Drugs and Biologics (1987)

I. Introduction
This guideline provides:
- Recommendations for the design of stability studies to establish appropriate expiration dating periods and storage requirements (see Part III).
- Recommendations on the stability information and data to be submitted to the Center for Drugs and Biologics (CDB) with investigational new drug applications (INDs) for drugs and biologics (Part IV), new drug applications (NDAs) (Part V), and product license applications (PLAs) (Part VI).

This guideline is issued under 21 CFR 10.90. Applicants may submit stability documentation for human drugs and biologics in accordance with it, or may follow other approaches. When an alternative approach is chosen, applicants are encouraged to discuss it with FDA in advance, to avoid spending money and effort on preparing a submission that is later found unacceptable.
The intent is to provide methods that satisfy the following regulatory requirements:
Investigational new drug applications: 21 CFR 312.23(a)(7)
New drug applications: 21 CFR 314.50
Abbreviated new drug applications: 21 CFR 314.55
Product license applications: 21 CFR 601.2
Supplements: 21 CFR 314.70

The approach to establishing an expiration dating period described in this guideline draws on data from at least three different batches of the drug product, so that the proposed period is statistically reliable. In any case, manufacturers should recognize that continued evaluation of stability characteristics is important to further confirm the reliability of the estimated expiration dating period, and such ongoing confirmation should be a central consideration in the manufacturer's stability program.
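As an illustrative sketch of how a dating period might be derived from batch stability data, the snippet below fits a linear degradation model to hypothetical assay results for one batch and solves for the time at which the fitted line reaches a 90% lower specification limit. All values are invented; a real analysis per current FDA/ICH practice would use one-sided confidence bounds and pool data across the three (or more) batches.

```python
import numpy as np

# Hypothetical assay results (% of label claim) for one batch, 0-12 months.
months = np.array([0, 3, 6, 9, 12], dtype=float)
assay = np.array([100.1, 99.4, 98.7, 98.1, 97.3])

# Least-squares linear degradation model: assay ~ intercept + slope * t
slope, intercept = np.polyfit(months, assay, 1)

# Tentative dating estimate: time at which the fitted line reaches the
# 90% lower specification limit. (A rigorous analysis would intersect the
# 95% one-sided confidence bound with the limit, not the fitted line.)
spec_limit = 90.0
t_spec = (spec_limit - intercept) / slope
print(f"slope = {slope:.3f} %/month, shelf-life estimate = {t_spec:.1f} months")
```

With these invented numbers the fitted slope is about -0.23% per month, giving an estimate of roughly 44 months before potency would reach the limit.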
II. Definitions
1. Accelerated testing: Studies designed to increase the rate of chemical or physical degradation of a drug substance or drug product by using exaggerated storage conditions, with the purpose of determining kinetic parameters so that a tentative expiration dating period may be predicted. Accelerated testing is also referred to as "stress testing."
2. Approved stability study protocol: A detailed written plan that conforms to an approved new drug application and is used to generate and analyze acceptable stability data covering the full expiration dating period; comparable data so generated may also be used to extend the expiration dating period.
3. Batch: As defined in 21 CFR 210.3(b)(2), "batch" means a specific quantity of a drug or other material that is intended to have uniform character and quality, within specified limits, and is produced within the same manufacturing cycle according to a single manufacturing order.
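The accelerated-testing definition above turns on estimating kinetic parameters under stress conditions and extrapolating to label storage conditions. A minimal sketch, with invented rate constants, of the standard Arrhenius extrapolation such a prediction could use:

```python
import numpy as np

# Hypothetical first-order degradation rate constants (1/month) measured
# under accelerated (stress) conditions at three elevated temperatures.
temps_C = np.array([40.0, 50.0, 60.0])
k_obs = np.array([0.010, 0.025, 0.060])

# Arrhenius equation: ln k = ln A - Ea/(R*T), i.e. linear in 1/T.
R = 8.314  # gas constant, J/(mol*K)
inv_T = 1.0 / (temps_C + 273.15)
slope, ln_A = np.polyfit(inv_T, np.log(k_obs), 1)
Ea = -slope * R  # apparent activation energy, J/mol

# Extrapolate the rate constant to the labeled storage temperature (25 C).
k_25 = np.exp(ln_A + slope / (25.0 + 273.15))
print(f"Ea = {Ea / 1000:.0f} kJ/mol, predicted k(25C) = {k_25:.4f} /month")
```

For these illustrative rates the fit gives an activation energy near 78 kJ/mol and a 25°C rate constant well below the slowest stressed rate, which is the quantity one would feed into a tentative shelf-life prediction.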
Regulatory Basis: FDA Quality Systems Regulations
Reference: FDA CFR - Code of Federal Regulations Title 21

General Discussion:
This document sets out guidelines for the determination and validation of in-process and bulk product holding times. Maximum allowable hold times should be established for bulk and in-process drug products (where applicable). Typically one lot can be used for validating hold times. Data to justify the hold time can be collected during development on pilot scale batches, during process validation, via a historical review of batch data, or as part of a deviation with proper testing.

Although there are no specific regulations or guidance documents on bulk product holding times, good manufacturing practice dictates that holding times should be validated to ensure that in-process and bulk product can be held, pending the next processing step, without any adverse effect on the quality of the material. This practice is supported by indirect references to determining holding times in various FDA guidance documents and FDA regulations, as follows:

__ "if a firm plans to hold bulk drug products in storage…..stability data should be provided to demonstrate that extended storage in the described containers does not adversely affect the dosage form".

__ "stability data also may be necessary when the finished dosage form is stored in interim containers prior to filling into the marketed package. If the dosage form is stored in bulk containers for over 30 days, real-time stability data under specified conditions should be generated to demonstrate comparable stability to the dosage form in the marketed package. Interim storage of the dosage form in bulk containers should generally not exceed six months".

__ "when appropriate, time limits for the completion of each phase of production shall be established to assure the quality of the drug product." This regulation could be interpreted to include the time for holding bulk product as part of the production process.
"holding times (includes storage times) studies may be conducted during development or carried out in conjunction with process validation lots and shall be representative of full scale holding conditions".

For purposes of clarification, refer to Appendix A for definitions relating to bulk holding time. Holding time data may be generated in the following situations:
• Bulk holding studies may be conducted on product developmental pilot scale batches to demonstrate comparable stability to the dosage form in the marketed package.
• Holding data may be generated as part of a process validation study. Data can be collected on the bulk product itself after holding or collected after the held product has been packaged.

Typically, if these in-process products are used within 24 hours of manufacturing, no bulk holding time studies are deemed necessary. An in-process product that is held for longer than 24 hours should be monitored for physical characteristics and microbial contamination. A solution/suspension should be held for the defined hold period. At the test points, a sample should be taken from the storage container and tested. Results obtained should be compared with the initial baseline data of the solution/suspension control sample results. Typical tests include the following: Microbial count; Yeast/Mould count; Specific Gravity; and Viscosity.

3) Holding time considerations for Tablet Cores, Extended-Release Beads or Pellets.
Typically, in-process products such as cores, extended-release beads or pellets may be held for up to 30 days from their date of production without being retested prior to use. An in-process product that is held for longer than 30 days should be monitored for stability under controlled, long-term storage conditions for the length of the holding period. A representative portion of the core/bead/pellet should be held for the defined hold period. At the test points, a sample should be taken from the storage container and tested.
Results obtained should be compared with the initial baseline data of the core/bead/pellet control sample results. Typical tests include the following: Hardness; Friability; Appearance; Dissolution/Disintegration; Assay; Degradation Products (where applicable); and Moisture Content.

4) Holding time considerations for Bulk Tablets and Capsules.
Typically, bulk tablets and capsules may be held for up to 30 days from their date of production without being retested prior to use. A bulk product that is held for longer than 30 days should be monitored for stability under controlled, long-term storage conditions for the length of the holding period. Interim storage of the dosage form in bulk containers should generally not exceed six months. At the test points, a sample should be taken from the storage container and tested. Results obtained should be compared with the initial baseline data of the tablet/capsule control sample results. Typical tests include the following: Hardness; Friability; Appearance; Dissolution (in the case of controlled and extended release products, the establishment of a dissolution profile is recommended); Disintegration; Assay; Degradation Products (where applicable); Moisture Content; and microbial count (where applicable).

5) Holding time considerations for Oral Liquids and Semi-Solids (Suspensions, Creams, and Ointments).
Typically, liquid and semi-solid dosage form products should be held for no more than 5 days without a hold time study. Full scale batches should be used for these studies. Samples should be taken from the holding vessel after transfer from the manufacturing vessel, and again at the completion of the holding period. Multiple samples should be taken at each time point if holding can impact product uniformity. Samples would be taken to prove product uniformity of actives and preservatives.
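The repeated pattern above, testing a held sample at defined time points and comparing against initial baseline control results, can be mechanized as a simple check. The sketch below is illustrative only: the attribute names, tolerances, and values are assumptions for demonstration, not limits taken from any regulation or guidance.

```python
# Hypothetical hold-time check: compare test-point results for a held bulk
# suspension against its initial (baseline) control-sample results.
baseline = {"viscosity_cP": 1200.0, "specific_gravity": 1.05, "microbial_cfu_g": 10}
day_30 = {"viscosity_cP": 1150.0, "specific_gravity": 1.06, "microbial_cfu_g": 40}

# Assumed acceptance criteria: maximum relative change vs. baseline for
# physical attributes, plus an absolute microbial limit.
max_rel_change = {"viscosity_cP": 0.10, "specific_gravity": 0.02}
microbial_limit = 100  # cfu/g, illustrative

def hold_time_ok(baseline, result):
    # Fail if any physical attribute drifts beyond its relative tolerance.
    for attr, tol in max_rel_change.items():
        if abs(result[attr] - baseline[attr]) / baseline[attr] > tol:
            return False
    # Fail if the microbial count exceeds the absolute limit.
    return result["microbial_cfu_g"] <= microbial_limit

print(hold_time_ok(baseline, day_30))  # True for these illustrative values
```

In practice each dosage-form class would carry its own attribute list (hardness and friability for tablets, viscosity and specific gravity for suspensions), mirroring the "typical tests" enumerated in the sections above.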
Adverse Reaction Signal Mining for Erenumab Based on the U.S. FDA Adverse Event Database

Erenumab (given in the source as "Infliximab") is an important drug used to treat autoimmune diseases, including rheumatoid arthritis, Crohn's disease, and ulcerative colitis. However, like other drugs, it also carries certain potential risks and adverse reactions. To detect and evaluate its adverse reactions in a timely manner, the U.S. Food and Drug Administration (FDA) maintains a public adverse event database; mining signals from this database can help us better understand the drug's safety profile and take appropriate measures.
The adverse event database contains information on patient-reported adverse events associated with specific drugs, collected from around the world. These reports include details such as the patient's age, sex, dose, and the time of onset of the adverse event. By analyzing and mining these data, adverse reaction signals for a drug can be detected, compared, and evaluated.

Before signal mining, the database must be preprocessed, for example by removing duplicate reports and cleaning and standardizing the data. Various data mining techniques and statistical methods can then be applied to explore patterns and association rules in the data. For example, association rule mining algorithms can be used to discover associations between erenumab and other drugs or diseases, and cluster analysis can be used to uncover categories and patterns among patients across different adverse events.
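One widely used statistical method for this kind of screening (assumed here as an example; the text does not name a specific technique) is disproportionality analysis, such as the reporting odds ratio (ROR) computed from a 2x2 contingency table of reports. The counts below are invented for illustration.

```python
import math

# 2x2 contingency table of spontaneous reports (invented counts):
a = 60      # target drug AND target adverse event
b = 940     # target drug, other events
c = 2000    # other drugs, target event
d = 97000   # other drugs, other events

# Reporting odds ratio with an approximate 95% confidence interval.
ror = (a / b) / (c / d)
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(ror) - 1.96 * se_log)
hi = math.exp(math.log(ror) + 1.96 * se_log)

# A common screening rule: flag a signal when the lower CI bound exceeds 1
# and there are at least 3 reports of the drug-event pair.
is_signal = lo > 1.0 and a >= 3
print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f}), signal: {is_signal}")
```

For these invented counts the ROR is about 3.1 with a lower bound above 1, so the pair would be flagged for further clinical review; a flagged disproportionality is a hypothesis-generating signal, not proof of causation.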
Mining the adverse event database for erenumab adverse reaction signals can yield several important results. First, certain adverse events may be found to correlate with use of the drug, identifying potential safety issues; for example, liver function impairment or infections associated with its use might be discovered. Second, the frequency and severity of adverse events at different doses and treatment durations can be assessed, helping to determine the optimal dosing regimen. In addition, certain subgroups, defined for example by age, sex, or genotype, may prove to be more sensitive to the drug's adverse reactions.

However, the adverse event database has limitations and challenges. First, the accuracy and completeness of the reports may vary; there may be omissions or false reports. Second, the data come from different regions, and differences in patient demographics and medication habits may limit the generalizability of the results. Furthermore, the data may contain unknown confounding factors, such as the use of other drugs or patients' underlying diseases, which can lead to spurious adverse event signals.
PROACTIVE RELIABLE BULK DATA DISSEMINATION IN SENSORNETWORKS1Limin Wang Sandeep S.KulkarniSoftware Engineering and Network Systems LaboratoryDepartment of Computer Science and EngineeringMichigan State UniversityEast Lansing MI48824USAAbstractOne of the problems in network reprogramming is to guar-antee100percent delivery of a large amount of data(the en-tire binary image)in a lossy wireless channel.All the ex-isting protocols on network reprogramming[1–6]use auto-matic repeat request(ARQ)scheme to recover from packet losses.We propose a new reliability scheme which is a hy-brid approach of forward error correction(FEC)and ARQ. We perform a case study on MNP,a multihop network repro-gramming protocol,and study the effect of adding two differ-ent FEC codes:simple XOR code and Reed-Solomon(RS) codes[7],to MNP.We evaluate the new reliability approach using TOSSIM.The simulation results show that adding sim-ple XOR code to MNP can achieve up to10%improvement on reprogramming speed and up to18%reduction on active radio time(the major part of energy consumption).And we show that RS codes perform even better.Keywords:Sensor networks,Forward error correction,Data dissemination1.IntroductionSensor networks have been proposed for a wide variety of applications.A sensor network needs to operate unattended for long periods of time.This requirement introduces sev-eral difficulties.First,the environment evolves over time. 
Predicting the whole set of actions that a sensor node might need to perform is impossible in most applications.Second, requirements are also likely to change.For example,with growing understanding of the environment or with new tech-nological advances,some assumptions are found to be in-correct,and,hence,the specification needs to be modified accordingly.Thus,reprogramming sensor nodes,i.e.,chang-ing the software running on sensor nodes after deployment, is necessary for sensor networks.1Email:{wanglim1,sandeep}@.Web:/˜{wanglim1,sandeep}.Tel:+1-517-355-2387,Fax:+1-517-432-1061.This work was partially sponsored by NSF CAREER CCR-0092724, DARPA Grant OSURS01-C-1901,ONR Grant N00014-01-1-0744,NSF equipment grant EIA-0130724,and a grant from Michigan State work reprogramming requires100percent delivery of the entire binary image(on the order of kilobytes),and hence consumes significant communication bandwidth.However, the transmissions are performed over the radio,which is known as a low-bandwidth and lossy medium.Therefore the reliability issues need to be addressed.There are two basic methods to recover lost packets.One way is to use automatic repeat request(ARQ).In ARQ schemes,a receiver detects its own losses,and informs the sender of the missing packets,either by sending requests(NACK)or ac-knowledgements(pure ACK,implicit ACK,selective ACK, etc.).The sender retransmits the repair packet if it knows that a packet is lost.Another way to recover errors is to use forward error correction(FEC).FEC provides reliability by transmitting redundant packets in a proactive manner.The most commonly used FEC scheme is(n,k)FEC.The fun-damental of(n,k)FEC is to add n−k additional packets to a group of k source packets(called a transmission group)so that the receipt of any k packets at the receiver permits recov-ery of the original k ones.There are different levels of FEC schemes:packet-level,byte-level,bit-level.In the context of reprogramming,we only examine the packet-level 
FEC. The existing protocols on network reprogramming include the single-hop reprogramming protocol XNP[1],and multi-hop reprogramming protocols MOAP[2],Deluge[3],MNP [4],Infuse[5],and Sprinkler[6].All these protocols use au-tomatic repeat request(ARQ)scheme to recover from packet losses.ARQ is an effective reliability scheme,as an error can always be recovered as long as the network is connected. However,if the error rate is high,the requests and retrans-missions for the missing packets consume significant energy. In this paper,we examine the issue of adding FEC to the ARQ-based reliability scheme of a reprogramming protocol. We perform a case study on our previous work,MNP[4], a multihop network reprogramming service for sensor net-works.The proposed reliability scheme is a hybrid approach of FEC and ARQ.By adding FEC,we expect to reduce the er-ror probability experienced at the receivers,and ARQ scheme performs the remaining error corrections.Organization of the paper.In Section2,we give a brief overview of the MNP protocol,with emphasis on the reli-ability issues.In Section3,we propose a hybrid reliabil-ity scheme using FEC and ARQ,and describe the imple-mentation details of adding simple XOR code and Reed-Solomon(RS)codes[7]to MNP.In Section4,we present the simulation results on the performance of(MNP+XOR) and(MNP+RS codes).We review related work in Section5 and conclude in Section6.2.A Brief Overview of MNPIn[4],we presented MNP,a multihop network reprogram-ming protocol,which provides a reliable and energy effi-cient service to propagate new program images to all the sensor nodes in the network over the radio.MNP applies an advertise-request-data handshake interface.To achieve pipelining,a program is divided into segments,each of which contains afixed number of packets.A source node adver-tises the availability of the segments of a new program.Once a node receives an advertisement,if it needs the segment, it sends a request.To reduce the message collision 
problem, MNP uses a sender selection algorithm, in which source nodes compete with each other based on the number of distinct requests they receive. The node that has received the largest number of requests is selected as the sender, and all of its neighbors either receive the segment from the sender, or go to "sleep" state to save energy. Since the focus of this paper is the reliability issues, we refer the readers to [4] for the details of the sender selection algorithm and the pipelining scheme.

In MNP, each packet has a unique ID, from 1 to the size of the segment, SegSize. Each receiver is responsible for detecting its own losses. Since the size of the segment is small and pre-determined, a node maintains a bitmap (which we call MissingVector) of the current segment it is receiving in memory. Each bit in MissingVector corresponds to a packet. All the bits are initially set to 1. When a node receives a packet for the first time, it stores that packet in EEPROM and sets the corresponding bit in MissingVector to 0. A node that is advertising maintains a ForwardVector, which is a bitmap of the advertised segment, and is an indicator of the packets the node needs to send if it becomes a sender.
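The receiver-side bookkeeping above can be sketched as follows. MissingVector and SegSize are names from the paper; the Python class and the integer-as-bitmap representation are our assumptions, not the TinyOS implementation.

```python
# Sketch of MNP's per-segment loss bookkeeping (names from the paper;
# the storage layout is our assumption, not the actual mote code).

SEG_SIZE = 128  # packets per segment (the paper's maximum)

class Receiver:
    def __init__(self, seg_size=SEG_SIZE):
        # One bit per packet; 1 = still missing. 128 bits fit in 16 bytes.
        self.missing = (1 << seg_size) - 1
        self.seg_size = seg_size

    def on_packet(self, pkt_id):
        """Record receipt of packet pkt_id (IDs run from 1 to SegSize)."""
        bit = 1 << (pkt_id - 1)
        if self.missing & bit:      # first time this packet is seen
            self.missing &= ~bit    # clear its bit (store to EEPROM here)

    def missing_ids(self):
        return [i + 1 for i in range(self.seg_size) if self.missing >> i & 1]

r = Receiver()
for pkt in (1, 2, 5, 2):            # the duplicate of packet 2 is ignored
    r.on_packet(pkt)
assert r.missing_ids()[:3] == [3, 4, 6]
```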
When a node sends a request message, it puts the loss information (its MissingVector) in the request message. When the advertising node receives the request, it marks its ForwardVector according to the loss information. Therefore, the ForwardVector of an advertising node is the union of the MissingVectors in the request messages that the node has received. A node only sends the packets indicated in the ForwardVector. We restrict the length of the segment to be no longer than 128 packets, so that the maximal size of MissingVector is only 16 bytes, and thus fits into a radio packet.

We implemented MNP on the TinyOS Mica-2 [8] and XSM [9] mote platforms. In the remaining part of this paper, we use TOSSIM [10], a discrete event simulator for TinyOS wireless networks, to investigate the effect of adding FEC to the current ARQ scheme. Before adding FEC, we ran a simulation to observe the packet loss pattern, which is shown in Figure 1. The simulation was conducted in a 10x10 network with 10 feet inter-node distance. The program size is 8.4KB (3 segments, 384 packets). The x-axis is the number of missing packets indicated by the MissingVector contained in a request message. The y-axis is the number of request messages. Since the segment size is 128 packets, the number of missing packets ranges from 1 to 128. The peak at the right is the number of requests that ask for the whole segment (128 packets). These are protocol requests, used in the sender selection algorithm. We only consider repair requests, i.e., the requests that ask for fewer than 128 packets. We note that most of the repair requests ask for only a few packets. For example, about 50% of the requests ask for fewer than 8 missing packets, 70% for fewer than 16, and 83% for fewer than 32. The fact that the majority of the losses are small (involving only a few missing packets) suggests that a better link is desirable to reduce the number of requests and retransmissions. FEC
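The union rule above can be sketched directly on bitmaps. The bitmap layout is our assumption; in MNP these are 16-byte bitmaps carried in radio packets.

```python
# Sketch of how an advertising node accumulates its ForwardVector as the
# union (bitwise OR) of the MissingVectors carried in incoming requests.

def union_requests(missing_vectors):
    """OR together the MissingVectors from all received requests."""
    forward = 0
    for mv in missing_vectors:
        forward |= mv
    return forward

# Receiver A misses packets 1 and 3; receiver B misses packets 3 and 7.
mv_a = (1 << 0) | (1 << 2)
mv_b = (1 << 2) | (1 << 6)
fv = union_requests([mv_a, mv_b])
# The sender transmits exactly the union: packets 1, 3, and 7.
assert [i + 1 for i in range(8) if fv >> i & 1] == [1, 3, 7]
```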
can be used to provide an abstraction of an enhanced link at the cost of transmitting additional parity packets.

Figure 1. Packet loss pattern in MNP. 10x10 network, inter-node distance: 10 feet, program size: 8.4KB (384 packets).

3. A Hybrid Reliability Scheme for MNP

In Section 3.1, we briefly introduce the two (n,k) FEC coding schemes: the simple XOR code and Reed-Solomon (RS) codes. In Section 3.2, we present the new reliability scheme for MNP, that is, adding XOR/RS FEC codes to MNP.

3.1 (n,k) FEC Coding Schemes

We consider (n,k) FEC-based approaches. There are two commonly used (n,k) FEC codes: the XOR code and Reed-Solomon (RS) codes [7]. For the simple XOR code, each transmission group has only one parity packet, which is the XOR of all the source packets in the group. Therefore, the simple XOR code is a (k+1,k) code. The XOR code is very simple to implement. However, it can only repair a single packet loss in a transmission group.

RS codes are more flexible. RS codes are based on algebraic methods using finite fields. A transmission group can have multiple parity packets (i.e., n can be any number larger than k). Thus RS codes provide better protection against losses. However, the flexibility of RS codes is achieved at high processing cost, in terms of computation complexity and memory space.

3.2 Adding FEC to MNP

There are two approaches regarding the calculation of the parity packets. The first approach is to calculate the parity packets based on the actual packets that are sent in the current transmission. In this case, for every transmission, the parity packets are computed and sent. This incurs a lot of encoding and decoding overhead. Moreover, the receivers have to be aware of the ForwardVector of the current sender (cf. Section 2), that is, the packets that are sent in the current transmission, in order to decode the parity packets. The sender might need to send the ForwardVector several times in order to make sure that it is received by all the receivers. The second approach is to compute the parity packets based on the full segment. Compared to the first approach, this approach is more efficient because the encoding process is needed only once for each segment, rather than being performed for each transmission, and the senders do not need to send the ForwardVector to the receivers. Therefore, we use the second approach.

For each segment, there is a set of parity packets. The parity packets are assigned unique IDs that start from SegSize+1. A receiver keeps a bitmap of the parity packets, ParityMissingVector, in memory. The ParityMissingVector is operated on just as the MissingVector in the original ARQ-based approach (cf. Section 2). All the bits in ParityMissingVector are set to 1 initially. When a node receives a parity packet for the first time, it sets the corresponding bit to 0. We limit the size of the ParityMissingVector to be no larger than 4 bytes; thus the number of parity packets ranges from 1 to 32.

When a receiver sends a request, it puts the MissingVector, as well as the ParityMissingVector, in the request message. Correspondingly, a source node maintains a ParityForwardVector (in addition to the ForwardVector in the original algorithm), as an indicator of the parity packets that need to be sent if the node becomes a sender. The ParityForwardVector of a source node is the union of the ParityMissingVectors in the request messages the node has received. Whenever a receiver receives a packet (either data or parity), it checks to see if the number of packets it has received is enough to recover all the missing data packets. If so, the receiver has received the entire segment; otherwise, it asks for the missing data and parity packets, and a sender sends the requested packets.
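The "enough packets to recover" check can be sketched for the RS variant, where any SegSize distinct packets (data or parity) over the whole segment suffice to decode. The function name and counting-based bookkeeping are our assumptions, not the MNP source.

```python
# Sketch of the receiver's completion check for the RS variant: with an
# (n,k) code over the whole segment, the segment is decodable once the
# count of received data packets plus received parity packets reaches
# SegSize. (Bookkeeping by counts is our simplification.)

SEG_SIZE = 128

def segment_complete(n_missing_data, n_missing_parity, n_parity=32):
    """True once enough data+parity packets have arrived to decode."""
    received_data = SEG_SIZE - n_missing_data
    received_parity = n_parity - n_missing_parity
    return received_data + received_parity >= SEG_SIZE

# 11 data packets missing but only 8 parity packets in hand: not enough yet.
assert segment_complete(n_missing_data=11, n_missing_parity=24) is False
# Three more packets of any kind close the gap.
assert segment_complete(n_missing_data=11, n_missing_parity=21) is True
```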
The difference between adding the XOR code and Reed-Solomon codes lies in the capability of loss recovery and the complexity of encoding and decoding, as we mentioned in Section 3.1. Because the XOR code can only have one parity packet and repair a single loss in a transmission group, in order to repair more than one loss in a segment, we divide a segment into t transmission groups. Each group has a fixed number (SegSize/t) of data packets and one parity packet. The parity packet that has ID SegSize+i (1 ≤ i ≤ t) corresponds to the i-th transmission group. In each transmission, the sender sends all the requested data packets, followed by the requested parity packets. Whenever a node has received enough packets to recover a whole group, it sets all the bits of this group in MissingVector and ParityMissingVector to 0, so that the node will not request packets within this group any more.

For Reed-Solomon codes, we consider a full segment as a transmission group, which can have multiple parity packets. As long as the receiver has received SegSize (or more) packets (either data packets or parity packets), the whole segment is received. Further optimization is possible.
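The per-group XOR repair described above can be sketched as follows. Packet contents are toy byte strings here; in MNP the packets live in EEPROM on the mote, and the helper names are ours.

```python
# Sketch of per-group XOR parity: the parity packet is the XOR of all data
# packets in the transmission group, and a single lost packet is rebuilt by
# XOR-ing the survivors with the parity packet.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group):
    """Parity packet = XOR of all data packets in the transmission group."""
    return reduce(xor_bytes, group)

def recover_single_loss(group_with_hole, parity):
    """Rebuild the one missing packet (marked None) from the survivors."""
    survivors = [p for p in group_with_hole if p is not None]
    assert len(survivors) == len(group_with_hole) - 1, "XOR repairs one loss"
    return reduce(xor_bytes, survivors + [parity])

group = [b"\x01\x02", b"\x03\x04", b"\x05\x06", b"\x07\x08"]  # SegSize/t = 4
parity = make_parity(group)
lost = group[2]
recovered = recover_single_loss([group[0], group[1], None, group[3]], parity)
assert recovered == lost
```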
For example, consider a scenario where a receiver has 11 missing data packets and has received 8 parity packets. In this case, the parity packets received are not enough to recover the losses. In order to recover the whole segment, the receiver only needs 3 more packets, rather than 11. Taking message losses into account, we allow the receivers to request twice the required number of missing packets (recall that k packets suffice in an (n,k) FEC scheme). In the previous example, the receiver will request 2×3=6 packets. This feature is expected to reduce the number of retransmissions, especially when the number of parity packets is large. However, because in MNP the packets a sender transmits are the combination (union) of the packets that are requested, it is likely that different receivers request different sets of packets, and thus the combination of them virtually covers the whole segment. In this case, this optimization is not effective. We add one restriction: if a node requests a subset of the packets it is missing, it always requests those packets that have the lowest IDs, so that it is more likely that the sets of packets requested by different receivers overlap.

4. Evaluation Results

We added the simple XOR code and RS codes to the MNP source code, as described in Section 3.2, and used TOSSIM [10] to evaluate the effectiveness of the new reliability scheme. In TOSSIM, the network is modelled as a directed graph.
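The request-size rule above can be sketched as a small helper: ask for twice the shortfall, and always pick the lowest-ID missing packets so that different receivers' requests overlap. The function name is ours, not from the MNP source.

```python
# Sketch of the bounded-request rule: request min(2 * shortfall, everything
# missing), choosing the lowest packet IDs first to maximize overlap between
# the request sets of different receivers.

def build_request(missing_ids, shortfall):
    """Request up to 2*shortfall packets, lowest IDs first."""
    want = min(2 * shortfall, len(missing_ids))
    return sorted(missing_ids)[:want]

# 11 data packets missing, 8 parity packets in hand: the shortfall is 3,
# so the receiver requests 2*3 = 6 packets, the 6 lowest-ID ones.
missing = [5, 9, 12, 17, 23, 31, 44, 58, 71, 90, 101]
assert build_request(missing, shortfall=3) == [5, 9, 12, 17, 23, 31]
```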
Each vertex in the graph is a sensor node. Each edge has a bit-error rate, representing the probability with which a bit can be corrupted if it is sent along this link. Asymmetric links exist in this model since the bit-error rate for each edge is decided independently. We decide the bit-error rate based on our experience with Mica-2 motes. Specifically, the packet loss rate on a one-hop (10 feet) link is around 5%. The loss rate increases with distance, and beyond 50 feet, the loss rate is 100%.

We consider two measurements: the completion time and the energy consumption. A previous study [11] shows that the time a Mica-2 mote keeps its radio on (i.e., idle listening time) contributes the major part of energy consumption. The other parts of energy consumption include message transmissions and receptions.

The following simulations were conducted in a 10x10 network. The distance between two neighboring nodes is kept constant at 10 feet. In the current implementation, each segment has 128 data packets. The program size is 8.4KB (3 segments, 384 packets). We assume that initially only the base station, the node at a corner, has the new program.

In Figure 2, we compare the performance of the original MNP protocol (marked as "No FEC" in Figure 2) with the MNP protocol plus the XOR/RS FEC codes. To reduce the randomness of a single simulation, we repeated each experiment three times and present the mean value. We found that adding either the simple XOR code or RS codes to MNP helps improve performance. The completion time and the active radio time were reduced after we applied the FEC schemes to MNP. For the XOR code, when the number of parity packets is from 4 to 8, the reduction in completion time is about 10%, and the reduction in active radio time is about 14-17%. Increasing or reducing the number of parity packets diminishes the performance gain. RS codes generally perform better than the simple XOR code. As shown in Figure 2, using RS codes, the completion time is reduced by 9-33%, and the active radio time is reduced by 10-38%. For RS codes, the
performance improves when more parity packets are used.

In Figure 3, we show the number of transmissions and receptions of the original MNP protocol and MNP plus the XOR/RS FEC schemes. We note that, with the XOR/RS FEC schemes, the numbers of transmissions and receptions are reduced by up to 19% and 41%, respectively. In general, the (MNP+XOR) scheme has a lower number of transmissions and receptions than the original MNP protocol, and (MNP+RS codes) has the lowest number of transmissions and receptions among the three schemes. The number of receptions is largely decided by the average active radio time, as can be seen from Figure 2(b) and Figure 3(b).

We categorize the transmitted packets into two types: control packets and encoding packets. Control packets include the advertisements and requests. They are used for the sender selection algorithm and the ARQ scheme. Encoding packets carry the information that is to be disseminated to the sensor nodes. They include data packets and parity packets.

Figure 2. Completion time and active radio time of (MNP+XOR) and (MNP+RS codes), when the number of parity packets is from 1 to 32 packets per segment (128 data packets/segment). (a) Completion time. (b) Average active radio time per node.

Figure 3. Average number of transmissions and receptions per node: (MNP+XOR) and (MNP+RS codes), when the number of parity packets is from 1 to 32 packets per segment (128 data packets/segment). (a) Average number of transmissions per node. (b) Average number of receptions per node.

In Figure 4(a) and (b), we show the average number of control packets that are transmitted per node, for the (MNP+XOR) scheme and the (MNP+RS codes) scheme, compared to the original MNP protocol. In Figure 4(c) and (d), we show the corresponding results for encoding packets. We note that using XOR/RS codes effectively reduces the number of control packets (by up to 44%), especially the number of requests. For example, when we use RS codes with 32 parity packets per segment, the average number of requests per node is only 10 (Figure 4(b)), less than half of the requests
transmitted per node when the original MNP is used. As to encoding packets, we note that although the FEC schemes transmit additional parity packets, the number of data packets plus parity packets is still lower, in general, than the number of data packets transmitted by the original MNP algorithm, with a few exceptions (when the number of parity packets per segment is 2 or 32 in the (MNP+XOR) scheme).

From Figure 4, we can see why RS codes perform better than the simple XOR code. For the XOR code, when the number of parity packets per segment increases from 4 to 32, the additional parity packets do not contribute much to recovering packet losses, but incur higher transmission overhead due to more redundant packets. As we mentioned in Section 3, the limitation of the XOR scheme is that only one loss can be recovered in each transmission group. Although we can use multiple parity packets for a segment by dividing a segment into several transmission groups, only one loss from each group can be recovered. In other words, the XOR code with multiple parity packets works best when the message losses are evenly distributed across the segment. However, in network reprogramming, a large part of the message losses are bursty in nature, caused by message collisions or channel errors. Therefore, dividing a segment into many tiny groups and transmitting one parity packet for each group using the XOR code is not desirable. By contrast, RS codes can recover message losses at arbitrary locations. For RS codes, increasing the number of parity packets in a group improves its ability to recover message losses. As shown in Figure 4(c) and (d), for the XOR code, when more than 4 parity packets are used for each segment, the number of encoding packets increases with the number of parity packets; for RS codes, the number of encoding packets remains the same although more parity packets are transmitted.

5. Related Work

Automatic repeat request (ARQ) and forward error correction (FEC) are the two basic ways to provide reliability for
transmission protocols. All the existing work on network reprogramming [1-6] uses ARQ-based approaches for error recovery. In this paper, we propose adding FEC to the ARQ-based reliability scheme, and perform a case study on MNP [4], which is a multihop network reprogramming protocol.

There are different types of FEC codes. We have introduced the simple XOR code and Reed-Solomon (RS) codes in Section 3.1. Both the XOR code and RS codes belong to the class of block (n,k) FEC codes. A block code has the property that any k out of the n encoding packets can reconstruct the original k source packets.

Tornado codes [12] provide an alternative to RS codes. Tornado codes have lower computation complexity than RS codes, at the small cost of reception overhead; that is, an (n,k) Tornado code requires slightly more than k out of n encoding packets to recover the k source packets.

Unlike the block (n,k) codes, Luby Transform (LT) codes [13] can generate as many unique encoding packets as required, using the k source packets as input. Each encoding packet is generated randomly and independently of all other encoding packets. LT codes have the property that the receiver is able to reassemble the original k source packets as long as it receives a sufficient number (slightly more than k) of encoding packets. LT codes are designed for delivering a large amount of data over high-bandwidth internet links.
They have lower computation complexity for encoding and decoding than RS codes. However, they introduce higher recovery overhead because more redundant packets are transmitted. Normally, the number of repair packets is more than 10 times the number of source packets.

Figure 4. The average number of control/encoding packets transmitted per node. (a) Control packets transmitted in the (MNP+XOR) scheme. (b) Control packets transmitted in the (MNP+RS codes) scheme. (c) Encoding packets transmitted in the (MNP+XOR) scheme. (d) Encoding packets transmitted in the (MNP+RS codes) scheme. *0 means original MNP (No FEC/parity).

6. Conclusions

Automatic repeat request (ARQ) is a commonly used technique for reliable bulk data dissemination applications in sensor networks. A typical example of this type of application is network reprogramming. All the existing network reprogramming protocols use ARQ as the error recovery scheme. In this paper, we proposed a hybrid reliability scheme, which combines forward error correction (FEC) with ARQ schemes. The FEC provides an abstraction of a better transmission medium, and the ARQ scheme takes care of the remaining error corrections. We used MNP, a multihop network reprogramming protocol, as a case study, and presented the implementation details of adding FEC schemes to MNP. Specifically, we considered two (n,k) FEC schemes: the simple XOR code and Reed-Solomon (RS) codes. We added the two FEC schemes to MNP, and simulated them in TOSSIM.

The simulation results show that both the simple XOR code and Reed-Solomon codes effectively reduce the number of request messages that ask for missing packets, thus enabling faster reprogramming and lower energy consumption. Adding the XOR code to MNP can reduce the reprogramming time by 1-10%, and reduce the active radio time (which contributes the major part of energy consumption) by 3-18%, while adding Reed-Solomon codes to MNP can reduce the reprogramming time by 9-33%, and reduce the active radio time by 10-38%. The
message transmissions and receptions are reduced as well when the FEC schemes are used. We found that the simple XOR code has a limited capability of correcting errors (reducing completion time by up to 10% and reducing active radio time by up to 18%); that is, it cannot deal with bursty packet losses, which are not rare when disseminating a large amount of data in large networks. Therefore, increasing the number of parity packets does not help to improve the performance; it only incurs more redundant transmissions. On the other hand, Reed-Solomon codes are more flexible. They can deal with any loss pattern, and they become more powerful at correcting errors when more parity packets are used. The XOR code is very simple to implement, while Reed-Solomon codes require more computing resources due to the complexity of their calculation. This suggests a tradeoff between computation and performance: if more computation resources are available, a more powerful coding scheme, e.g., Reed-Solomon codes, should be used to achieve better performance; otherwise, a simple XOR FEC scheme can be used for limited improvement.

References

[1] Crossbow Technology, Inc. Mote In-Network Programming User Reference Version 20030315, 2003. /tos/tinyos-1.x/doc/Xnp.pdf.

[2] T. Stathopoulos, J. Heidemann, and D. Estrin. A remote code update mechanism for wireless sensor networks. Technical report, UCLA, 2003.

[3] J. W. Hui and D. Culler. The dynamic behavior of a data dissemination protocol for network programming at scale. In Proceedings of the Second International Conference on Embedded Networked Sensor Systems (SenSys 2004), Baltimore, Maryland, 2004.

[4] S. S. Kulkarni and L. Wang. MNP: Multihop network reprogramming service for sensor networks. In Proceedings of the 25th International Conference on Distributed Computing Systems (ICDCS), pages 7-16, June 2005.

[5] S. S. Kulkarni and M. Arumugam. Infuse: A TDMA based data dissemination protocol for sensor networks. Technical Report MSU-CSE-04-46, Department of Computer Science, Michigan State University, November
2004.

[6] V. Naik, A. Arora, P. Sinha, and H. Zhang. Sprinkler: A reliable and energy efficient data dissemination service for wireless embedded devices. To appear in Proceedings of the 26th IEEE Real-Time Systems Symposium, December 2005.

[7] I. S. Reed and G. Solomon. Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics, 8(2):300-304, 1960.

[8] J. Hill and D. Culler. Mica: A wireless platform for deeply embedded networks. IEEE Micro, 22(6):12-24, 2002.

[9] P. Dutta, M. Grimmer, A. Arora, S. Bibyk, and D. Culler. Design of a wireless sensor network platform for detecting rare, random, and ephemeral events. In Proceedings of the International Conference on Information Processing in Sensor Networks (IPSN), Special Track on Platform Tools and Design Methods for Network Embedded Sensors (SPOTS), April 2005.

[10] P. Levis, N. Lee, M. Welsh, and D. Culler. TOSSIM: Accurate and scalable simulation of entire TinyOS applications. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys 2003), Los Angeles, CA, November 2003.

[11] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson. Wireless sensor networks for habitat monitoring. In Proceedings of the ACM International Workshop on Wireless Sensor Networks and Applications (WSNA'02), Atlanta, GA, September 2002.

[12] M. Luby, M. Mitzenmacher, A. Shokrollahi, and D. Spielman. Efficient erasure correcting codes. IEEE Transactions on Information Theory, Special Issue: Codes on Graphs and Iterative Algorithms, 47(2):569-584, February 2001.

[13] M. Luby. LT codes. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pages 271-282, 2002.