
A parallel adaptive tabu search approach

E.G. Talbi *, Z. Hafidi, J-M. Geib

LIFL URA-369 CNRS, Université de Lille 1, Bâtiment M3, 59655 Villeneuve d'Ascq Cedex, France

Received 15 April 1997

Abstract

This paper presents a new approach for parallel tabu search based on adaptive parallelism.

Adaptive parallelism was used to dynamically adjust the parallelism degree of the application with respect to the system load. Adaptive parallelism demonstrates that high-performance computing using a hundred heterogeneous workstations combined with massively parallel machines is feasible to solve large optimization problems. The parallel tabu search algorithm includes different tabu list sizes and new intensification/diversification mechanisms. Encouraging results have been obtained in solving the quadratic assignment problem. We have improved the best known solutions for some large real-world problems. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Tabu search; Adaptive parallelism; Quadratic assignment problem

1. Motivation and goals

Many interesting combinatorial optimization problems are NP-hard, and thus cannot be solved exactly within a reasonable amount of time. Consequently, heuristics must be used to solve real-world problems. Tabu search (TS) is a general-purpose heuristic (meta-heuristic) proposed by Glover [1]. TS has achieved widespread success in solving practical optimization problems in different domains (such as resource management, process design, logistics and telecommunications). Promising results of applying TS to a variety of academic optimization problems (traveling salesman, quadratic assignment, time-tabling, job-shop scheduling, etc.) are reported in the literature [2]. Solving large problems motivates the development of a parallel implementation of TS.


* Corresponding author. E-mail: talbi@lifl.fr



The proliferation of powerful workstations and fast communication networks (ATM, Myrinet, etc.) with constantly decreasing cost/performance ratios has led to the emergence of heterogeneous workstation networks and homogeneous clusters of processors (such as DEC Alpha farms and the IBM SP/2) [3,4]. These parallel platforms are generally composed of a large number of machines shared by many users. In addition, a workstation belongs to an owner who will not tolerate external applications degrading the performance of his machine. Load analysis of these platforms over long periods of time showed that only a small percentage of the available power was used [5,6]. There is a substantial amount of idle time. Therefore, dynamic adaptive scheduling of parallel applications is essential.

Many parallel TS algorithms have been proposed in the literature. In general, they do not use advanced programming tools (such as load balancing, dynamic reconfiguration and checkpointing) to use the machines efficiently. Most of them are developed for dedicated parallel homogeneous machines.

Our aim is to develop a parallel adaptive TS strategy which can benefit greatly from a platform combining the computing resources of massively parallel machines (MPPs) and networks of workstations (NOWs). For this purpose, we use a dynamic scheduling system, MARS (Multi-user Adaptive Resource Scheduler), which harnesses idle time (keeping in mind the ownership of workstations), and supports adaptive parallelism to dynamically reconfigure the set of processors hosting the parallel TS.

The testbed optimization problem we used is the quadratic assignment problem (QAP), one of the hardest among the NP-hard combinatorial optimization problems. The parallel TS algorithm includes different tabu list sizes and intensification/diversification mechanisms based on a frequency-based long-term memory and a restricted neighborhood.

The remainder of the paper is organized as follows. In Section 2, we describe existing parallel TS algorithms. The parallel adaptive TS we propose is detailed in Section 3. Finally, Sections 4 and 5 present, respectively, the application of the proposed algorithm to the QAP and the results of experiments for several standard instances from the QAP library.

2. Classification of parallel TS algorithms

In this section we present, respectively, the main components of a sequential TS algorithm and a classification of parallel TS algorithms. A new taxonomy dimension is introduced.

2.1. Sequential tabu search

A combinatorial optimization problem is defined by the specification of a pair (X, f), where the search space X is a discrete set of all (feasible) solutions, and the objective function f is a mapping f: X → R. A neighborhood N is a mapping N: X → P(X), which specifies for each s ∈ X a subset N(s) of X of neighbors of s. The most famous local search optimization method is the descent method. A descent method starts from an initial solution and then continually explores the neighborhood of the current solution for a better solution. If such a solution is found, it replaces the current solution. The algorithm terminates as soon as the current solution has no neighboring solution of better quality. Such a method generally stops at a local, but not global, minimum.

Unlike a descent method, TS uses an adaptive memory M to control the search process. For example, a solution s' in N(s) may be classified tabu, when selecting a potential neighbor of s, due to memory considerations. N(M, s) contains all neighborhood candidates that the memory M will allow the algorithm to consider. TS may be viewed as a variable neighborhood method: each iteration redefines the neighborhood, based on the conditions that classify certain moves as tabu.

At each iteration, TS selects the best neighbor solution in N(M, s), even if this results in a solution worse than the current one. A form of short-term memory embodied in M is the tabu list, which forbids the selection of certain moves in order to prevent cycling.

To use TS for solving an optimization problem, we must define the following inputs:

· An initial solution s0.

· The definition of the memory M.

· The stopping condition: there may be several possible stopping conditions [7]. A maximum number nbmax of iterations between two improvements of f is used as the stopping condition.

The output of the algorithm is the best solution found during the search process. A straightforward description of a basic sequential TS algorithm is given in Fig. 1 [8].

A tabu move applied to a current solution may appear attractive because it gives, for example, a solution better than the best found so far. We would like to accept the move in spite of its tabu status, by defining aspiration conditions. Other advanced techniques may be implemented in a long-term memory, such as intensification to encourage the exploitation of a promising region of the search space, and diversification to encourage the exploration of new regions [2].
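The basic algorithm of Fig. 1 is not reproduced in this extraction. As a rough restatement of the loop described above, the following Python sketch assumes the ingredients just listed (an initial solution s0, a memory realized as a tabu predicate over moves, the aspiration rule just mentioned, and the nbmax stopping condition); all function names are placeholders for the problem-specific definitions, not notation taken from the paper.

```python
def tabu_search(s0, evaluate, neighbors, is_tabu, record_move, nbmax):
    """Minimal sequential TS sketch (one possible reading of Fig. 1).

    evaluate    -- objective function f to be minimized
    neighbors   -- yields (move, solution) pairs for the current solution
    is_tabu     -- short-term-memory predicate on moves
    record_move -- stores the selected move in the memory
    nbmax       -- max iterations allowed without improving the best solution
    """
    current = best = s0
    best_cost = evaluate(best)
    stagnation = 0
    while stagnation < nbmax:
        # N(M, s): non-tabu neighbors, plus tabu ones satisfying the aspiration
        # condition (they beat the best solution found so far).
        candidates = [(m, s) for m, s in neighbors(current)
                      if not is_tabu(m) or evaluate(s) < best_cost]
        if not candidates:
            break
        # Take the best admissible neighbor, even if it is worse than `current`.
        move, current = min(candidates, key=lambda ms: evaluate(ms[1]))
        record_move(move)
        cost = evaluate(current)
        if cost < best_cost:
            best, best_cost = current, cost
            stagnation = 0
        else:
            stagnation += 1
    return best, best_cost
```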

2.2. Parallel tabu search

Many classifications of parallel TS algorithms have been proposed [9,10]. They are based on several criteria: the number of initial solutions, identical or different parameter settings, and the control and communication strategies. We have identified two main categories (Fig. 2).

Domain decomposition: Parallelism in this class of algorithms relies exclusively on: (i) The decomposition of the search space: the main problem is decomposed into a number of smaller subproblems, each subproblem being solved by a different TS algorithm [11].

(ii) The decomposition of the neighborhood: the search for the best neighbor at each iteration is performed in parallel, and each task evaluates a different subset of the partitioned neighborhood [12,13].

A high degree of synchronisation is required to implement this class of algorithms.

Multiple tabu search tasks: This class of algorithms consists in executing multiple TS algorithms in parallel. The different TS tasks start with the same or different parameter values (initial solution, tabu list size, maximum number of iterations, etc.). Tabu tasks may be independent (without communication) [14,15] or cooperative. A cooperative algorithm has been proposed in [10], where each task performs a given number of iterations, then broadcasts the best solution. The best of all solutions becomes the initial solution for the next phase.

Fig. 1. A basic sequential tabu search algorithm.

Fig. 2. Hierarchical classification of parallel TS strategies.

Parallelizing the exploration of the search space or of the neighborhood is problem-dependent. This assumption is strong and is met only for a few problems. The second class of algorithms is less restrictive and hence more general. A parallel algorithm that combines the two approaches (a two-level parallel organization) has been proposed in [16].

We can extend this classification by introducing a new taxonomy dimension: the way tasks are scheduled over processors. Parallel TS algorithms fall into three categories depending on whether the number and/or the location of work (tasks, data) depend or not on the load state of the parallel machine (Table 1):

Non-adaptive: This category represents parallel TS in which both the number of tasks of the application and the location of work (tasks or data) are generated at compile time (static scheduling). The allocation of processors to tasks (or data) remains unchanged during the execution of the application, regardless of the current state of the parallel machine. Most of the proposed algorithms belong to this class. An example of such an approach is presented in [17]. The neighborhood is partitioned into equal-size partitions depending on the number of workers, which is equal to the number of processors of the parallel machine. In [13], the number of tasks generated depends on the size of the problem and is equal to n², where n is the problem size.

When there are noticeable load or power differences between processors, the search time of this non-adaptive approach is determined by the maximum execution time over all processors (the most heavily loaded or the least powerful processor). A significant number of tasks are often idle, waiting for other tasks to complete their work.

Semi-adaptive: To improve the performance of parallel non-adaptive TS algorithms, dynamic load balancing must be introduced [17,16]. This class represents applications for which the number of tasks is fixed at compile-time, but the locations of work (tasks, data) are determined and/or changed at run-time (as seen in Table 1). Load balancing requirements are met in [17] by a dynamic redistribution of work between processors. During the search, each time a task finishes its work, it issues a work demand. Dynamic load balancing through partitioning of the neighborhood is done by migrating data.

However, the parallelism degree in this class of algorithms is not related to the load variation in the parallel system: when the number of tasks exceeds the number of idle nodes, multiple tasks are assigned to the same node. Moreover, when there are more idle nodes than tasks, some of them will not be used.

Adaptive: A parallel adaptive program refers to a parallel computation with a dynamically changing set of tasks. Tasks may be created or killed as a function of the load state of the parallel machine. Different types of load state dissemination schemes may be used [18]. A task is created automatically when a processor becomes idle. When a processor becomes busy, the task is killed (before being killed, a task may return its pending work: best known solution, short- and long-term memory). As far as we know, no work has been done on parallel adaptive TS.

Table 1
Another taxonomy dimension for parallel TS algorithms

                 Tasks or data
                 Number     Location
Non-adaptive     Static     Static
Semi-adaptive    Static     Dynamic
Adaptive         Dynamic    Dynamic

3. A parallel adaptive tabu search algorithm

In this paper, a straightforward approach has been used to introduce adaptive parallelism in TS. It consists of parallel independent TS algorithms, which requires no communication between the sequential tasks. The algorithms are initialized with different solutions. Different parameter settings are also used (size of the tabu list).

3.1. Parallel algorithm design

The programming style used is the master/workers paradigm. The master task generates work to be processed by the workers. Each worker task receives work from the master, computes a result and sends it back to the master. The master/workers paradigm works well in adaptive dynamic environments because:
· when a new node becomes available, a worker task can be started there,
· when a node becomes busy, the master task gets back the pending work which was being computed on this node, so that it can be computed on the next available node.

The master implements a central memory through which all communication passes, and which captures the global knowledge acquired during the search. The number of workers created initially by the master is equal to the number of idle nodes in the parallel platform. Each worker implements a sequential TS task. The initial solution is generated randomly and the tabu list is empty.

The parallel adaptive TS algorithm reacts to two events (Fig. 3):

Transition of the load state of a node from idle to busy: If a node hosting a worker becomes loaded, the master folds up the application by withdrawing the worker. The worker concerned puts back all pending work to the master and dies. The pending work is composed of the current solution, the best local solution found, the short-term memory, the long-term memory and the number of iterations done without improving the best solution. The master updates the best global solution if it is worse than the best local solution received.

Transition of the load state of a node from busy to idle: When a node becomes available, the master unfolds the application by starting a new worker on it. Before starting a sequential TS, the worker task gets the values of the different parameters from the master: the best global solution, and an initial solution which may be an intermediate solution found by a folded TS task and which constitutes a ``good'' initial solution. In this case, the worker also receives the state of the short-term memory, the long-term memory and the number of iterations done without improving the best solution.

Fig. 3. Architecture of the parallel adaptive TS.

The local memory of each TS task, which defines the pending work, is composed of (Fig. 3): the best solution found by the task, the number of iterations applied, the intermediate solution, and the adaptive memory of the search (short-term and long-term memories). The central memory in the master is then composed of (Fig. 3): the best global solution found by all TS tasks, and the different intermediate solutions with the associated number of iterations and adaptive memory.
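The paper does not give code for these data structures; the following Python sketch is one possible reading of the fold/unfold protocol just described. All class and method names (PendingWork, CentralMemory, fold, unfold) are hypothetical illustrations, not the MARS or PM2 API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PendingWork:
    # Local memory of one TS task (hypothetical field names).
    current_solution: list
    best_local_solution: list
    best_local_cost: float
    short_term_memory: dict          # recency-based tabu information
    long_term_memory: list           # frequency matrix of Section 4.2.3
    iters_without_improvement: int

@dataclass
class CentralMemory:
    # Global knowledge kept by the master task.
    best_global_solution: Optional[list] = None
    best_global_cost: float = float("inf")
    suspended_work: list = field(default_factory=list)   # work of folded workers

    def fold(self, work: PendingWork) -> None:
        """Node became busy: store the worker's pending work, update the best."""
        if work.best_local_cost < self.best_global_cost:
            self.best_global_solution = work.best_local_solution
            self.best_global_cost = work.best_local_cost
        self.suspended_work.append(work)

    def unfold(self) -> Optional[PendingWork]:
        """Node became idle: hand an intermediate search state to a new worker,
        or None if the worker should start from a random solution."""
        return self.suspended_work.pop() if self.suspended_work else None
```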

3.2. Parallel algorithm implementation

The parallel run-time system to be used has to support dynamic adaptive scheduling of tasks, where the programmer is totally shielded from the complex task of managing the availability of nodes and the dynamics of the target machine. Piranha (under Linda) [19], CARMI/Wodi (under PVM/Condor) [20], and MARS [21] are representative of such scheduling systems. We have used the MARS dynamic scheduling system.

The MARS system is implemented on top of the UNIX operating system. We use an existing communication library which preserves the ordering of messages: PVM (Parallel Virtual Machine). Data representations using XDR are hidden from the programmer. The execution model is based on a preemptive multi-threaded run-time system: PM2 (Parallel Multi-threaded Machine). The basic functionality of PM2 is the Lightweight Remote Procedure Call (LRPC), which consists in forking a remote thread to execute a specified service.

It is very important for the MARS scheduling system to quantify node idleness or node availability. This is highly related both to the load indicators chosen to define it and to owner behavior. Several load indicators are provided: CPU utilization, load average, number of users logged in, user memory, swap space, paging rate, disk transfer rate, /tmp space, NFS performance, etc. Owner activity is detected by monitoring keyboard and mouse idle times. For our experiments, based on many parallel applications, a node is considered idle if the one, five and ten minute load averages are below 2.0, 1.5 and 1.0 respectively, and the keyboard/mouse have been inactive for more than five minutes. Two conflicting goals emerge when setting the thresholds: minimizing the overhead of the evaluation and the fluctuation of the load state, and exploiting a node as soon as it becomes idle.
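As a concrete illustration of the idleness criterion described above (the exact MARS implementation is not shown in the paper), a hedged Python sketch of the decision rule might look as follows; the function and constant names are hypothetical.

```python
IDLE_THRESHOLDS = (2.0, 1.5, 1.0)      # limits for the 1, 5 and 10 minute load averages
MIN_OWNER_IDLE_SECONDS = 5 * 60        # keyboard/mouse must be quiet for 5 minutes

def node_is_idle(load_averages, seconds_since_input):
    """Return True when a node may host a worker, following the thresholds above.

    load_averages       -- tuple of (1 min, 5 min, 10 min) load averages
    seconds_since_input -- time since the last keyboard or mouse event
    """
    load_ok = all(avg < limit for avg, limit in zip(load_averages, IDLE_THRESHOLDS))
    owner_away = seconds_since_input > MIN_OWNER_IDLE_SECONDS
    return load_ok and owner_away

# Example: a lightly loaded node whose owner left 10 minutes ago is usable.
print(node_is_idle((0.3, 0.4, 0.2), 600))   # True
```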

A MARS programmer writes a parallel application by specifying two multi-threaded modules: the master module and the worker module. The master module is mainly composed of the work server thread. The worker module acts essentially as a template for the worker threads. When the parallel application is submitted, the master module is executed on the home node. The number of ``worker threads'' is a function of the available idle nodes. The MARS run-time scheduling system transparently handles the adaptive execution of the application on behalf of the user.

In the application, we have to define two coordination services: get_work and put_back_work. The first coordination service specifies the function to execute when an unfolding operation occurs, and the second the function to execute for the folding operation.

When a processor becomes idle, the MARS node manager communicates the state transition to the MARS scheduler, which in turn communicates the information to the application through the master, using the RPC mechanism. The master then creates a worker task. Once the worker is created, it makes an LRPC to the get_work service to get the work to be done. Then the worker creates a thread which executes a sequential TS algorithm (Fig. 4).
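The MARS/PM2 programming interface is not reproduced here; the sketch below only illustrates, in plain Python, how the two coordination services described above could be wired to the central memory of Section 3.1. The names master_on_idle, master_on_busy, get_work and put_back_work stand for the services and handlers, but the signatures and helper objects (spawn_worker, worker) are hypothetical stand-ins, not the real MARS API.

```python
# Hypothetical wiring of the two coordination services to the central memory
# (see the CentralMemory sketch in Section 3.1); not the actual MARS/PM2 API.

def get_work(central_memory):
    """Unfolding: called (via LRPC) by a freshly created worker on an idle node."""
    state = central_memory.unfold()            # intermediate search state, if any
    return state, central_memory.best_global_solution

def put_back_work(central_memory, pending_work):
    """Folding: called by a worker whose node became busy, just before it dies."""
    central_memory.fold(pending_work)

def master_on_idle(central_memory, spawn_worker):
    """Node went busy -> idle: the master unfolds the application."""
    spawn_worker(on_start=lambda: get_work(central_memory))

def master_on_busy(central_memory, worker):
    """Node went idle -> busy: the master folds up the application."""
    put_back_work(central_memory, worker.collect_pending_work())
    worker.kill()
```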

When a processor becomes busy or owned, the same process is initiated in the MARS system. In this case, the worker makes an LRPC to the put_back_work service to return the pending work and dies (Fig. 5).

Fig. 4. Operations carried out when a processor becomes idle.

4. Application to the QAP

The parallel adaptive TS algorithm has been used to solve the quadratic assignment problem (QAP). The QAP represents an important class of combinatorial optimization problems, with many applications in different domains (facility location, data analysis, task scheduling, image synthesis, etc.).


4.1. The quadratic assignment problem

The first formulation was given by Koopmans and Beckmann in 1957 [22]. The QAP can be defined as follows. Given:
· a set of n objects O = {O_1, O_2, ..., O_n},
· a set of n locations L = {L_1, L_2, ..., L_n},
· a flow matrix C, where each element c_ij denotes a flow cost between the objects O_i and O_j,
· a distance matrix D, where each element d_kl denotes a distance between the locations L_k and L_l,
find an object-location bijective mapping p: O → L which minimizes the objective function f,

f = Σ_{i=1..n} Σ_{j=1..n} c_ij · d_p(i)p(j).

The QAP is NP-hard [23]. Finding an ε-approximate solution is also NP-complete [24]. This fact has restricted exact algorithms (such as branch and bound) to small instances (n < 22) [25]. An extensive survey and recent developments can be found in [26].
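To make the objective function concrete, here is a small Python sketch that evaluates a permutation (location assignment) against flow and distance matrices; the variable names and the tiny data set are illustrative only.

```python
def qap_cost(perm, flow, dist):
    """Objective f = sum_{i,j} c_ij * d_{perm[i], perm[j]}.

    perm[i] is the location assigned to object i; flow and dist are n x n matrices.
    """
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

# Tiny 3-object example (hypothetical data): objects 0, 1, 2 placed at locations 2, 0, 1.
flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
dist = [[0, 5, 4], [5, 0, 6], [4, 6, 0]]
print(qap_cost([2, 0, 1], flow, dist))
```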

4.2. Tabu search for the QAP

To apply the parallel adaptive TS to the QAP, we must define the neighborhood structure of the problem and its evaluation, the short-term memory used to avoid cycling, and the long-term memory used for the intensification/diversification phase.

Fig. 5. Operations carried out when a processor becomes busy or owned.


4.2.1. Neighborhood structure and evaluation

Many encoding schemes may be used to represent a solution of the QAP. We have used a representation based on a permutation of n integers: s = (l_1, l_2, ..., l_n), where l_i denotes the location of the object O_i. We use a pair-exchange move in which two objects of a permutation are swapped. The number of neighbors obtained by this move is n(n − 1)/2.

We use the formulae reported in [27] to efficiently compute the variation in the objective function due to a swap of two objects. The evaluation of the whole neighborhood can be done in O(n²) operations.
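Reference [27] gives the exact update formulae (including the asymmetric case and the reuse of previously computed deltas); the Python sketch below only shows the simpler special case of symmetric flow and distance matrices with zero diagonals, which already yields an O(n) swap evaluation and hence an O(n²) scan of the whole neighborhood.

```python
def swap_delta(perm, flow, dist, r, s):
    """Change in the QAP objective when the locations of objects r and s are swapped.

    Assumes symmetric flow/distance matrices with zero diagonals; the general
    asymmetric formulae of [27] contain additional terms.
    """
    n = len(perm)
    pr, ps = perm[r], perm[s]
    delta = 0
    for k in range(n):
        if k == r or k == s:
            continue
        pk = perm[k]
        delta += (flow[r][k] - flow[s][k]) * (dist[ps][pk] - dist[pr][pk])
    return 2 * delta

# The best swap of one iteration can then be found by scanning the
# n(n-1)/2 object pairs, i.e. an O(n^2) evaluation of the neighborhood.
```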

4.2.2. Short-term memory

The tabu list contains pairs (i, j) of objects that cannot be exchanged (recency-based restriction). The efficiency of the algorithm depends on the choice of the size of the tabu list. Our experiments indicate that choosing a size which varies between n/2 and 3n/2 gives very good results. The number of parallel TS tasks is set to the problem size n, and each TS task is initialized with a different tabu list size, from n/2 to 3n/2 with an increment of 1.

The aspiration function allows a tabu move if it generates a solution better than the best found solution. The total number of iterations depends on the problem size and is limited to 1000n.
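A minimal sketch of this recency-based memory, assuming the common matrix representation in which entry (i, j) stores the iteration until which swapping objects i and j stays forbidden; the paper does not prescribe this particular data structure, so the names below are illustrative.

```python
class PairTabuList:
    """Recency-based short-term memory over object pairs (i, j) -- a sketch."""

    def __init__(self, n, tenure):
        self.tenure = tenure                          # tabu list size, in n/2 .. 3n/2
        self.expires = [[0] * n for _ in range(n)]    # iteration until (i, j) is tabu

    def forbid(self, i, j, iteration):
        self.expires[i][j] = self.expires[j][i] = iteration + self.tenure

    def is_tabu(self, i, j, iteration):
        return iteration < self.expires[i][j]

def admissible(tabu, i, j, iteration, move_cost, best_cost):
    # Aspiration: a tabu swap is still accepted if it improves on the best solution.
    return (not tabu.is_tabu(i, j, iteration)) or move_cost < best_cost

# One tenure per parallel TS task, covering the range n/2 .. 3n/2 by steps of 1.
n = 100
tenures = list(range(n // 2, 3 * n // 2 + 1))
```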

4.2.3. Long-term memory

We use as a long-term memory a frequency-based memory which complements the information provided by the recency-based memory. A matrix F = [f(i, k)] represents the long-term memory. Consider the sequence of all solutions generated. The value f(i, k) represents the number of solutions in this sequence for which s(i) = k. This quantity identifies the number of times the object i is mapped onto the location k. The different values are normalized by dividing them by the average value, which is equal to (1 + n_iterations)/n, given that the sum of the n² elements of the matrix F is equal to n(1 + n_iterations).
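A small sketch of this frequency matrix, assuming the counts are updated once per generated solution (the initial solution plus one solution per iteration); the class and method names are illustrative, but the normalization follows the average value given above.

```python
class FrequencyMemory:
    """Long-term memory F[i][k]: how often object i has occupied location k."""

    def __init__(self, n):
        self.n = n
        self.counts = [[0] * n for _ in range(n)]
        self.num_solutions = 0

    def record(self, perm):
        # Called for every solution generated (initial solution plus one per iteration).
        for obj, loc in enumerate(perm):
            self.counts[obj][loc] += 1
        self.num_solutions += 1

    def normalized(self, obj, loc):
        # Divide by the average cell value, (1 + n_iterations) / n, since the n^2
        # entries of the matrix sum to n * (1 + n_iterations).
        average = self.num_solutions / self.n
        return self.counts[obj][loc] / average if average else 0.0
```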

If no better solution is found within 100n iterations, the intensification phase is started. The intensification phase starts from the best solution found in the current region and an empty tabu list. The use of the frequency-based memory will penalize non-improving moves by assigning a larger penalty to swaps with greater frequency values.

A simulated-annealing-like process is used in the intensification phase. The relevance of our approach compared to pure simulated annealing is that it exploits the memory of TS for selecting moves. For each move m(i, j), an incentive value is introduced in the acceptance probability to encourage the incorporation of good attributes (Fig. 6). This incentive is initialized to Max(f(i, s(i)), f(j, s(j))). Therefore, the probability that a move will be accepted diminishes when the incentive is small.

Fig. 6. Simulated annealing for the intensification phase.
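Fig. 6 (the exact acceptance rule) is not reproduced in this extraction, so the sketch below should be read as an assumption: a Metropolis-style rule in which the frequency-based incentive scales the acceptance probability, so that moves toward frequently used (object, location) pairs are accepted more readily. The function name and the precise scaling are illustrative only, not the paper's formula.

```python
import math
import random

def accept_intensification_move(delta, incentive, temperature):
    """Assumed Metropolis-like acceptance for the intensification phase.

    delta       -- change in the objective (negative = improvement)
    incentive   -- Max(f(i, s(i)), f(j, s(j))) from the normalized frequency memory
    temperature -- simulated-annealing temperature parameter
    """
    if delta < 0:
        return True                      # improving moves are always accepted
    # Larger incentives make acceptance more likely; this exact scaling is an
    # assumption, since the paper's rule is given only in its Fig. 6.
    scaled = temperature * max(incentive, 1e-9)
    return random.random() < math.exp(-delta / scaled)
```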

The diversification phase is started after 100n iterations are performed without any improvement of the restarted TS algorithm. Diversification is performed for 10n iterations. The search is forced to explore new regions by penalizing the solutions often visited. The penalty value associated with a move m(i, j) is Min(f(i, s(i)), f(j, s(j))); a move is tabu-active if this penalty is greater than 1. Therefore, we penalize moves by assigning a penalty to moves with greater frequency. The number of iterations is large enough to drive the search out of the current region. When the diversification phase terminates, the tabu status based on the long-term memory is dropped.

5. Computational results

For our experimental study, we collected results on a platform combining a network of heterogeneous workstations and a massively parallel homogeneous machine (Fig. 7). The network is composed of 126 workstations (PC/Linux, Sparc/SunOS, Alpha/OSF, Sparc/Solaris) owned by researchers and students of our University. The parallel machine is an Alpha-farm composed of 16 processors connected by a crossbar-switched interconnection network. The parallel adaptive TS competes with other applications (sequential and parallel) and with the owners of the workstations.

5.1. Adaptability of the parallel algorithm

The performance measures we use when evaluating the adaptability of the parallel TS algorithm are the execution time, the overhead, the number of nodes allocated to the application, and the number of fold and unfold operations. The overhead is the total amount of CPU time required for scheduling operations. Table 2 summarizes the results obtained for 10 runs.

The average number of nodes allocated to the application does not vary significantly and represents 71% of the total number of processors. However, the high number of fold and unfold operations shows a significant load fluctuation on the different processors. During an average execution time of 2 h 25 mn, 79 fold operations and 179 unfold operations are performed. This corresponds to one new node every 0.8 mn and one node loss every 2 mn. These results demonstrate the significance of the adaptability concept in parallel applications.

The parallel algorithm is efficient in terms of the scheduling overhead due to adaptability. The overhead is low compared to the total execution time (0.09% of the total execution time). We also see that the deviation of the overhead is very low (0.24 s over the 10 runs). The classical speedup measure cannot be applied to our application, which executes on a heterogeneous, multi-user, non-dedicated parallel platform.

Fig. 7. The meta-system used for our experiments.

Table 2
Experiment results obtained for 10 runs of a large problem (Sko100a) on 100 processors (16 processors of the Alpha-farm, 54 Sparc/Solaris, 25 Sparc/SunOS, 5 PC/Linux)

                              Mean     Deviation   Min    Max
Execution time (mn)           145.75   23.75       124    182
Overhead (sec)                8.36     0.24        8.18   8.75
Number of nodes allocated     71       15.73       50     92
Number of fold operations     79       49.75       24     149
Number of unfold operations   179      45.55       120    248


Unfortunately, quantitative analysis of heterogeneous dynamic parallel systems is still in its infancy [28].

5.2. QAP results

To evaluate the performance of the parallel TS in terms of solution quality and search time, we have used standard QAP problems of different types (QAP library):
· random and uniform distances and flows: Tai35a, Tai100a,

· random flows on grids: Nug30, Sko100a-f, Tho150, Wil100,

· real-life or real-life-like problems: Bur26d, Ste36b, Esc128, Tai150b, Tai256c.
The parallel TS algorithm was run 10 times to obtain an average performance estimate. Table 3 shows the best known, best found, worst and average values and the standard deviation of the solutions obtained for the chosen small instances (n < 50). The search cost was estimated by the wall-clock time to find the best solution, and hence accounts for all overheads. Considering that the best solution may not be the last visited solution, the measured time is not necessarily the time of the complete execution of the algorithm.

We always succeeded in finding the best known solutions for small problems. This result shows the efficiency and the robustness of the parallel TS.

Table 4 shows the best results obtained for large problems. For random-grid problems, we found the best known solution (for Sko100c) or solutions very close to the best known solutions. Our search times are smaller than those presented in [29], with better solution quality. The most difficult instance for our algorithm is the random-uniform Tai100a, for which we obtain a gap of 0.32% above the best known solution. For this class of instances, it is difficult to find the best known solution but simple to find ``good'' solutions.

For the third class of instances (real-life or real-life-like), the best known solutions for Tai150b and Tai256c (generation of grey patterns) have been improved. According to the results, we observe that the algorithm is well fitted to a large number of instances, but its performance decreases for large uniform-random instances (Tai100a). Work is still to be done to improve the efficiency of the algorithm by introducing intensification mechanisms based on path relinking over a small subset of elite solutions [2].

Table 3
Results for small problems (n < 50) of different types

Instance   Best known   Best found   Worst     Average   Standard deviation   Average search time (s)
Tai35a     2422002      2422002      2422002   2422002   0                    566
Nug30      6124         6124         6124      6124      0                    337
Bur26d     3821225      3821225      3821225   3821225   0                    623
Ste36b     15852        15852        15852     15852     0                    763

6. Conclusion and future work

The dynamic nature of the load of a parallel machine (cluster of processors and network of workstations) makes adaptive scheduling of the tasks composing a parallel application essential. The main feature of our parallel TS is to adjust, in an adaptive manner, the number of tasks with respect to the available nodes, in order to fully exploit the availability of the machines. The parallel algorithm can benefit greatly from a platform combining the computing resources of MPPs and NOWs. An experimental study has been carried out in solving the QAP. The parallel TS algorithm includes different tabu list sizes and intensification/diversification mechanisms (frequency-based long-term memory, etc.). The performance results obtained for several standard instances from the QAP library are very encouraging in terms of:

Adaptability: The overhead introduced by scheduling operations is very low, and the algorithm reacts very quickly to changes in the machine load. The parallelization is worthwhile because the parallel TS application is coarse-grained.

Efficiency and robustness: The parallel algorithm has always succeeded in finding the best known solutions for small problems (n < 50). The best known solutions of the large real-life problems ``charts of grey densities'' (Taixxxc) and the real-life problem Tai150b have been improved. The parallel algorithm often produces best known or close-to-best-known solutions for large random problems. Other sophisticated intensification and diversification mechanisms to improve the results obtained for large random-uniform problems (Taixxa) are under investigation.

The parallel adaptive TS may be used to solve other optimization problems: set covering, independent set, multiconstraint knapsack. We plan to improve our framework to provide adaptive parallelism to other metaheuristics (genetic algorithms and hybrid algorithms) and for exact algorithms (IDA* and branch and bound).

Table 4
Results for large problems (n ≥ 50) of different types

Instance   Best known   Best found   Gap        Search time (mn)
Tai100a    21125314     21193246     0.32%      117
Sko100a    152002       152036       0.022%     142
Sko100b    153890       153914       0.015%     155
Sko100c    147862       147862       0%         132
Sko100d    149576       149610       0.022%     152
Sko100e    149150       149170       0.013%     124
Sko100f    149036       149046       0.006%     125
Wil100     273038       273074       0.013%     389
Tho150     8134030      8140368      0.078%     287
Esc128     64           64           0%         230
Tai150b    499348972    499342577    -0.0013%   415
Tai256c    44894480     44810866     -0.19%     593

Solving very large optimization problems can take several hours. The aspects of fault tolerance must therefore be taken into account. A checkpointing mechanism which periodically saves the context of the application is under evaluation [30].

References

[1] F. Glover, Tabu search – Part I, ORSA Journal on Computing 1 (3) (1989) 190–206.

[2] F. Glover, M. Laguna, Tabu search, in: C.R. Reeves (Ed.), Modern Heuristic Techniques for Combinatorial Problems, Blackwell Scientific Publications, Oxford, 1992, pp. 70–150.

[3] N. Boden, D. Cohen, R. Felderman, A. Kulawik, C. Seitz, J. Seizovic, W. Su, Myrinet – A gigabit-per-second local-area network, IEEE Micro (1995) 29–36.

[4] P.R. Woodward, Perspectives on supercomputing: Three decades of change, IEEE Computer (1996) 99–111.

[5] D.A. Nichols, Using idle workstations in a shared computing environment, ACM Operating Systems Review 21 (5) (1987) 5–12.

[6] M.M. Theimer, K.A. Lantz, Finding idle machines in a workstation-based distributed system, IEEE Transactions on Software Engineering 15 (11) (1989) 1444–1458.

[7] F. Glover, E. Taillard, D. de Werra, A user's guide to tabu search, Annals of Operations Research 41 (1993) 3–28.

[8] A. Hertz, D. de Werra, The tabu search metaheuristic: How we use it? Annals of Mathematics and Artificial Intelligence (1989) 111–121.

[9] S. Voss, Tabu search: Applications and prospects, Technical report, Technische Hochschule Darmstadt, Germany, 1992.

[10] T.D. Crainic, M. Toulouse, M. Gendreau, Towards a taxonomy of parallel tabu search algorithms, Technical Report CRT-933, Centre de Recherche sur les Transports, Université de Montréal, 1993.

[11] E. Taillard, Parallel iterative search methods for vehicle routing problems, Networks 23 (1993) 661–673.

[12] E. Taillard, Robust taboo search for the quadratic assignment problem, Parallel Computing 17 (1991) 443–455.

[13] J. Chakrapani, J. Skorin-Kapov, Massively parallel tabu search for the quadratic assignment problem, Annals of Operations Research 41 (1993) 327–341.

[14] M. Malek, M. Guruswamy, M. Pandya, H. Owens, Serial and parallel simulated annealing and tabu search algorithms for the traveling salesman problem, Annals of Operations Research 21 (1989) 59–84.

[15] C. Rego, C. Roucairol, A parallel tabu search algorithm using ejection chains for the vehicle routing problem, in: Proc. of the Metaheuristics Int. Conf., Breckenridge, 1995, pp. 253–295.

[16] P. Badeau, M. Gendreau, F. Guertin, J.-Y. Potvin, E. Taillard, A parallel tabu search heuristic for the vehicle routing problem with time windows, RR CRT-95-84, Centre de Recherche sur les Transports, Université de Montréal, 1995.

[17] S.C.S. Porto, C. Ribeiro, Parallel tabu search message-passing synchronous strategies for task scheduling under precedence constraints, Journal of Heuristics 1 (2) (1996) 207–223.

[18] T.L. Casavant, J.G. Kuhl, A taxonomy of scheduling in general-purpose distributed computing systems, IEEE Transactions on Software Engineering 14 (2) (1988) 141–154.

[19] D.L. Kaminsky, Adaptive parallelism in Piranha, Ph.D. thesis, Department of Computer Science, Yale University, RR-1021, 1994.

[20] J. Pruyne, M. Livny, Parallel processing on dynamic resources with CARMI, in: Proc. of the Workshop on Job Scheduling for Parallel Processing, IPPS'95, Lecture Notes in Computer Science, No. 949, Springer, Berlin, 1995, pp. 259–278.

[21] Z. Hafidi, E.G. Talbi, J.-M. Geib, MARS: Adaptive scheduling of parallel applications in a multi-user heterogeneous environment, in: European School of Computer Science ESPPE'96: Parallel Programming Environments for High Performance Computing, Alpe d'Huez, France, 1996, pp. 119–122.

[22] T.C. Koopmans, M.J. Beckmann, Assignment problems and the location of economic activities, Econometrica 25 (1957) 53–76.

[23] M. Garey, D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York, 1979.

[24] S. Sahni, T. Gonzales, P-complete approximation problems, Journal of the ACM 23 (1976) 556–565.

[25] A. Brungger, A. Marzetta, J. Clausen, M. Perregaard, Joining forces in solving large-scale quadratic assignment problems in parallel, in: A. Gottlieb (Ed.), 11th Int. Parallel Processing Symposium, Geneva, Switzerland, Morgan Kaufmann, Los Altos, CA, 1997, pp. 418–427.

[26] P.M. Pardalos, F. Rendl, H. Wolkowicz, The quadratic assignment problem: A survey and recent developments, DIMACS Series in Discrete Mathematics and Theoretical Computer Science 16 (1994) 1–42.

[27] E. Taillard, Comparison of iterative searches for the quadratic assignment problem, Location Science 3 (1995) 87–103.

[28] M.M. Eshagian, Heterogeneous Computing, Artech House, MA, 1996.

[29] C. Fleurent, J.A. Ferland, Genetic hybrids for the quadratic assignment problem, DIMACS Series in Discrete Mathematics and Theoretical Computer Science 16 (1994) 173–188.

[30] D. Kebbal, E.G. Talbi, J.-M. Geib, A new approach for checkpointing parallel applications, in: Int. Conf. on Parallel and Distributed Processing Techniques and Applications PDPTA'97, Las Vegas, USA, 1997, pp. 1643–1651.
