More info on "Time Parallelizing"

The parallel replica dynamics method is a very general and powerful approach, requiring only that the system makes occasional transitions from state to state, with the time between transitions being exponentially distributed. (True infrequent-event systems naturally give this exponential behavior.)

The way it works is that we put all the processors to work on one state -- that is, we replicate the entire system on each processor. Then we run a trajectory forward in time independently on each processor. At first, we are simply "dephasing" the replicas -- making them independent of one another by running for a short time with a different random-number seed on each processor. Then we start accumulating official time on each processor and watch for a transition to occur. It can be shown mathematically that when the first transition occurs, on any processor, we can simply add together the times all the processors have accumulated up to that point, and this sum is the correct time for the transition. We then start the procedure over, putting all the processors to work on the new state that the one processor transitioned into.

The result is that we have parallelized time. The method is efficient to the extent that the time between transitions, divided by the number of processors, is large compared to the dephasing time (which is pure overhead). For a metal system, the typical dephasing time is about one picosecond, while the time between transitions can be nanoseconds, microseconds, or longer, so we often get very good efficiency.
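The time-summation property above can be checked with a small Monte Carlo sketch (not the actual molecular dynamics code; the rate value and processor count here are hypothetical). Each replica's escape time is an independent exponential draw; the first transition across N processors happens at the minimum of those draws, and the total accumulated time is N times that minimum. Statistically, N times the minimum of N exponentials with rate k is again exponential with rate k, so the parallel scheme reproduces the serial escape-time statistics:

```python
import random

def parrep_escape_time(n_procs, rate, rng):
    """Simulate one ParRep cycle: independent replicas in the same state."""
    # Each replica independently draws an exponential waiting time
    # until its first transition (dephasing overhead is ignored here).
    times = [rng.expovariate(rate) for _ in range(n_procs)]
    # The first transition seen on any processor stops the cycle...
    t_first = min(times)
    # ...and every one of the n_procs replicas accumulated t_first of
    # official time, so the summed time is n_procs * t_first.
    return n_procs * t_first

rng = random.Random(0)
rate = 2.0          # hypothetical transition rate (events per unit time)
n_procs = 8         # hypothetical processor count
samples = [parrep_escape_time(n_procs, rate, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
# The mean summed escape time should match the serial expectation 1/rate.
print(f"ParRep mean escape time: {mean:.3f}  (serial 1/k = {1/rate:.3f})")
```

The agreement between the two printed values illustrates why adding the per-processor times gives the correct transition time: the minimum of N exponentials is itself exponential with rate N*k, and multiplying by N restores the original rate k.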
more info: 1998 paper (pdf) | Review paper (pdf)

