Example 1
int gmx_mdrun(int argc, char *argv[])
{
   const char   *desc[] = {
      "[THISMODULE] is the main computational chemistry engine",
      "within GROMACS. Obviously, it performs Molecular Dynamics simulations,",
      "but it can also perform Stochastic Dynamics, Energy Minimization,",
      "test particle insertion or (re)calculation of energies.",
      "Normal mode analysis is another option. In this case [TT]mdrun[tt]",
      "builds a Hessian matrix from single conformation.",
      "For usual Normal Modes-like calculations, make sure that",
      "the structure provided is properly energy-minimized.",
      "The generated matrix can be diagonalized by [gmx-nmeig].[PAR]",
      "The [TT]mdrun[tt] program reads the run input file ([TT]-s[tt])",
      "and distributes the topology over ranks if needed.",
      "[TT]mdrun[tt] produces at least four output files.",
      "A single log file ([TT]-g[tt]) is written, unless the option",
      "[TT]-seppot[tt] is used, in which case each rank writes a log file.",
      "The trajectory file ([TT]-o[tt]), contains coordinates, velocities and",
      "optionally forces.",
      "The structure file ([TT]-c[tt]) contains the coordinates and",
      "velocities of the last step.",
      "The energy file ([TT]-e[tt]) contains energies, the temperature,",
      "pressure, etc, a lot of these things are also printed in the log file.",
      "Optionally coordinates can be written to a compressed trajectory file",
      "([TT]-x[tt]).[PAR]",
      "The option [TT]-dhdl[tt] is only used when free energy calculation is",
      "turned on.[PAR]",
      "A simulation can be run in parallel using two different parallelization",
      "schemes: MPI parallelization and/or OpenMP thread parallelization.",
      "The MPI parallelization uses multiple processes when [TT]mdrun[tt] is",
      "compiled with a normal MPI library or threads when [TT]mdrun[tt] is",
      "compiled with the GROMACS built-in thread-MPI library. OpenMP threads",
      "are supported when [TT]mdrun[tt] is compiled with OpenMP. Full OpenMP support",
      "is only available with the Verlet cut-off scheme, with the (older)",
      "group scheme only PME-only ranks can use OpenMP parallelization.",
      "In all cases [TT]mdrun[tt] will by default try to use all the available",
      "hardware resources. With a normal MPI library only the options",
      "[TT]-ntomp[tt] (with the Verlet cut-off scheme) and [TT]-ntomp_pme[tt],",
      "for PME-only ranks, can be used to control the number of threads.",
      "With thread-MPI there are additional options [TT]-nt[tt], which sets",
      "the total number of threads, and [TT]-ntmpi[tt], which sets the number",
      "of thread-MPI threads.",
      "The number of OpenMP threads used by [TT]mdrun[tt] can also be set with",
      "the standard environment variable, [TT]OMP_NUM_THREADS[tt].",
      "The [TT]GMX_PME_NUM_THREADS[tt] environment variable can be used to specify",
      "the number of threads used by the PME-only ranks.[PAR]",
      "Note that combined MPI+OpenMP parallelization is in many cases",
      "slower than either on its own. However, at high parallelization, using the",
      "combination is often beneficial as it reduces the number of domains and/or",
      "the number of MPI ranks. (Less and larger domains can improve scaling,",
      "with separate PME ranks, using fewer MPI ranks reduces communication costs.)",
      "OpenMP-only parallelization is typically faster than MPI-only parallelization",
      "on a single CPU(-die). Since we currently don't have proper hardware",
      "topology detection, [TT]mdrun[tt] compiled with thread-MPI will only",
      "automatically use OpenMP-only parallelization when you use up to 4",
      "threads, up to 12 threads with Intel Nehalem/Westmere, or up to 16",
      "threads with Intel Sandy Bridge or newer CPUs. Otherwise MPI-only",
      "parallelization is used (except with GPUs, see below).",
      "[PAR]",
      "To quickly test the performance of the new Verlet cut-off scheme",
      "with old [TT].tpr[tt] files, either on CPUs or CPUs+GPUs, you can use",
      "the [TT]-testverlet[tt] option. This should not be used for production,",
      "since it can slightly modify potentials and it will remove charge groups",
      "making analysis difficult, as the [TT].tpr[tt] file will still contain",
      "charge groups. For production simulations it is highly recommended",
      "to specify [TT]cutoff-scheme = Verlet[tt] in the [TT].mdp[tt] file.",
      "[PAR]",
      "With GPUs (only supported with the Verlet cut-off scheme), the number",
      "of GPUs should match the number of particle-particle ranks, i.e.",
      "excluding PME-only ranks. With thread-MPI, unless set on the command line, the number",
      "of MPI threads will automatically be set to the number of GPUs detected.",
      "To use a subset of the available GPUs, or to manually provide a mapping of",
      "GPUs to PP ranks, you can use the [TT]-gpu_id[tt] option. The argument of [TT]-gpu_id[tt] is",
      "a string of digits (without delimiter) representing device id-s of the GPUs to be used.",
      "For example, \"[TT]02[tt]\" specifies using GPUs 0 and 2 in the first and second PP ranks per compute node",
      "respectively. To select different sets of GPU-s",
      "on different nodes of a compute cluster, use the [TT]GMX_GPU_ID[tt] environment",
      "variable instead. The format for [TT]GMX_GPU_ID[tt] is identical to ",
      "[TT]-gpu_id[tt], with the difference that an environment variable can have",
      "different values on different compute nodes. Multiple MPI ranks on each node",
      "can share GPUs. This is accomplished by specifying the id(s) of the GPU(s)",
      "multiple times, e.g. \"[TT]0011[tt]\" for four ranks sharing two GPUs in this node.",
      "This works within a single simulation, or a multi-simulation, with any form of MPI.",
      "[PAR]",
      "With the Verlet cut-off scheme and verlet-buffer-tolerance set,",
      "the pair-list update interval nstlist can be chosen freely with",
      "the option [TT]-nstlist[tt]. [TT]mdrun[tt] will then adjust",
      "the pair-list cut-off to maintain accuracy, and not adjust nstlist.",
      "Otherwise, by default, [TT]mdrun[tt] will try to increase the",
      "value of nstlist set in the [TT].mdp[tt] file to improve the",
      "performance. For CPU-only runs, nstlist might increase to 20, for",
      "GPU runs up to 40. For medium to high parallelization or with",
      "fast GPUs, a (user-supplied) larger nstlist value can give much",
      "better performance.",
      "[PAR]",
      "When using PME with separate PME ranks or with a GPU, the two major",
      "compute tasks, the non-bonded force calculation and the PME calculation",
      "run on different compute resources. If this load is not balanced,",
      "some of the resources will be idle part of time. With the Verlet",
      "cut-off scheme this load is automatically balanced when the PME load",
      "is too high (but not when it is too low). This is done by scaling",
      "the Coulomb cut-off and PME grid spacing by the same amount. In the first",
      "few hundred steps different settings are tried and the fastest is chosen",
      "for the rest of the simulation. This does not affect the accuracy of",
      "the results, but it does affect the decomposition of the Coulomb energy",
      "into particle and mesh contributions. The auto-tuning can be turned off",
      "with the option [TT]-notunepme[tt].",
      "[PAR]",
      "[TT]mdrun[tt] pins (sets affinity of) threads to specific cores,",
      "when all (logical) cores on a compute node are used by [TT]mdrun[tt],",
      "even when no multi-threading is used,",
      "as this usually results in significantly better performance.",
      "If the queuing systems or the OpenMP library pinned threads, we honor",
      "this and don't pin again, even though the layout may be sub-optimal.",
      "If you want to have [TT]mdrun[tt] override an already set thread affinity",
      "or pin threads when using less cores, use [TT]-pin on[tt].",
      "With SMT (simultaneous multithreading), e.g. Intel Hyper-Threading,",
      "there are multiple logical cores per physical core.",
      "The option [TT]-pinstride[tt] sets the stride in logical cores for",
      "pinning consecutive threads. Without SMT, 1 is usually the best choice.",
      "With Intel Hyper-Threading 2 is best when using half or less of the",
      "logical cores, 1 otherwise. The default value of 0 do exactly that:",
      "it minimizes the threads per logical core, to optimize performance.",
      "If you want to run multiple [TT]mdrun[tt] jobs on the same physical node,"
	 "you should set [TT]-pinstride[tt] to 1 when using all logical cores.",
      "When running multiple [TT]mdrun[tt] (or other) simulations on the same physical",
      "node, some simulations need to start pinning from a non-zero core",
      "to avoid overloading cores; with [TT]-pinoffset[tt] you can specify",
      "the offset in logical cores for pinning.",
      "[PAR]",
      "When [TT]mdrun[tt] is started with more than 1 rank,",
      "parallelization with domain decomposition is used.",
      "[PAR]",
      "With domain decomposition, the spatial decomposition can be set",
      "with option [TT]-dd[tt]. By default [TT]mdrun[tt] selects a good decomposition.",
      "The user only needs to change this when the system is very inhomogeneous.",
      "Dynamic load balancing is set with the option [TT]-dlb[tt],",
      "which can give a significant performance improvement,",
      "especially for inhomogeneous systems. The only disadvantage of",
      "dynamic load balancing is that runs are no longer binary reproducible,",
      "but in most cases this is not important.",
      "By default the dynamic load balancing is automatically turned on",
      "when the measured performance loss due to load imbalance is 5% or more.",
      "At low parallelization these are the only important options",
      "for domain decomposition.",
      "At high parallelization the options in the next two sections",
      "could be important for increasing the performace.",
      "[PAR]",
      "When PME is used with domain decomposition, separate ranks can",
      "be assigned to do only the PME mesh calculation;",
      "this is computationally more efficient starting at about 12 ranks,",
      "or even fewer when OpenMP parallelization is used.",
      "The number of PME ranks is set with option [TT]-npme[tt],",
      "but this cannot be more than half of the ranks.",
      "By default [TT]mdrun[tt] makes a guess for the number of PME",
      "ranks when the number of ranks is larger than 16. With GPUs,",
      "using separate PME ranks is not selected automatically,",
      "since the optimal setup depends very much on the details",
      "of the hardware. In all cases, you might gain performance",
      "by optimizing [TT]-npme[tt]. Performance statistics on this issue",
      "are written at the end of the log file.",
      "For good load balancing at high parallelization, the PME grid x and y",
      "dimensions should be divisible by the number of PME ranks",
      "(the simulation will run correctly also when this is not the case).",
      "[PAR]",
      "This section lists all options that affect the domain decomposition.",
      "[PAR]",
      "Option [TT]-rdd[tt] can be used to set the required maximum distance",
      "for inter charge-group bonded interactions.",
      "Communication for two-body bonded interactions below the non-bonded",
      "cut-off distance always comes for free with the non-bonded communication.",
      "Atoms beyond the non-bonded cut-off are only communicated when they have",
      "missing bonded interactions; this means that the extra cost is minor",
      "and nearly indepedent of the value of [TT]-rdd[tt].",
      "With dynamic load balancing option [TT]-rdd[tt] also sets",
      "the lower limit for the domain decomposition cell sizes.",
      "By default [TT]-rdd[tt] is determined by [TT]mdrun[tt] based on",
      "the initial coordinates. The chosen value will be a balance",
      "between interaction range and communication cost.",
      "[PAR]",
      "When inter charge-group bonded interactions are beyond",
      "the bonded cut-off distance, [TT]mdrun[tt] terminates with an error message.",
      "For pair interactions and tabulated bonds",
      "that do not generate exclusions, this check can be turned off",
      "with the option [TT]-noddcheck[tt].",
      "[PAR]",
      "When constraints are present, option [TT]-rcon[tt] influences",
      "the cell size limit as well.",
      "Atoms connected by NC constraints, where NC is the LINCS order plus 1,",
      "should not be beyond the smallest cell size. A error message is",
      "generated when this happens and the user should change the decomposition",
      "or decrease the LINCS order and increase the number of LINCS iterations.",
      "By default [TT]mdrun[tt] estimates the minimum cell size required for P-LINCS",
      "in a conservative fashion. For high parallelization it can be useful",
      "to set the distance required for P-LINCS with the option [TT]-rcon[tt].",
      "[PAR]",
      "The [TT]-dds[tt] option sets the minimum allowed x, y and/or z scaling",
      "of the cells with dynamic load balancing. [TT]mdrun[tt] will ensure that",
      "the cells can scale down by at least this factor. This option is used",
      "for the automated spatial decomposition (when not using [TT]-dd[tt])",
      "as well as for determining the number of grid pulses, which in turn",
      "sets the minimum allowed cell size. Under certain circumstances",
      "the value of [TT]-dds[tt] might need to be adjusted to account for",
      "high or low spatial inhomogeneity of the system.",
      "[PAR]",
      "The option [TT]-gcom[tt] can be used to only do global communication",
      "every n steps.",
      "This can improve performance for highly parallel simulations",
      "where this global communication step becomes the bottleneck.",
      "For a global thermostat and/or barostat the temperature",
      "and/or pressure will also only be updated every [TT]-gcom[tt] steps.",
      "By default it is set to the minimum of nstcalcenergy and nstlist.[PAR]",
      "With [TT]-rerun[tt] an input trajectory can be given for which ",
      "forces and energies will be (re)calculated. Neighbor searching will be",
      "performed for every frame, unless [TT]nstlist[tt] is zero",
      "(see the [TT].mdp[tt] file).[PAR]",
      "ED (essential dynamics) sampling and/or additional flooding potentials",
      "are switched on by using the [TT]-ei[tt] flag followed by an [TT].edi[tt]",
      "file. The [TT].edi[tt] file can be produced with the [TT]make_edi[tt] tool",
      "or by using options in the essdyn menu of the WHAT IF program.",
      "[TT]mdrun[tt] produces a [TT].xvg[tt] output file that",
      "contains projections of positions, velocities and forces onto selected",
      "eigenvectors.[PAR]",
      "When user-defined potential functions have been selected in the",
      "[TT].mdp[tt] file the [TT]-table[tt] option is used to pass [TT]mdrun[tt]",
      "a formatted table with potential functions. The file is read from",
      "either the current directory or from the [TT]GMXLIB[tt] directory.",
      "A number of pre-formatted tables are presented in the [TT]GMXLIB[tt] dir,",
      "for 6-8, 6-9, 6-10, 6-11, 6-12 Lennard-Jones potentials with",
      "normal Coulomb.",
      "When pair interactions are present, a separate table for pair interaction",
      "functions is read using the [TT]-tablep[tt] option.[PAR]",
      "When tabulated bonded functions are present in the topology,",
      "interaction functions are read using the [TT]-tableb[tt] option.",
      "For each different tabulated interaction type the table file name is",
      "modified in a different way: before the file extension an underscore is",
      "appended, then a 'b' for bonds, an 'a' for angles or a 'd' for dihedrals",
      "and finally the table number of the interaction type.[PAR]",
      "The options [TT]-px[tt] and [TT]-pf[tt] are used for writing pull COM",
      "coordinates and forces when pulling is selected",
      "in the [TT].mdp[tt] file.[PAR]",
      "With [TT]-multi[tt] or [TT]-multidir[tt], multiple systems can be ",
      "simulated in parallel.",
      "As many input files/directories are required as the number of systems. ",
      "The [TT]-multidir[tt] option takes a list of directories (one for each ",
      "system) and runs in each of them, using the input/output file names, ",
      "such as specified by e.g. the [TT]-s[tt] option, relative to these ",
      "directories.",
      "With [TT]-multi[tt], the system number is appended to the run input ",
      "and each output filename, for instance [TT]topol.tpr[tt] becomes",
      "[TT]topol0.tpr[tt], [TT]topol1.tpr[tt] etc.",
      "The number of ranks per system is the total number of ranks",
      "divided by the number of systems.",
      "One use of this option is for NMR refinement: when distance",
      "or orientation restraints are present these can be ensemble averaged",
      "over all the systems.[PAR]",
      "With [TT]-replex[tt] replica exchange is attempted every given number",
      "of steps. The number of replicas is set with the [TT]-multi[tt] or ",
      "[TT]-multidir[tt] option, described above.",
      "All run input files should use a different coupling temperature,",
      "the order of the files is not important. The random seed is set with",
      "[TT]-reseed[tt]. The velocities are scaled and neighbor searching",
      "is performed after every exchange.[PAR]",
      "Finally some experimental algorithms can be tested when the",
      "appropriate options have been given. Currently under",
      "investigation are: polarizability.",
      "[PAR]",
      "The option [TT]-membed[tt] does what used to be g_membed, i.e. embed",
      "a protein into a membrane. The data file should contain the options",
      "that where passed to g_membed before. The [TT]-mn[tt] and [TT]-mp[tt]",
      "both apply to this as well.",
      "[PAR]",
      "The option [TT]-pforce[tt] is useful when you suspect a simulation",
      "crashes due to too large forces. With this option coordinates and",
      "forces of atoms with a force larger than a certain value will",
      "be printed to stderr.",
      "[PAR]",
      "Checkpoints containing the complete state of the system are written",
      "at regular intervals (option [TT]-cpt[tt]) to the file [TT]-cpo[tt],",
      "unless option [TT]-cpt[tt] is set to -1.",
      "The previous checkpoint is backed up to [TT]state_prev.cpt[tt] to",
      "make sure that a recent state of the system is always available,",
      "even when the simulation is terminated while writing a checkpoint.",
      "With [TT]-cpnum[tt] all checkpoint files are kept and appended",
      "with the step number.",
      "A simulation can be continued by reading the full state from file",
      "with option [TT]-cpi[tt]. This option is intelligent in the way that",
      "if no checkpoint file is found, Gromacs just assumes a normal run and",
      "starts from the first step of the [TT].tpr[tt] file. By default the output",
      "will be appending to the existing output files. The checkpoint file",
      "contains checksums of all output files, such that you will never",
      "loose data when some output files are modified, corrupt or removed.",
      "There are three scenarios with [TT]-cpi[tt]:[PAR]",
      "[TT]*[tt] no files with matching names are present: new output files are written[PAR]",
      "[TT]*[tt] all files are present with names and checksums matching those stored",
      "in the checkpoint file: files are appended[PAR]",
      "[TT]*[tt] otherwise no files are modified and a fatal error is generated[PAR]",
      "With [TT]-noappend[tt] new output files are opened and the simulation",
      "part number is added to all output file names.",
      "Note that in all cases the checkpoint file itself is not renamed",
      "and will be overwritten, unless its name does not match",
      "the [TT]-cpo[tt] option.",
      "[PAR]",
      "With checkpointing the output is appended to previously written",
      "output files, unless [TT]-noappend[tt] is used or none of the previous",
      "output files are present (except for the checkpoint file).",
      "The integrity of the files to be appended is verified using checksums",
      "which are stored in the checkpoint file. This ensures that output can",
      "not be mixed up or corrupted due to file appending. When only some",
      "of the previous output files are present, a fatal error is generated",
      "and no old output files are modified and no new output files are opened.",
      "The result with appending will be the same as from a single run.",
      "The contents will be binary identical, unless you use a different number",
      "of ranks or dynamic load balancing or the FFT library uses optimizations",
      "through timing.",
      "[PAR]",
      "With option [TT]-maxh[tt] a simulation is terminated and a checkpoint",
      "file is written at the first neighbor search step where the run time",
      "exceeds [TT]-maxh[tt]*0.99 hours.",
      "[PAR]",
      "When [TT]mdrun[tt] receives a TERM signal, it will set nsteps to the current",
      "step plus one. When [TT]mdrun[tt] receives an INT signal (e.g. when ctrl+C is",
      "pressed), it will stop after the next neighbor search step ",
      "(with nstlist=0 at the next step).",
      "In both cases all the usual output will be written to file.",
      "When running with MPI, a signal to one of the [TT]mdrun[tt] ranks",
      "is sufficient, this signal should not be sent to mpirun or",
      "the [TT]mdrun[tt] process that is the parent of the others.",
      "[PAR]",
      "Interactive molecular dynamics (IMD) can be activated by using at least one",
      "of the three IMD switches: The [TT]-imdterm[tt] switch allows to terminate the",
      "simulation from the molecular viewer (e.g. VMD). With [TT]-imdwait[tt],",
      "[TT]mdrun[tt] pauses whenever no IMD client is connected. Pulling from the",
      "IMD remote can be turned on by [TT]-imdpull[tt].",
      "The port [TT]mdrun[tt] listens to can be altered by [TT]-imdport[tt].The",
      "file pointed to by [TT]-if[tt] contains atom indices and forces if IMD",
      "pulling is used."
	 "[PAR]",
      "When [TT]mdrun[tt] is started with MPI, it does not run niced by default."
   };
   t_commrec    *cr;
   t_filenm      fnm[] = {
      { efTPX, NULL,      NULL,       ffREAD },
      { efTRN, "-o",      NULL,       ffWRITE },
      { efCOMPRESSED, "-x", NULL,     ffOPTWR },
      { efCPT, "-cpi",    NULL,       ffOPTRD },
      { efCPT, "-cpo",    NULL,       ffOPTWR },
      { efSTO, "-c",      "confout",  ffWRITE },
      { efEDR, "-e",      "ener",     ffWRITE },
      { efLOG, "-g",      "md",       ffWRITE },
      { efXVG, "-dhdl",   "dhdl",     ffOPTWR },
      { efXVG, "-field",  "field",    ffOPTWR },
      { efXVG, "-table",  "table",    ffOPTRD },
      { efXVG, "-tabletf", "tabletf",    ffOPTRD },
      { efXVG, "-tablep", "tablep",   ffOPTRD },
      { efXVG, "-tableb", "table",    ffOPTRD },
      { efTRX, "-rerun",  "rerun",    ffOPTRD },
      { efXVG, "-tpi",    "tpi",      ffOPTWR },
      { efXVG, "-tpid",   "tpidist",  ffOPTWR },
      { efEDI, "-ei",     "sam",      ffOPTRD },
      { efXVG, "-eo",     "edsam",    ffOPTWR },
      { efXVG, "-devout", "deviatie", ffOPTWR },
      { efXVG, "-runav",  "runaver",  ffOPTWR },
      { efXVG, "-px",     "pullx",    ffOPTWR },
      { efXVG, "-pf",     "pullf",    ffOPTWR },
      { efXVG, "-ro",     "rotation", ffOPTWR },
      { efLOG, "-ra",     "rotangles", ffOPTWR },
      { efLOG, "-rs",     "rotslabs", ffOPTWR },
      { efLOG, "-rt",     "rottorque", ffOPTWR },
      { efMTX, "-mtx",    "nm",       ffOPTWR },
      { efNDX, "-dn",     "dipole",   ffOPTWR },
      { efRND, "-multidir", NULL,      ffOPTRDMULT},
      { efDAT, "-membed", "membed",   ffOPTRD },
      { efTOP, "-mp",     "membed",   ffOPTRD },
      { efNDX, "-mn",     "membed",   ffOPTRD },
      { efXVG, "-if",     "imdforces", ffOPTWR },
      { efXVG, "-swap",   "swapions", ffOPTWR },
      { efMDP, "-at",     NULL,       ffOPTRD },  /* at.cfg, only for do_md */
      { efMDP, "-addtop", NULL,       ffOPTRD },  /* add additional topology */
   };
#define NFILE asize(fnm)

   /* Command line options ! */
   gmx_bool        bDDBondCheck  = TRUE;
   gmx_bool        bDDBondComm   = TRUE;
   gmx_bool        bTunePME      = TRUE;
   gmx_bool        bTestVerlet   = FALSE;
   gmx_bool        bVerbose      = FALSE;
   gmx_bool        bCompact      = TRUE;
   gmx_bool        bSepPot       = FALSE;
   gmx_bool        bRerunVSite   = FALSE;
   gmx_bool        bConfout      = TRUE;
   gmx_bool        bReproducible = FALSE;
   gmx_bool        bIMDwait      = FALSE;
   gmx_bool        bIMDterm      = FALSE;
   gmx_bool        bIMDpull      = FALSE;

   int             npme          = -1;
   int             nstlist       = 0;
   int             nmultisim     = 0;
   int             nstglobalcomm = -1;
   int             repl_ex_nst   = 0;
   int             repl_ex_seed  = -1;
   int             repl_ex_nex   = 0;
   int             nstepout      = 100;
   int             resetstep     = -1;
   gmx_int64_t     nsteps        = -2;   /* the value -2 means that the mdp option will be used */
   int             imdport       = 8888; /* can be almost anything, 8888 is easy to remember */

   rvec            realddxyz          = {0, 0, 0};
   const char     *ddno_opt[ddnoNR+1] =
   { NULL, "interleave", "pp_pme", "cartesian", NULL };
   const char     *dddlb_opt[] =
   { NULL, "auto", "no", "yes", NULL };
   const char     *thread_aff_opt[threadaffNR+1] =
   { NULL, "auto", "on", "off", NULL };
   const char     *nbpu_opt[] =
   { NULL, "auto", "cpu", "gpu", "gpu_cpu", NULL };
   real            rdd                   = 0.0, rconstr = 0.0, dlb_scale = 0.8, pforce = -1;
   char           *ddcsx                 = NULL, *ddcsy = NULL, *ddcsz = NULL;
   real            cpt_period            = 15.0, max_hours = -1;
   gmx_bool        bAppendFiles          = TRUE;
   gmx_bool        bKeepAndNumCPT        = FALSE;
   gmx_bool        bResetCountersHalfWay = FALSE;
   output_env_t    oenv                  = NULL;
   const char     *deviceOptions         = "";

   /* Non-transparent initialization of a complex gmx_hw_opt_t struct.
    * Unfortunately we are not allowed to call a function here,
    * since declarations follow below.
    */
   gmx_hw_opt_t    hw_opt = {
      0, 0, 0, 0, threadaffSEL, 0, 0,
      { NULL, FALSE, 0, NULL }
   };

   t_pargs         pa[] = {

      { "-dd",      FALSE, etRVEC, {&realddxyz},
	 "Domain decomposition grid, 0 is optimize" },
      { "-ddorder", FALSE, etENUM, {ddno_opt},
	 "DD rank order" },
      { "-npme",    FALSE, etINT, {&npme},
	 "Number of separate ranks to be used for PME, -1 is guess" },
      { "-nt",      FALSE, etINT, {&hw_opt.nthreads_tot},
	 "Total number of threads to start (0 is guess)" },
      { "-ntmpi",   FALSE, etINT, {&hw_opt.nthreads_tmpi},
	 "Number of thread-MPI threads to start (0 is guess)" },
      { "-ntomp",   FALSE, etINT, {&hw_opt.nthreads_omp},
	 "Number of OpenMP threads per MPI rank to start (0 is guess)" },
      { "-ntomp_pme", FALSE, etINT, {&hw_opt.nthreads_omp_pme},
	 "Number of OpenMP threads per MPI rank to start (0 is -ntomp)" },
      { "-pin",     FALSE, etENUM, {thread_aff_opt},
	 "Set thread affinities" },
      { "-pinoffset", FALSE, etINT, {&hw_opt.core_pinning_offset},
	 "The starting logical core number for pinning to cores; used to avoid pinning threads from different mdrun instances to the same core" },
      { "-pinstride", FALSE, etINT, {&hw_opt.core_pinning_stride},
	 "Pinning distance in logical cores for threads, use 0 to minimize the number of threads per physical core" },
      { "-gpu_id",  FALSE, etSTR, {&hw_opt.gpu_opt.gpu_id},
	 "List of GPU device id-s to use, specifies the per-node PP rank to GPU mapping" },
      { "-ddcheck", FALSE, etBOOL, {&bDDBondCheck},
	 "Check for all bonded interactions with DD" },
      { "-ddbondcomm", FALSE, etBOOL, {&bDDBondComm},
	 "HIDDENUse special bonded atom communication when [TT]-rdd[tt] > cut-off" },
      { "-rdd",     FALSE, etREAL, {&rdd},
	 "The maximum distance for bonded interactions with DD (nm), 0 is determine from initial coordinates" },
      { "-rcon",    FALSE, etREAL, {&rconstr},
	 "Maximum distance for P-LINCS (nm), 0 is estimate" },
      { "-dlb",     FALSE, etENUM, {dddlb_opt},
	 "Dynamic load balancing (with DD)" },
      { "-dds",     FALSE, etREAL, {&dlb_scale},
	 "Fraction in (0,1) by whose reciprocal the initial DD cell size will be increased in order to "
	    "provide a margin in which dynamic load balancing can act while preserving the minimum cell size." },
      { "-ddcsx",   FALSE, etSTR, {&ddcsx},
	 "HIDDENA string containing a vector of the relative sizes in the x "
	    "direction of the corresponding DD cells. Only effective with static "
	    "load balancing." },
      { "-ddcsy",   FALSE, etSTR, {&ddcsy},
	 "HIDDENA string containing a vector of the relative sizes in the y "
	    "direction of the corresponding DD cells. Only effective with static "
	    "load balancing." },
      { "-ddcsz",   FALSE, etSTR, {&ddcsz},
	 "HIDDENA string containing a vector of the relative sizes in the z "
	    "direction of the corresponding DD cells. Only effective with static "
	    "load balancing." },
      { "-gcom",    FALSE, etINT, {&nstglobalcomm},
	 "Global communication frequency" },
      { "-nb",      FALSE, etENUM, {&nbpu_opt},
	 "Calculate non-bonded interactions on" },
      { "-nstlist", FALSE, etINT, {&nstlist},
	 "Set nstlist when using a Verlet buffer tolerance (0 is guess)" },
      { "-tunepme", FALSE, etBOOL, {&bTunePME},
	 "Optimize PME load between PP/PME ranks or GPU/CPU" },
      { "-testverlet", FALSE, etBOOL, {&bTestVerlet},
	 "Test the Verlet non-bonded scheme" },
      { "-v",       FALSE, etBOOL, {&bVerbose},
	 "Be loud and noisy" },
      { "-compact", FALSE, etBOOL, {&bCompact},
	 "Write a compact log file" },
      { "-seppot",  FALSE, etBOOL, {&bSepPot},
	 "Write separate V and dVdl terms for each interaction type and rank to the log file(s)" },
      { "-pforce",  FALSE, etREAL, {&pforce},
	 "Print all forces larger than this (kJ/mol nm)" },
      { "-reprod",  FALSE, etBOOL, {&bReproducible},
	 "Try to avoid optimizations that affect binary reproducibility" },
      { "-cpt",     FALSE, etREAL, {&cpt_period},
	 "Checkpoint interval (minutes)" },
      { "-cpnum",   FALSE, etBOOL, {&bKeepAndNumCPT},
	 "Keep and number checkpoint files" },
      { "-append",  FALSE, etBOOL, {&bAppendFiles},
	 "Append to previous output files when continuing from checkpoint instead of adding the simulation part number to all file names" },
      { "-nsteps",  FALSE, etINT64, {&nsteps},
	 "Run this number of steps, overrides .mdp file option" },
      { "-maxh",   FALSE, etREAL, {&max_hours},
	 "Terminate after 0.99 times this time (hours)" },
      { "-multi",   FALSE, etINT, {&nmultisim},
	 "Do multiple simulations in parallel" },
      { "-replex",  FALSE, etINT, {&repl_ex_nst},
	 "Attempt replica exchange periodically with this period (steps)" },
      { "-nex",  FALSE, etINT, {&repl_ex_nex},
	 "Number of random exchanges to carry out each exchange interval (N^3 is one suggestion).  -nex zero or not specified gives neighbor replica exchange." },
      { "-reseed",  FALSE, etINT, {&repl_ex_seed},
	 "Seed for replica exchange, -1 is generate a seed" },
      { "-imdport",    FALSE, etINT, {&imdport},
	 "HIDDENIMD listening port" },
      { "-imdwait",  FALSE, etBOOL, {&bIMDwait},
	 "HIDDENPause the simulation while no IMD client is connected" },
      { "-imdterm",  FALSE, etBOOL, {&bIMDterm},
	 "HIDDENAllow termination of the simulation from IMD client" },
      { "-imdpull",  FALSE, etBOOL, {&bIMDpull},
	 "HIDDENAllow pulling in the simulation from IMD client" },
      { "-rerunvsite", FALSE, etBOOL, {&bRerunVSite},
	 "HIDDENRecalculate virtual site coordinates with [TT]-rerun[tt]" },
      { "-confout", FALSE, etBOOL, {&bConfout},
	 "HIDDENWrite the last configuration with [TT]-c[tt] and force checkpointing at the last step" },
      { "-stepout", FALSE, etINT, {&nstepout},
	 "HIDDENFrequency of writing the remaining wall clock time for the run" },
      { "-resetstep", FALSE, etINT, {&resetstep},
	 "HIDDENReset cycle counters after these many time steps" },
      { "-resethway", FALSE, etBOOL, {&bResetCountersHalfWay},
	 "HIDDENReset the cycle counters after half the number of steps or halfway [TT]-maxh[tt]" }
   };
   unsigned long   Flags, PCA_Flags;
   ivec            ddxyz;
   int             dd_node_order;
   gmx_bool        bAddPart;
   FILE           *fplog, *fpmulti;
   int             sim_part, sim_part_fn;
   const char     *part_suffix = ".part";
   char            suffix[STRLEN];
   int             rc;
   char          **multidir = NULL;


   cr = init_commrec();

   PCA_Flags = (PCA_CAN_SET_DEFFNM | (MASTER(cr) ? 0 : PCA_QUIET));

   /* Comment this in to do fexist calls only on master;
    * does not work with rerun or tables at the moment.
    * Also comment out the version of init_forcerec in md.c
    * with NULL instead of opt2fn.
    */
   /*
      if (!MASTER(cr))
      {
      PCA_Flags |= PCA_NOT_READ_NODE;
      }
      */

   if (!parse_common_args(&argc, argv, PCA_Flags, NFILE, fnm, asize(pa), pa,
	    asize(desc), desc, 0, NULL, &oenv))
   {
      return 0;
   }


   /* We set these early because they might be used in init_multisystem().
      Note that there is the potential for npme > nnodes until the number of
      threads is set later on, if there's thread parallelization. That shouldn't
      lead to problems. */
   dd_node_order = nenum(ddno_opt);
   cr->npmenodes = npme;

   hw_opt.thread_affinity = nenum(thread_aff_opt);

   /* now check the -multi and -multidir option */
   if (opt2bSet("-multidir", NFILE, fnm))
   {
      if (nmultisim > 0)
      {
	 gmx_fatal(FARGS, "mdrun -multi and -multidir options are mutually exclusive.");
      }
      nmultisim = opt2fns(&multidir, "-multidir", NFILE, fnm);
   }


   if (repl_ex_nst != 0 && nmultisim < 2)
   {
      gmx_fatal(FARGS, "Need at least two replicas for replica exchange (option -multi)");
   }

   if (repl_ex_nex < 0)
   {
      gmx_fatal(FARGS, "Replica exchange number of exchanges needs to be positive");
   }

   if (nmultisim > 1)
   {
#ifndef GMX_THREAD_MPI
      gmx_bool bParFn = (multidir == NULL);
      init_multisystem(cr, nmultisim, multidir, NFILE, fnm, bParFn);
#else
      gmx_fatal(FARGS, "mdrun -multi is not supported with the thread library. "
	    "Please compile GROMACS with MPI support");
#endif
   }

   bAddPart = !bAppendFiles;

   /* Check if there is ANY checkpoint file available */
   sim_part    = 1;
   sim_part_fn = sim_part;
   if (opt2bSet("-cpi", NFILE, fnm))
   {
      if (bSepPot && bAppendFiles)
      {
	 gmx_fatal(FARGS, "Output file appending is not supported with -seppot");
      }

      bAppendFiles =
	 read_checkpoint_simulation_part(opt2fn_master("-cpi", NFILE,
		  fnm, cr),
	       &sim_part_fn, NULL, cr,
	       bAppendFiles, NFILE, fnm,
	       part_suffix, &bAddPart);
      if (sim_part_fn == 0 && MULTIMASTER(cr))
      {
	 fprintf(stdout, "No previous checkpoint file present, assuming this is a new run.\n");
      }
      else
      {
	 sim_part = sim_part_fn + 1;
      }

      if (MULTISIM(cr) && MASTER(cr))
      {
	 if (MULTIMASTER(cr))
	 {
	    /* Log file is not yet available, so if there's a
	     * problem we can only write to stderr. */
	    fpmulti = stderr;
	 }
	 else
	 {
	    fpmulti = NULL;
	 }
	 check_multi_int(fpmulti, cr->ms, sim_part, "simulation part", TRUE);
      }
   }
   else
   {
      bAppendFiles = FALSE;
   }

   if (!bAppendFiles)
   {
      sim_part_fn = sim_part;
   }

   if (bAddPart)
   {
      /* Rename all output files (except checkpoint files) */
      /* create new part name first (zero-filled) */
      sprintf(suffix, "%s%04d", part_suffix, sim_part_fn);

      add_suffix_to_output_names(fnm, NFILE, suffix);
      if (MULTIMASTER(cr))
      {
	 fprintf(stdout, "Checkpoint file is from part %d, new output files will be suffixed '%s'.\n", sim_part-1, suffix);
      }
   }

   Flags = opt2bSet("-rerun", NFILE, fnm) ? MD_RERUN : 0;
   Flags = Flags | (bSepPot       ? MD_SEPPOT       : 0);
   Flags = Flags | (bDDBondCheck  ? MD_DDBONDCHECK  : 0);
   Flags = Flags | (bDDBondComm   ? MD_DDBONDCOMM   : 0);
   Flags = Flags | (bTunePME      ? MD_TUNEPME      : 0);
   Flags = Flags | (bTestVerlet   ? MD_TESTVERLET   : 0);
   Flags = Flags | (bConfout      ? MD_CONFOUT      : 0);
   Flags = Flags | (bRerunVSite   ? MD_RERUN_VSITE  : 0);
   Flags = Flags | (bReproducible ? MD_REPRODUCIBLE : 0);
   Flags = Flags | (bAppendFiles  ? MD_APPENDFILES  : 0);
   Flags = Flags | (opt2parg_bSet("-append", asize(pa), pa) ? MD_APPENDFILESSET : 0);
   Flags = Flags | (bKeepAndNumCPT ? MD_KEEPANDNUMCPT : 0);
   Flags = Flags | (sim_part > 1    ? MD_STARTFROMCPT : 0);
   Flags = Flags | (bResetCountersHalfWay ? MD_RESETCOUNTERSHALFWAY : 0);
   Flags = Flags | (bIMDwait      ? MD_IMDWAIT      : 0);
   Flags = Flags | (bIMDterm      ? MD_IMDTERM      : 0);
   Flags = Flags | (bIMDpull      ? MD_IMDPULL      : 0);
   Flags = Flags | ((opt2bSet("-at", NFILE, fnm)) ? MD_ADAPTIVETEMPERING : 0);
   Flags = Flags | ((opt2bSet("-addtop", NFILE, fnm)) ? MD_MULTOP : 0);

   /* We postpone opening the log file if we are appending, so we can
      first truncate the old log file and append to the correct position
      there instead.  */
   if ((MASTER(cr) || bSepPot) && !bAppendFiles)
   {
      gmx_log_open(ftp2fn(efLOG, NFILE, fnm), cr,
	    !bSepPot, Flags & MD_APPENDFILES, &fplog);
      please_cite(fplog, "Hess2008b");
      please_cite(fplog, "Spoel2005a");
      please_cite(fplog, "Lindahl2001a");
      please_cite(fplog, "Berendsen95a");
   }
   else if (!MASTER(cr) && bSepPot)
   {
      gmx_log_open(ftp2fn(efLOG, NFILE, fnm), cr, !bSepPot, Flags, &fplog);
   }
   else
   {
      fplog = NULL;
   }
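   /* Round the real-valued -dd grid vector to integer cell counts;
    * adding 0.5 before truncation rounds the non-negative values to
    * the nearest integer. */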

   ddxyz[XX] = (int)(realddxyz[XX] + 0.5);
   ddxyz[YY] = (int)(realddxyz[YY] + 0.5);
   ddxyz[ZZ] = (int)(realddxyz[ZZ] + 0.5);

   rc = mdrunner(&hw_opt, fplog, cr, NFILE, fnm, oenv, bVerbose, bCompact,
	 nstglobalcomm, ddxyz, dd_node_order, rdd, rconstr,
	 dddlb_opt[0], dlb_scale, ddcsx, ddcsy, ddcsz,
	 nbpu_opt[0], nstlist,
	 nsteps, nstepout, resetstep,
	 nmultisim, repl_ex_nst, repl_ex_nex, repl_ex_seed,
	 pforce, cpt_period, max_hours, deviceOptions, imdport, Flags);

   /* Log file has to be closed in mdrunner if we are appending to it
      (fplog not set here) */
   if (MASTER(cr) && !bAppendFiles)
   {
      gmx_log_close(fplog);
   }

   return rc;
}
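
A side note on the desc[] arrays above: C concatenates adjacent string
literals, so a missing comma between two entries silently merges them into a
single array element with no separating space (the help formatter inserts
spaces only between separate elements). A minimal, self-contained sketch of
the pitfall (the strings are illustrative, not from the source):

#include <stdio.h>

int main(void)
{
    /* The comma after the first literal is missing, so the two lines
     * merge into ONE array element: "...node,you should set ...". */
    const char *desc[] = {
        "If you want to run multiple mdrun jobs on the same physical node,"
        "you should set -pinstride to 1 when using all logical cores.",
    };
    printf("%s\n", desc[0]);
    return 0;
}
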
Example 2
int main(int argc, char *argv[])
{
  const char *desc[] = {
#ifdef GMX_OPENMM
    "This is an experimental release of GROMACS for accelerated",
	"Molecular Dynamics simulations on GPU processors. Support is provided",
	"by the OpenMM library (https://simtk.org/home/openmm).[PAR]",
	"*Warning*[BR]",
	"This release is targeted at developers and advanced users and",
	"care should be taken before production use. The following should be",
	"noted before using the program:[PAR]",
	" * The current release runs only on modern nVidia GPU hardware with CUDA support.",
	"Make sure that the necessary CUDA drivers and libraries for your operating system",
	"are already installed. The CUDA SDK also should be installed in order to compile",
	"the program from source (http://www.nvidia.com/object/cuda_home.html).[PAR]",
	" * Multiple GPU cards are not supported.[PAR]",
	" * Only a small subset of the GROMACS features and options are supported on the GPUs.",
	"See below for a detailed list.[PAR]",
	" * Consumer level GPU cards are known to often have problems with faulty memory.",
	"It is recommended that a full memory check of the cards is done at least once",
	"(for example, using the memtest=full option).",
	"A partial memory check (for example, memtest=15) before and",
	"after the simulation run would help spot",
	"problems resulting from processor overheating.[PAR]",
	" * The maximum size of the simulated systems depends on the available",
	"GPU memory,for example, a GTX280 with 1GB memory has been tested with systems",
	"of up to about 100,000 atoms.[PAR]",
	" * In order to take a full advantage of the GPU platform features, many algorithms",
	"have been implemented in a very different way than they are on the CPUs.",
	"Therefore numercal correspondence between properties of the state of",
	"simulated systems should not be expected. Moreover, the values will likely vary",
	"when simulations are done on different GPU hardware.[PAR]",
	" * Frequent retrieval of system state information such as",
	"trajectory coordinates and energies can greatly influence the performance",
	"of the program due to slow CPU<->GPU memory transfer speed.[PAR]",
	" * MD algorithms are complex, and although the Gromacs code is highly tuned for them,",
	"they often do not translate very well onto the streaming architetures.",
	"Realistic expectations about the achievable speed-up from test with GTX280:",
	"For small protein systems in implicit solvent using all-vs-all kernels the acceleration",
	"can be as high as 20 times, but in most other setups involving cutoffs and PME the",
	"acceleration is usually only ~4 times relative to a 3GHz CPU.[PAR]",
	"Supported features:[PAR]",
	" * Integrators: md/md-vv/md-vv-avek, sd/sd1 and bd.\n",
	" * Long-range interactions (option coulombtype): Reaction-Field, Ewald, PME, and cut-off (for Implicit Solvent only)\n",
	" * Temperature control: Supported only with the md/md-vv/md-vv-avek, sd/sd1 and bd integrators.\n",
	" * Pressure control: Supported.\n",
	" * Implicit solvent: Supported.\n",
	"A detailed description can be found on the GROMACS website:\n",
	"http://www.gromacs.org/gpu[PAR]",
/* From the original mdrun documentation */
    "The [TT]mdrun[tt] program reads the run input file ([TT]-s[tt])",
    "and distributes the topology over nodes if needed.",
    "[TT]mdrun[tt] produces at least four output files.",
    "A single log file ([TT]-g[tt]) is written, unless the option",
    "[TT]-seppot[tt] is used, in which case each node writes a log file.",
    "The trajectory file ([TT]-o[tt]), contains coordinates, velocities and",
    "optionally forces.",
    "The structure file ([TT]-c[tt]) contains the coordinates and",
    "velocities of the last step.",
    "The energy file ([TT]-e[tt]) contains energies, the temperature,",
    "pressure, etc, a lot of these things are also printed in the log file.",
    "Optionally coordinates can be written to a compressed trajectory file",
    "([TT]-x[tt]).[PAR]",
/* openmm specific information */
	"Usage with OpenMM:[BR]",
	"[TT]mdrun -device \"OpenMM:platform=Cuda,memtest=15,deviceid=0,force-device=no\"[tt][PAR]",
	"Options:[PAR]",
	"      [TT]platform[tt] = Cuda\t\t:\tThe only available value. OpenCL support will be available in future.\n",
	"      [TT]memtest[tt] = 15\t\t:\tRun a partial, random GPU memory test for the given amount of seconds. A full test",
	"(recommended!) can be run with \"memtest=full\". Memory testing can be disabled with \"memtest=off\".\n",
	"      [TT]deviceid[tt] = 0\t\t:\tSpecify the target device when multiple cards are present.",
	"Only one card can be used at any given time though.\n",
	"      [TT]force-device[tt] = no\t\t:\tIf set to \"yes\" [TT]mdrun[tt]  will be forced to execute on",
	"hardware that is not officially supported. GPU acceleration can also be achieved on older",
	"but Cuda capable cards, although the simulation might be too slow, and the memory limits too strict.",
#else
    "The [TT]mdrun[tt] program is the main computational chemistry engine",
    "within GROMACS. Obviously, it performs Molecular Dynamics simulations,",
    "but it can also perform Stochastic Dynamics, Energy Minimization,",
    "test particle insertion or (re)calculation of energies.",
    "Normal mode analysis is another option. In this case [TT]mdrun[tt]",
    "builds a Hessian matrix from single conformation.",
    "For usual Normal Modes-like calculations, make sure that",
    "the structure provided is properly energy-minimized.",
    "The generated matrix can be diagonalized by [TT]g_nmeig[tt].[PAR]",
    "The [TT]mdrun[tt] program reads the run input file ([TT]-s[tt])",
    "and distributes the topology over nodes if needed.",
    "[TT]mdrun[tt] produces at least four output files.",
    "A single log file ([TT]-g[tt]) is written, unless the option",
    "[TT]-seppot[tt] is used, in which case each node writes a log file.",
    "The trajectory file ([TT]-o[tt]), contains coordinates, velocities and",
    "optionally forces.",
    "The structure file ([TT]-c[tt]) contains the coordinates and",
    "velocities of the last step.",
    "The energy file ([TT]-e[tt]) contains energies, the temperature,",
    "pressure, etc, a lot of these things are also printed in the log file.",
    "Optionally coordinates can be written to a compressed trajectory file",
    "([TT]-x[tt]).[PAR]",
    "The option [TT]-dhdl[tt] is only used when free energy calculation is",
    "turned on.[PAR]",
    "When [TT]mdrun[tt] is started using MPI with more than 1 node, parallelization",
    "is used. By default domain decomposition is used, unless the [TT]-pd[tt]",
    "option is set, which selects particle decomposition.[PAR]",
    "With domain decomposition, the spatial decomposition can be set",
    "with option [TT]-dd[tt]. By default [TT]mdrun[tt] selects a good decomposition.",
    "The user only needs to change this when the system is very inhomogeneous.",
    "Dynamic load balancing is set with the option [TT]-dlb[tt],",
    "which can give a significant performance improvement,",
    "especially for inhomogeneous systems. The only disadvantage of",
    "dynamic load balancing is that runs are no longer binary reproducible,",
    "but in most cases this is not important.",
    "By default the dynamic load balancing is automatically turned on",
    "when the measured performance loss due to load imbalance is 5% or more.",
    "At low parallelization these are the only important options",
    "for domain decomposition.",
    "At high parallelization the options in the next two sections",
    "could be important for increasing the performace.",
    "[PAR]",
    "When PME is used with domain decomposition, separate nodes can",
    "be assigned to do only the PME mesh calculation;",
    "this is computationally more efficient starting at about 12 nodes.",
    "The number of PME nodes is set with option [TT]-npme[tt],",
    "this can not be more than half of the nodes.",
    "By default [TT]mdrun[tt] makes a guess for the number of PME",
    "nodes when the number of nodes is larger than 11 or performance wise",
    "not compatible with the PME grid x dimension.",
    "But the user should optimize npme. Performance statistics on this issue",
    "are written at the end of the log file.",
    "For good load balancing at high parallelization, the PME grid x and y",
    "dimensions should be divisible by the number of PME nodes",
    "(the simulation will run correctly also when this is not the case).",
    "[PAR]",
    "This section lists all options that affect the domain decomposition.",
    "[PAR]",
    "Option [TT]-rdd[tt] can be used to set the required maximum distance",
    "for inter charge-group bonded interactions.",
    "Communication for two-body bonded interactions below the non-bonded",
    "cut-off distance always comes for free with the non-bonded communication.",
    "Atoms beyond the non-bonded cut-off are only communicated when they have",
    "missing bonded interactions; this means that the extra cost is minor",
    "and nearly indepedent of the value of [TT]-rdd[tt].",
    "With dynamic load balancing option [TT]-rdd[tt] also sets",
    "the lower limit for the domain decomposition cell sizes.",
    "By default [TT]-rdd[tt] is determined by [TT]mdrun[tt] based on",
    "the initial coordinates. The chosen value will be a balance",
    "between interaction range and communication cost.",
    "[PAR]",
    "When inter charge-group bonded interactions are beyond",
    "the bonded cut-off distance, [TT]mdrun[tt] terminates with an error message.",
    "For pair interactions and tabulated bonds",
    "that do not generate exclusions, this check can be turned off",
    "with the option [TT]-noddcheck[tt].",
    "[PAR]",
    "When constraints are present, option [TT]-rcon[tt] influences",
    "the cell size limit as well.",
    "Atoms connected by NC constraints, where NC is the LINCS order plus 1,",
    "should not be beyond the smallest cell size. A error message is",
    "generated when this happens and the user should change the decomposition",
    "or decrease the LINCS order and increase the number of LINCS iterations.",
    "By default [TT]mdrun[tt] estimates the minimum cell size required for P-LINCS",
    "in a conservative fashion. For high parallelization it can be useful",
    "to set the distance required for P-LINCS with the option [TT]-rcon[tt].",
    "[PAR]",
    "The [TT]-dds[tt] option sets the minimum allowed x, y and/or z scaling",
    "of the cells with dynamic load balancing. [TT]mdrun[tt] will ensure that",
    "the cells can scale down by at least this factor. This option is used",
    "for the automated spatial decomposition (when not using [TT]-dd[tt])",
    "as well as for determining the number of grid pulses, which in turn",
    "sets the minimum allowed cell size. Under certain circumstances",
    "the value of [TT]-dds[tt] might need to be adjusted to account for",
    "high or low spatial inhomogeneity of the system.",
    "[PAR]",
    "The option [TT]-gcom[tt] can be used to only do global communication",
    "every n steps.",
    "This can improve performance for highly parallel simulations",
    "where this global communication step becomes the bottleneck.",
    "For a global thermostat and/or barostat the temperature",
    "and/or pressure will also only be updated every [TT]-gcom[tt] steps.",
    "By default it is set to the minimum of nstcalcenergy and nstlist.[PAR]",
    "With [TT]-rerun[tt] an input trajectory can be given for which ",
    "forces and energies will be (re)calculated. Neighbor searching will be",
    "performed for every frame, unless [TT]nstlist[tt] is zero",
    "(see the [TT].mdp[tt] file).[PAR]",
    "ED (essential dynamics) sampling is switched on by using the [TT]-ei[tt]",
    "flag followed by an [TT].edi[tt] file.",
    "The [TT].edi[tt] file can be produced using options in the essdyn",
    "menu of the WHAT IF program. [TT]mdrun[tt] produces a [TT].edo[tt] file that",
    "contains projections of positions, velocities and forces onto selected",
    "eigenvectors.[PAR]",
    "When user-defined potential functions have been selected in the",
    "[TT].mdp[tt] file the [TT]-table[tt] option is used to pass [TT]mdrun[tt]",
    "a formatted table with potential functions. The file is read from",
    "either the current directory or from the [TT]GMXLIB[tt] directory.",
    "A number of pre-formatted tables are presented in the [TT]GMXLIB[tt] dir,",
    "for 6-8, 6-9, 6-10, 6-11, 6-12 Lennard-Jones potentials with",
    "normal Coulomb.",
    "When pair interactions are present, a separate table for pair interaction",
    "functions is read using the [TT]-tablep[tt] option.[PAR]",
    "When tabulated bonded functions are present in the topology,",
    "interaction functions are read using the [TT]-tableb[tt] option.",
    "For each different tabulated interaction type the table file name is",
    "modified in a different way: before the file extension an underscore is",
    "appended, then a 'b' for bonds, an 'a' for angles or a 'd' for dihedrals",
    "and finally the table number of the interaction type.[PAR]",
    "The options [TT]-px[tt] and [TT]-pf[tt] are used for writing pull COM",
    "coordinates and forces when pulling is selected",
    "in the [TT].mdp[tt] file.[PAR]",
    "With [TT]-multi[tt] or [TT]-multidir[tt], multiple systems can be ",
    "simulated in parallel.",
    "As many input files/directories are required as the number of systems. ",
    "The [TT]-multidir[tt] option takes a list of directories (one for each ",
    "system) and runs in each of them, using the input/output file names, ",
    "such as specified by e.g. the [TT]-s[tt] option, relative to these ",
    "directories.",
    "With [TT]-multi[tt], the system number is appended to the run input ",
    "and each output filename, for instance [TT]topol.tpr[tt] becomes",
    "[TT]topol0.tpr[tt], [TT]topol1.tpr[tt] etc.",
    "The number of nodes per system is the total number of nodes",
    "divided by the number of systems.",
    "One use of this option is for NMR refinement: when distance",
    "or orientation restraints are present these can be ensemble averaged",
    "over all the systems.[PAR]",
    "With [TT]-replex[tt] replica exchange is attempted every given number",
    "of steps. The number of replicas is set with the [TT]-multi[tt] or ",
    "[TT]-multidir[tt] option, described above.",
    "All run input files should use a different coupling temperature,",
    "the order of the files is not important. The random seed is set with",
    "[TT]-reseed[tt]. The velocities are scaled and neighbor searching",
    "is performed after every exchange.[PAR]",
    "Finally some experimental algorithms can be tested when the",
    "appropriate options have been given. Currently under",
    "investigation are: polarizability and X-ray bombardments.",
    "[PAR]",
    "The option [TT]-pforce[tt] is useful when you suspect a simulation",
    "crashes due to too large forces. With this option coordinates and",
    "forces of atoms with a force larger than a certain value will",
    "be printed to stderr.",
    "[PAR]",
    "Checkpoints containing the complete state of the system are written",
    "at regular intervals (option [TT]-cpt[tt]) to the file [TT]-cpo[tt],",
    "unless option [TT]-cpt[tt] is set to -1.",
    "The previous checkpoint is backed up to [TT]state_prev.cpt[tt] to",
    "make sure that a recent state of the system is always available,",
    "even when the simulation is terminated while writing a checkpoint.",
    "With [TT]-cpnum[tt] all checkpoint files are kept and appended",
    "with the step number.",
    "A simulation can be continued by reading the full state from file",
    "with option [TT]-cpi[tt]. This option is intelligent in the way that",
    "if no checkpoint file is found, Gromacs just assumes a normal run and",
    "starts from the first step of the [TT].tpr[tt] file. By default the output",
    "will be appending to the existing output files. The checkpoint file",
    "contains checksums of all output files, such that you will never",
    "loose data when some output files are modified, corrupt or removed.",
    "There are three scenarios with [TT]-cpi[tt]:[PAR]",
    "[TT]*[tt] no files with matching names are present: new output files are written[PAR]",
    "[TT]*[tt] all files are present with names and checksums matching those stored",
    "in the checkpoint file: files are appended[PAR]",
    "[TT]*[tt] otherwise no files are modified and a fatal error is generated[PAR]",
    "With [TT]-noappend[tt] new output files are opened and the simulation",
    "part number is added to all output file names.",
    "Note that in all cases the checkpoint file itself is not renamed",
    "and will be overwritten, unless its name does not match",
    "the [TT]-cpo[tt] option.",
    "[PAR]",
    "With checkpointing the output is appended to previously written",
    "output files, unless [TT]-noappend[tt] is used or none of the previous",
    "output files are present (except for the checkpoint file).",
    "The integrity of the files to be appended is verified using checksums",
    "which are stored in the checkpoint file. This ensures that output can",
    "not be mixed up or corrupted due to file appending. When only some",
    "of the previous output files are present, a fatal error is generated",
    "and no old output files are modified and no new output files are opened.",
    "The result with appending will be the same as from a single run.",
    "The contents will be binary identical, unless you use a different number",
    "of nodes or dynamic load balancing or the FFT library uses optimizations",
    "through timing.",
    "[PAR]",
    "With option [TT]-maxh[tt] a simulation is terminated and a checkpoint",
    "file is written at the first neighbor search step where the run time",
    "exceeds [TT]-maxh[tt]*0.99 hours.",
    "[PAR]",
    "When [TT]mdrun[tt] receives a TERM signal, it will set nsteps to the current",
    "step plus one. When [TT]mdrun[tt] receives an INT signal (e.g. when ctrl+C is",
    "pressed), it will stop after the next neighbor search step ",
    "(with nstlist=0 at the next step).",
    "In both cases all the usual output will be written to file.",
    "When running with MPI, a signal to one of the [TT]mdrun[tt] processes",
    "is sufficient, this signal should not be sent to mpirun or",
    "the [TT]mdrun[tt] process that is the parent of the others.",
    "[PAR]",
    "When [TT]mdrun[tt] is started with MPI, it does not run niced by default."
#endif
  };
  t_commrec    *cr;
  t_filenm fnm[] = {
    { efTPX, NULL,      NULL,       ffREAD },
    { efTRN, "-o",      NULL,       ffWRITE },
    { efXTC, "-x",      NULL,       ffOPTWR },
    { efCPT, "-cpi",    NULL,       ffOPTRD },
    { efCPT, "-cpo",    NULL,       ffOPTWR },
    { efSTO, "-c",      "confout",  ffWRITE },
    { efEDR, "-e",      "ener",     ffWRITE },
    { efLOG, "-g",      "md",       ffWRITE },
    { efXVG, "-dhdl",   "dhdl",     ffOPTWR },
    { efXVG, "-field",  "field",    ffOPTWR },
    { efXVG, "-table",  "table",    ffOPTRD },
    { efXVG, "-tabletf", "tabletf",    ffOPTRD },
    { efXVG, "-tablep", "tablep",   ffOPTRD },
    { efXVG, "-tableb", "table",    ffOPTRD },
    { efTRX, "-rerun",  "rerun",    ffOPTRD },
    { efXVG, "-tpi",    "tpi",      ffOPTWR },
    { efXVG, "-tpid",   "tpidist",  ffOPTWR },
    { efEDI, "-ei",     "sam",      ffOPTRD },
    { efEDO, "-eo",     "sam",      ffOPTWR },
    { efGCT, "-j",      "wham",     ffOPTRD },
    { efGCT, "-jo",     "bam",      ffOPTWR },
    { efXVG, "-ffout",  "gct",      ffOPTWR },
    { efXVG, "-devout", "deviatie", ffOPTWR },
    { efXVG, "-runav",  "runaver",  ffOPTWR },
    { efXVG, "-px",     "pullx",    ffOPTWR },
    { efXVG, "-pf",     "pullf",    ffOPTWR },
    { efXVG, "-ro",     "rotation", ffOPTWR },
    { efLOG, "-ra",     "rotangles",ffOPTWR },
    { efLOG, "-rs",     "rotslabs", ffOPTWR },
    { efLOG, "-rt",     "rottorque",ffOPTWR },
    { efMTX, "-mtx",    "nm",       ffOPTWR },
    { efNDX, "-dn",     "dipole",   ffOPTWR },
    { efRND, "-multidir",NULL,      ffOPTRDMULT},
    { efGMX, "-mmcg",   "mmcg",     ffOPTRD },
    { efWPO, "-wpot",   "wpot",     ffOPTRD }
  };
#define NFILE asize(fnm)

  /* Command line options ! */
  gmx_bool bCart        = FALSE;
  gmx_bool bPPPME       = FALSE;
  gmx_bool bPartDec     = FALSE;
  gmx_bool bDDBondCheck = TRUE;
  gmx_bool bDDBondComm  = TRUE;
  gmx_bool bVerbose     = FALSE;
  gmx_bool bCompact     = TRUE;
  gmx_bool bSepPot      = FALSE;
  gmx_bool bRerunVSite  = FALSE;
  gmx_bool bIonize      = FALSE;
  gmx_bool bConfout     = TRUE;
  gmx_bool bReproducible = FALSE;
    
  int  npme=-1;
  int  nmultisim=0;
  int  nstglobalcomm=-1;
  int  repl_ex_nst=0;
  int  repl_ex_seed=-1;
  int  nstepout=100;
  int  nthreads=0; /* set to determine # of threads automatically */
  int  resetstep=-1;
  
  rvec realddxyz={0,0,0};
  const char *ddno_opt[ddnoNR+1] =
    { NULL, "interleave", "pp_pme", "cartesian", NULL };
  const char *dddlb_opt[] =
    { NULL, "auto", "no", "yes", NULL };
  real rdd=0.0,rconstr=0.0,dlb_scale=0.8,pforce=-1;
  char *ddcsx=NULL,*ddcsy=NULL,*ddcsz=NULL;
  real cpt_period=15.0,max_hours=-1;
  gmx_bool bAppendFiles=TRUE;
  gmx_bool bKeepAndNumCPT=FALSE;
  gmx_bool bResetCountersHalfWay=FALSE;
  output_env_t oenv=NULL;
  const char *deviceOptions = "";

  t_pargs pa[] = {

    { "-pd",      FALSE, etBOOL,{&bPartDec},
      "Use particle decompostion" },
    { "-dd",      FALSE, etRVEC,{&realddxyz},
      "Domain decomposition grid, 0 is optimize" },
#ifdef GMX_THREADS
    { "-nt",      FALSE, etINT, {&nthreads},
      "Number of threads to start (0 is guess)" },
#endif
    { "-npme",    FALSE, etINT, {&npme},
      "Number of separate nodes to be used for PME, -1 is guess" },
    { "-ddorder", FALSE, etENUM, {ddno_opt},
      "DD node order" },
    { "-ddcheck", FALSE, etBOOL, {&bDDBondCheck},
      "Check for all bonded interactions with DD" },
    { "-ddbondcomm", FALSE, etBOOL, {&bDDBondComm},
      "HIDDENUse special bonded atom communication when [TT]-rdd[tt] > cut-off" },
    { "-rdd",     FALSE, etREAL, {&rdd},
      "The maximum distance for bonded interactions with DD (nm), 0 is determine from initial coordinates" },
    { "-rcon",    FALSE, etREAL, {&rconstr},
      "Maximum distance for P-LINCS (nm), 0 is estimate" },
    { "-dlb",     FALSE, etENUM, {dddlb_opt},
      "Dynamic load balancing (with DD)" },
    { "-dds",     FALSE, etREAL, {&dlb_scale},
      "Minimum allowed dlb scaling of the DD cell size" },
    { "-ddcsx",   FALSE, etSTR, {&ddcsx},
      "HIDDENThe DD cell sizes in x" },
    { "-ddcsy",   FALSE, etSTR, {&ddcsy},
      "HIDDENThe DD cell sizes in y" },
    { "-ddcsz",   FALSE, etSTR, {&ddcsz},
      "HIDDENThe DD cell sizes in z" },
    { "-gcom",    FALSE, etINT,{&nstglobalcomm},
      "Global communication frequency" },
    { "-v",       FALSE, etBOOL,{&bVerbose},  
      "Be loud and noisy" },
    { "-compact", FALSE, etBOOL,{&bCompact},  
      "Write a compact log file" },
    { "-seppot",  FALSE, etBOOL, {&bSepPot},
      "Write separate V and dVdl terms for each interaction type and node to the log file(s)" },
    { "-pforce",  FALSE, etREAL, {&pforce},
      "Print all forces larger than this (kJ/mol nm)" },
    { "-reprod",  FALSE, etBOOL,{&bReproducible},  
      "Try to avoid optimizations that affect binary reproducibility" },
    { "-cpt",     FALSE, etREAL, {&cpt_period},
      "Checkpoint interval (minutes)" },
    { "-cpnum",   FALSE, etBOOL, {&bKeepAndNumCPT},
      "Keep and number checkpoint files" },
    { "-append",  FALSE, etBOOL, {&bAppendFiles},
      "Append to previous output files when continuing from checkpoint instead of adding the simulation part number to all file names" },
    { "-maxh",   FALSE, etREAL, {&max_hours},
      "Terminate after 0.99 times this time (hours)" },
    { "-multi",   FALSE, etINT,{&nmultisim}, 
      "Do multiple simulations in parallel" },
    { "-replex",  FALSE, etINT, {&repl_ex_nst}, 
      "Attempt replica exchange every # steps" },
    { "-reseed",  FALSE, etINT, {&repl_ex_seed}, 
      "Seed for replica exchange, -1 is generate a seed" },
    { "-rerunvsite", FALSE, etBOOL, {&bRerunVSite},
      "HIDDENRecalculate virtual site coordinates with [TT]-rerun[tt]" },
    { "-ionize",  FALSE, etBOOL,{&bIonize},
      "Do a simulation including the effect of an X-Ray bombardment on your system" },
    { "-confout", FALSE, etBOOL, {&bConfout},
      "HIDDENWrite the last configuration with [TT]-c[tt] and force checkpointing at the last step" },
    { "-stepout", FALSE, etINT, {&nstepout},
      "HIDDENFrequency of writing the remaining runtime" },
    { "-resetstep", FALSE, etINT, {&resetstep},
      "HIDDENReset cycle counters after these many time steps" },
    { "-resethway", FALSE, etBOOL, {&bResetCountersHalfWay},
      "HIDDENReset the cycle counters after half the number of steps or halfway [TT]-maxh[tt]" }
#ifdef GMX_OPENMM
    ,
    { "-device",  FALSE, etSTR, {&deviceOptions},
      "Device option string" }
#endif
  };
  gmx_edsam_t  ed;
  unsigned long Flags, PCA_Flags;
  ivec     ddxyz;
  int      dd_node_order;
  gmx_bool     bAddPart;
  FILE     *fplog,*fptest;
  int      sim_part,sim_part_fn;
  const char *part_suffix=".part";
  char     suffix[STRLEN];
  int      rc;
  char **multidir=NULL;


  cr = init_par(&argc,&argv);

  if (MASTER(cr))
    CopyRight(stderr, argv[0]);

  PCA_Flags = (PCA_KEEP_ARGS | PCA_NOEXIT_ON_ARGS | PCA_CAN_SET_DEFFNM
	       | (MASTER(cr) ? 0 : PCA_QUIET));
  

  /* Comment this in to do fexist calls only on master
   * does not work with rerun or tables at the moment
   * also comment out the version of init_forcerec in md.c 
   * with NULL instead of opt2fn
   */
  /*
     if (!MASTER(cr))
     {
     PCA_Flags |= PCA_NOT_READ_NODE;
     }
     */

  parse_common_args(&argc,argv,PCA_Flags, NFILE,fnm,asize(pa),pa,
                    asize(desc),desc,0,NULL, &oenv);



  /* we set these early because they might be used in init_multisystem().
     Note that there is the potential for npme>nnodes until the number of
     threads is set later on, if there's thread parallelization. That shouldn't
     lead to problems. */ 
  dd_node_order = nenum(ddno_opt);
  cr->npmenodes = npme;

#ifndef GMX_THREADS
  nthreads=1;
#endif

  /* now check the -multi and -multidir option */
  if (opt2bSet("-multidir", NFILE, fnm))
  {
      int i;
      if (nmultisim > 0)
      {
          gmx_fatal(FARGS, "mdrun -multi and -multidir options are mutually exclusive.");
      }
      nmultisim = opt2fns(&multidir, "-multidir", NFILE, fnm);
  }


  if (repl_ex_nst != 0 && nmultisim < 2)
      gmx_fatal(FARGS,"Need at least two replicas for replica exchange (option -multi)");

  if (nmultisim > 1) {
#ifndef GMX_THREADS
    gmx_bool bParFn = (multidir == NULL);
    init_multisystem(cr, nmultisim, multidir, NFILE, fnm, bParFn);
#else
    gmx_fatal(FARGS,"mdrun -multi is not supported with the thread library. Please compile GROMACS with MPI support");
#endif
  }

  bAddPart = !bAppendFiles;

  /* Check if there is ANY checkpoint file available */	
  sim_part    = 1;
  sim_part_fn = sim_part;
  if (opt2bSet("-cpi",NFILE,fnm))
  {
      if (bSepPot && bAppendFiles)
      {
          gmx_fatal(FARGS,"Output file appending is not supported with -seppot");
      }

      bAppendFiles =
                read_checkpoint_simulation_part(opt2fn_master("-cpi", NFILE,
                                                              fnm,cr),
                                                &sim_part_fn,NULL,cr,
                                                bAppendFiles,NFILE,fnm,
                                                part_suffix,&bAddPart);
      if (sim_part_fn==0 && MASTER(cr))
      {
          fprintf(stdout,"No previous checkpoint file present, assuming this is a new run.\n");
      }
      else
      {
          sim_part = sim_part_fn + 1;
      }

      if (MULTISIM(cr) && MASTER(cr))
      {
          check_multi_int(stdout,cr->ms,sim_part,"simulation part");
      }
  } 
  else
  {
      bAppendFiles = FALSE;
  }

  if (!bAppendFiles)
  {
      sim_part_fn = sim_part;
  }

  if (bAddPart)
  {
      /* Rename all output files (except checkpoint files) */
      /* create new part name first (zero-filled) */
      sprintf(suffix,"%s%04d",part_suffix,sim_part_fn);

      add_suffix_to_output_names(fnm,NFILE,suffix);
      if (MASTER(cr))
      {
          fprintf(stdout,"Checkpoint file is from part %d, new output files will be suffixed '%s'.\n",sim_part-1,suffix);
      }
  }

  Flags = opt2bSet("-rerun",NFILE,fnm) ? MD_RERUN : 0;
  Flags = Flags | (opt2bSet("-mmcg",NFILE,fnm) ? MD_MMCG : 0);
  Flags = Flags | (bSepPot       ? MD_SEPPOT       : 0);
  Flags = Flags | (bIonize       ? MD_IONIZE       : 0);
  Flags = Flags | (bPartDec      ? MD_PARTDEC      : 0);
  Flags = Flags | (bDDBondCheck  ? MD_DDBONDCHECK  : 0);
  Flags = Flags | (bDDBondComm   ? MD_DDBONDCOMM   : 0);
  Flags = Flags | (bConfout      ? MD_CONFOUT      : 0);
  Flags = Flags | (bRerunVSite   ? MD_RERUN_VSITE  : 0);
  Flags = Flags | (bReproducible ? MD_REPRODUCIBLE : 0);
  Flags = Flags | (bAppendFiles  ? MD_APPENDFILES  : 0); 
  Flags = Flags | (bKeepAndNumCPT ? MD_KEEPANDNUMCPT : 0); 
  Flags = Flags | (sim_part>1    ? MD_STARTFROMCPT : 0); 
  Flags = Flags | (bResetCountersHalfWay ? MD_RESETCOUNTERSHALFWAY : 0);
  Flags = Flags | (opt2bSet("-wpot",NFILE,fnm) ? MD_WALLPOT : 0);

  /* We postpone opening the log file if we are appending, so we can 
     first truncate the old log file and append to the correct position 
     there instead.  */
  if ((MASTER(cr) || bSepPot) && !bAppendFiles) 
  {
      gmx_log_open(ftp2fn(efLOG,NFILE,fnm),cr,!bSepPot,Flags,&fplog);
      CopyRight(fplog,argv[0]);
      please_cite(fplog,"Hess2008b");
      please_cite(fplog,"Spoel2005a");
      please_cite(fplog,"Lindahl2001a");
      please_cite(fplog,"Berendsen95a");
  }
  else if (!MASTER(cr) && bSepPot)
  {
      gmx_log_open(ftp2fn(efLOG,NFILE,fnm),cr,!bSepPot,Flags,&fplog);
  }
  else
  {
      fplog = NULL;
  }

  ddxyz[XX] = (int)(realddxyz[XX] + 0.5);
  ddxyz[YY] = (int)(realddxyz[YY] + 0.5);
  ddxyz[ZZ] = (int)(realddxyz[ZZ] + 0.5);
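  /* Adding 0.5 before truncating rounds each non-negative component of the
   * real-valued -dd input to the nearest integer, giving the integer
   * domain decomposition grid. */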

  rc = mdrunner(nthreads, fplog,cr,NFILE,fnm,oenv,bVerbose,bCompact,
                nstglobalcomm, ddxyz,dd_node_order,rdd,rconstr,
                dddlb_opt[0],dlb_scale,ddcsx,ddcsy,ddcsz,
                nstepout,resetstep,nmultisim,repl_ex_nst,repl_ex_seed,
                pforce, cpt_period,max_hours,deviceOptions,Flags);

  if (gmx_parallel_env_initialized())
      gmx_finalize();

  if (MULTIMASTER(cr)) {
      thanx(stderr);
  }

  /* Log file has to be closed in mdrunner if we are appending to it 
     (fplog not set here) */
  if (MASTER(cr) && !bAppendFiles) 
  {
      gmx_log_close(fplog);
  }

  return rc;
}
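
The Flags word assembled near the end of the example above packs each boolean command-line option into one bit of an unsigned long, which mdrunner() later queries with bitwise AND. The following minimal, self-contained sketch shows the same idiom; the OPT_* constants are hypothetical stand-ins for the MD_* macros, whose actual values are not part of this listing.

#include <stdio.h>

/* Hypothetical stand-ins for MD_RERUN, MD_APPENDFILES, MD_CONFOUT, ... */
#define OPT_RERUN       (1UL << 0)
#define OPT_APPENDFILES (1UL << 1)
#define OPT_CONFOUT     (1UL << 2)

int main(void)
{
  int           bRerun = 1, bAppend = 0, bConfout = 1;
  unsigned long flags  = 0;

  /* Compose: OR in each bit whose boolean is set, as gmx_mdrun does. */
  flags |= (bRerun   ? OPT_RERUN       : 0);
  flags |= (bAppend  ? OPT_APPENDFILES : 0);
  flags |= (bConfout ? OPT_CONFOUT     : 0);

  /* Query: AND with a mask to test one option later. */
  if (flags & OPT_RERUN)
  {
    printf("rerun requested\n");
  }
  return 0;
}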
Example No. 3
int main(int argc,char *argv[])
{
  int       mmm[] = { 8, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40,
		      45, 48, 50, 54, 60, 64, 72, 75, 80, 81, 90, 100 };
  int       nnn[] = { 24, 32, 48, 60, 72, 84, 96 };
#define NNN asize(nnn)
  FILE      *fp,*fplog;
  int       *niter;
  int       i,j,n,nit,ntot,n3,rsize;
  double    t,nflop,start;
  double    *rt,*ct;
  t_fftgrid *g;
  t_commrec *cr;
  static gmx_bool bReproducible = FALSE;
  static int  nnode    = 1;
  static int  nitfac  = 1;
  t_pargs pa[] = {
    { "-reproducible",   FALSE, etBOOL, {&bReproducible}, 
      "Request binary reproducible results" },
    { "-np",    FALSE, etINT, {&nnode},
      "Number of NODEs" },
    { "-itfac", FALSE, etINT, {&nitfac},
      "Multiply number of iterations by this" }
  };
  static t_filenm fnm[] = {
    { efLOG, "-g", "fft",      ffWRITE },
    { efXVG, "-o", "fft",      ffWRITE }
  };
#define NFILE asize(fnm)
  
  cr = init_par(&argc,&argv);
  if (MASTER(cr))
    CopyRight(stdout,argv[0]);
  parse_common_args(&argc,argv,
		    PCA_CAN_SET_DEFFNM | (MASTER(cr) ? 0 : PCA_QUIET),
		    NFILE,fnm,asize(pa),pa,0,NULL,0,NULL);
  gmx_log_open(ftp2fn(efLOG,NFILE,fnm),cr,1,0,&fplog);

  snew(niter,NNN);
  snew(ct,NNN);
  snew(rt,NNN);
  rsize = sizeof(real);
  for(i=0; (i<NNN); i++) {
    n  = nnn[i];
    if (n < 16)
      niter[i] = 50;
    else if (n < 26)
      niter[i] = 20;
    else if (n < 51)
      niter[i] = 10;
    else
      niter[i] = 5;
    niter[i] *= nitfac;
    nit = niter[i];
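    /* Larger grids get fewer iterations so that each benchmark point takes
     * comparable wall time; -itfac scales all iteration counts up for more
     * stable timings. */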
    
    if (MASTER(cr))
      fprintf(stderr,"\r3D FFT (%s precision) %3d^3, niter %3d     ",
	      (rsize == 8) ? "Double" : "Single",n,nit);
    
    g  = mk_fftgrid(n,n,n,NULL,NULL,cr,bReproducible);

    if (PAR(cr))
      start = time(NULL);
    else
      start_time();
    for(j=0; (j<nit); j++) {
      gmxfft3D(g,GMX_FFT_REAL_TO_COMPLEX,cr);
      gmxfft3D(g,GMX_FFT_COMPLEX_TO_REAL,cr);
    }
    if (PAR(cr)) 
      rt[i] = time(NULL)-start;
    else {
      update_time();
      rt[i] = node_time();
    }
    done_fftgrid(g);
    sfree(g);
  }
  if (MASTER(cr)) {
    fprintf(stderr,"\n");
    fp=xvgropen(ftp2fn(efXVG,NFILE,fnm),
		"FFT timings","n^3","t (s)");
    for(i=0; (i<NNN); i++) {
      n3 = 2*niter[i]*nnn[i]*nnn[i]*nnn[i];
      fprintf(fp,"%10d  %10g\n",nnn[i],rt[i]/(2*niter[i]));
    }
    gmx_fio_fclose(fp);
  }
  return 0;
}
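
In the benchmark above each iteration performs one real-to-complex and one complex-to-real 3D FFT, so the elapsed time is divided by 2*niter[i] to report seconds per single transform. Below is a minimal sketch of that normalization; do_transform() is a hypothetical placeholder for gmxfft3D().

#include <stdio.h>
#include <time.h>

static void do_transform(void)
{
  /* placeholder for one 3D FFT */
}

int main(void)
{
  int     j, nit = 20;
  clock_t start;
  double  t_per_fft;

  start = clock();
  for (j = 0; (j < nit); j++) {
    do_transform();   /* forward, real-to-complex */
    do_transform();   /* backward, complex-to-real */
  }
  /* two transforms per iteration, hence the factor of 2 */
  t_per_fft = (double)(clock() - start)/CLOCKS_PER_SEC/(2.0*nit);
  printf("time per 3D FFT: %g s\n", t_per_fft);
  return 0;
}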
Example No. 4
int gmx_disre(int argc,char *argv[])
{
  const char *desc[] = {
    "g_disre computes violations of distance restraints.",
    "If necessary all protons can be added to a protein molecule ",
    "using the protonate program.[PAR]",
    "The program always",
    "computes the instantaneous violations rather than time-averaged,",
    "because this analysis is done from a trajectory file afterwards",
    "it does not make sense to use time averaging. However,",
    "the time averaged values per restraint are given in the log file.[PAR]",
    "An index file may be used to select specific restraints for",
    "printing.[PAR]",
    "When the optional[TT]-q[tt] flag is given a pdb file coloured by the",
    "amount of average violations.[PAR]",
    "When the [TT]-c[tt] option is given, an index file will be read",
    "containing the frames in your trajectory corresponding to the clusters",
    "(defined in another manner) that you want to analyze. For these clusters",
    "the program will compute average violations using the third power",
    "averaging algorithm and print them in the log file."
  };
  static int  ntop      = 0;
  static int  nlevels   = 20;
  static real max_dr    = 0;
  static gmx_bool bThird    = TRUE;
  t_pargs pa[] = {
    { "-ntop", FALSE, etINT,  {&ntop},
      "Number of large violations that are stored in the log file every step" },
    { "-maxdr", FALSE, etREAL, {&max_dr},
      "Maximum distance violation in matrix output. If less than or equal to 0 the maximum will be determined by the data." },
    { "-nlevels", FALSE, etINT, {&nlevels},
      "Number of levels in the matrix output" },
    { "-third", FALSE, etBOOL, {&bThird},
      "Use inverse third power averaging or linear for matrix output" }
  };
  
  FILE        *out=NULL,*aver=NULL,*numv=NULL,*maxxv=NULL,*xvg=NULL;
  t_tpxheader header;
  t_inputrec  ir;
  gmx_mtop_t  mtop;
  rvec        *xtop;
  gmx_localtop_t *top;
  t_atoms     *atoms=NULL;
  t_forcerec  *fr;
  t_fcdata    fcd;
  t_nrnb      nrnb;
  t_commrec   *cr;
  t_graph     *g;
  int         ntopatoms,natoms,i,j,kkk;
  t_trxstatus *status;
  real        t;
  rvec        *x,*f,*xav=NULL;
  matrix      box;
  gmx_bool        bPDB;
  int         isize;
  atom_id     *index=NULL,*ind_fit=NULL;
  char        *grpname;
  t_cluster_ndx *clust=NULL;
  t_dr_result dr,*dr_clust=NULL;
  char        **leg;
  real        *vvindex=NULL,*w_rls=NULL;
  t_mdatoms   *mdatoms;
  t_pbc       pbc,*pbc_null;
  int         my_clust;
  FILE        *fplog;
  output_env_t oenv;
  gmx_rmpbc_t  gpbc=NULL;
  
  t_filenm fnm[] = {
    { efTPX, NULL, NULL, ffREAD },
    { efTRX, "-f", NULL, ffREAD },
    { efXVG, "-ds", "drsum",  ffWRITE },
    { efXVG, "-da", "draver", ffWRITE },
    { efXVG, "-dn", "drnum",  ffWRITE },
    { efXVG, "-dm", "drmax",  ffWRITE },
    { efXVG, "-dr", "restr",  ffWRITE },
    { efLOG, "-l",  "disres", ffWRITE },
    { efNDX, NULL,  "viol",   ffOPTRD },
    { efPDB, "-q",  "viol",   ffOPTWR },
    { efNDX, "-c",  "clust",  ffOPTRD },
    { efXPM, "-x",  "matrix", ffOPTWR }
  };
#define NFILE asize(fnm)

  cr  = init_par(&argc,&argv);
  CopyRight(stderr,argv[0]);
  parse_common_args(&argc,argv,PCA_CAN_TIME | PCA_CAN_VIEW | PCA_BE_NICE,
		    NFILE,fnm,asize(pa),pa,asize(desc),desc,0,NULL,&oenv);

  gmx_log_open(ftp2fn(efLOG,NFILE,fnm),cr,FALSE,0,&fplog);
  
  if (ntop)
    init5(ntop);
  
  read_tpxheader(ftp2fn(efTPX,NFILE,fnm),&header,FALSE,NULL,NULL);
  snew(xtop,header.natoms);
  read_tpx(ftp2fn(efTPX,NFILE,fnm),&ir,box,&ntopatoms,xtop,NULL,NULL,&mtop);
  bPDB = opt2bSet("-q",NFILE,fnm);
  if (bPDB) {
    snew(xav,ntopatoms);
    snew(ind_fit,ntopatoms);
    snew(w_rls,ntopatoms);
    for(kkk=0; (kkk<ntopatoms); kkk++) {
      w_rls[kkk] = 1;
      ind_fit[kkk] = kkk;
    }
    
    snew(atoms,1);
    *atoms = gmx_mtop_global_atoms(&mtop);
    
    if (atoms->pdbinfo == NULL) {
      snew(atoms->pdbinfo,atoms->nr);
    }
  } 

  top = gmx_mtop_generate_local_top(&mtop,&ir);

  g = NULL;
  pbc_null = NULL;
  if (ir.ePBC != epbcNONE) {
    if (ir.bPeriodicMols)
      pbc_null = &pbc;
    else
      g = mk_graph(fplog,&top->idef,0,mtop.natoms,FALSE,FALSE);
  }
  
  if (ftp2bSet(efNDX,NFILE,fnm)) {
    rd_index(ftp2fn(efNDX,NFILE,fnm),1,&isize,&index,&grpname);
    xvg=xvgropen(opt2fn("-dr",NFILE,fnm),"Individual Restraints","Time (ps)",
		 "nm",oenv);
    snew(vvindex,isize);
    snew(leg,isize);
    for(i=0; (i<isize); i++) {
      index[i]++;
      snew(leg[i],12);
      sprintf(leg[i],"index %d",index[i]);
    }
    xvgr_legend(xvg,isize,(const char**)leg,oenv);
  }
  else 
    isize=0;

  ir.dr_tau=0.0;
  init_disres(fplog,&mtop,&ir,NULL,FALSE,&fcd,NULL);

  natoms=read_first_x(oenv,&status,ftp2fn(efTRX,NFILE,fnm),&t,&x,box);
  snew(f,5*natoms);
  
  init_dr_res(&dr,fcd.disres.nres);
  if (opt2bSet("-c",NFILE,fnm)) {
    clust = cluster_index(fplog,opt2fn("-c",NFILE,fnm));
    snew(dr_clust,clust->clust->nr+1);
    for(i=0; (i<=clust->clust->nr); i++)
      init_dr_res(&dr_clust[i],fcd.disres.nres);
  }
  else {	
    out =xvgropen(opt2fn("-ds",NFILE,fnm),
		  "Sum of Violations","Time (ps)","nm",oenv);
    aver=xvgropen(opt2fn("-da",NFILE,fnm),
		  "Average Violation","Time (ps)","nm",oenv);
    numv=xvgropen(opt2fn("-dn",NFILE,fnm),
		  "# Violations","Time (ps)","#",oenv);
    maxxv=xvgropen(opt2fn("-dm",NFILE,fnm),
		   "Largest Violation","Time (ps)","nm",oenv);
  }

  mdatoms = init_mdatoms(fplog,&mtop,ir.efep!=efepNO);
  atoms2md(&mtop,&ir,0,NULL,0,mtop.natoms,mdatoms);
  update_mdatoms(mdatoms,ir.init_lambda);
  fr      = mk_forcerec();
  fprintf(fplog,"Made forcerec\n");
  init_forcerec(fplog,oenv,fr,NULL,&ir,&mtop,cr,box,FALSE,NULL,NULL,NULL,
                FALSE,-1);
  init_nrnb(&nrnb);
  if (ir.ePBC != epbcNONE)
    gpbc = gmx_rmpbc_init(&top->idef,ir.ePBC,natoms,box);
  
  j=0;
  do {
    if (ir.ePBC != epbcNONE) {
      if (ir.bPeriodicMols)
	set_pbc(&pbc,ir.ePBC,box);
      else
	gmx_rmpbc(gpbc,natoms,box,x);
    }
    
    if (clust) {
      if (j > clust->maxframe)
	gmx_fatal(FARGS,"There are more frames in the trajectory than in the cluster index file. t = %8f\n",t);
      my_clust = clust->inv_clust[j];
      range_check(my_clust,0,clust->clust->nr);
      check_viol(fplog,cr,&(top->idef.il[F_DISRES]),
		 top->idef.iparams,top->idef.functype,
		 x,f,fr,pbc_null,g,dr_clust,my_clust,isize,index,vvindex,&fcd);
    }
    else
      check_viol(fplog,cr,&(top->idef.il[F_DISRES]),
		 top->idef.iparams,top->idef.functype,
		 x,f,fr,pbc_null,g,&dr,0,isize,index,vvindex,&fcd);
    if (bPDB) {
      reset_x(atoms->nr,ind_fit,atoms->nr,NULL,x,w_rls);
      do_fit(atoms->nr,w_rls,x,x);
      if (j == 0) {
	/* Store the first frame of the trajectory as 'characteristic'
	 * for colouring with violations.
	 */
	for(kkk=0; (kkk<atoms->nr); kkk++)
	  copy_rvec(x[kkk],xav[kkk]);
      }
    }
    if (!clust) {
      if (isize > 0) {
	fprintf(xvg,"%10g",t);
	for(i=0; (i<isize); i++)
	  fprintf(xvg,"  %10g",vvindex[i]);
	fprintf(xvg,"\n");
      }    
      fprintf(out,  "%10g  %10g\n",t,dr.sumv);
      fprintf(aver, "%10g  %10g\n",t,dr.averv);
      fprintf(maxxv,"%10g  %10g\n",t,dr.maxv);
      fprintf(numv, "%10g  %10d\n",t,dr.nv);
    }
    j++;
  } while (read_next_x(oenv,status,&t,natoms,x,box));
  close_trj(status);
  if (ir.ePBC != epbcNONE)
    gmx_rmpbc_done(gpbc);

  if (clust) {
    dump_clust_stats(fplog,fcd.disres.nres,&(top->idef.il[F_DISRES]),
		     top->idef.iparams,clust->clust,dr_clust,
		     clust->grpname,isize,index);
  }
  else {
    dump_stats(fplog,j,fcd.disres.nres,&(top->idef.il[F_DISRES]),
	       top->idef.iparams,&dr,isize,index,
	       bPDB ? atoms : NULL);
    if (bPDB) {
      write_sto_conf(opt2fn("-q",NFILE,fnm),
		     "Coloured by average violation in Angstrom",
		     atoms,xav,NULL,ir.ePBC,box);
    }
    dump_disre_matrix(opt2fn_null("-x",NFILE,fnm),&dr,fcd.disres.nres,
		      j,&top->idef,&mtop,max_dr,nlevels,bThird);
    ffclose(out);
    ffclose(aver);
    ffclose(numv);
    ffclose(maxxv);
    if (isize > 0) {
      ffclose(xvg);
      do_view(oenv,opt2fn("-dr",NFILE,fnm),"-nxy");
    }
    do_view(oenv,opt2fn("-dn",NFILE,fnm),"-nxy");
    do_view(oenv,opt2fn("-da",NFILE,fnm),"-nxy");
    do_view(oenv,opt2fn("-ds",NFILE,fnm),"-nxy");
    do_view(oenv,opt2fn("-dm",NFILE,fnm),"-nxy");
  }
  thanx(stderr);

  if (gmx_parallel_env_initialized())
    gmx_finalize();

  gmx_log_close(fplog);
  
  return 0;
}
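
The [TT]-third[tt] option in the example above selects inverse third power averaging, which for NMR distance restraints conventionally means r_avg = <r^-3>^(-1/3); it weights short distances more heavily than a linear mean. A minimal sketch under that assumption follows (the distances are made up for illustration; compile with -lm).

#include <math.h>
#include <stdio.h>

static double third_power_average(const double *r, int n)
{
  double s = 0.0;
  int    i;

  for (i = 0; (i < n); i++) {
    s += 1.0/(r[i]*r[i]*r[i]);   /* accumulate r^-3 */
  }
  return pow(s/n, -1.0/3.0);     /* <r^-3>^(-1/3) */
}

int main(void)
{
  double r[3] = { 0.30, 0.35, 0.50 };   /* distances in nm, illustrative */

  printf("linear average     : %g nm\n", (r[0] + r[1] + r[2])/3.0);
  printf("third-power average: %g nm\n", third_power_average(r, 3));
  return 0;
}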
Example No. 5
//! Implements C-style main function for mdrun
int gmx_mdrun(int argc, char *argv[])
{
    const char   *desc[] = {
        "[THISMODULE] is the main computational chemistry engine",
        "within GROMACS. Obviously, it performs Molecular Dynamics simulations,",
        "but it can also perform Stochastic Dynamics, Energy Minimization,",
        "test particle insertion or (re)calculation of energies.",
        "Normal mode analysis is another option. In this case [TT]mdrun[tt]",
        "builds a Hessian matrix from single conformation.",
        "For usual Normal Modes-like calculations, make sure that",
        "the structure provided is properly energy-minimized.",
        "The generated matrix can be diagonalized by [gmx-nmeig].[PAR]",
        "The [TT]mdrun[tt] program reads the run input file ([TT]-s[tt])",
        "and distributes the topology over ranks if needed.",
        "[TT]mdrun[tt] produces at least four output files.",
        "A single log file ([TT]-g[tt]) is written.",
        "The trajectory file ([TT]-o[tt]), contains coordinates, velocities and",
        "optionally forces.",
        "The structure file ([TT]-c[tt]) contains the coordinates and",
        "velocities of the last step.",
        "The energy file ([TT]-e[tt]) contains energies, the temperature,",
        "pressure, etc, a lot of these things are also printed in the log file.",
        "Optionally coordinates can be written to a compressed trajectory file",
        "([TT]-x[tt]).[PAR]",
        "The option [TT]-dhdl[tt] is only used when free energy calculation is",
        "turned on.[PAR]",
        "Running mdrun efficiently in parallel is a complex topic topic,",
        "many aspects of which are covered in the online User Guide. You",
        "should look there for practical advice on using many of the options",
        "available in mdrun.[PAR]",
        "ED (essential dynamics) sampling and/or additional flooding potentials",
        "are switched on by using the [TT]-ei[tt] flag followed by an [REF].edi[ref]",
        "file. The [REF].edi[ref] file can be produced with the [TT]make_edi[tt] tool",
        "or by using options in the essdyn menu of the WHAT IF program.",
        "[TT]mdrun[tt] produces a [REF].xvg[ref] output file that",
        "contains projections of positions, velocities and forces onto selected",
        "eigenvectors.[PAR]",
        "When user-defined potential functions have been selected in the",
        "[REF].mdp[ref] file the [TT]-table[tt] option is used to pass [TT]mdrun[tt]",
        "a formatted table with potential functions. The file is read from",
        "either the current directory or from the [TT]GMXLIB[tt] directory.",
        "A number of pre-formatted tables are presented in the [TT]GMXLIB[tt] dir,",
        "for 6-8, 6-9, 6-10, 6-11, 6-12 Lennard-Jones potentials with",
        "normal Coulomb.",
        "When pair interactions are present, a separate table for pair interaction",
        "functions is read using the [TT]-tablep[tt] option.[PAR]",
        "When tabulated bonded functions are present in the topology,",
        "interaction functions are read using the [TT]-tableb[tt] option.",
        "For each different tabulated interaction type used, a table file name must",
        "be given. For the topology to work, a file name given here must match a",
        "character sequence before the file extension. That sequence is: an underscore,",
        "then a 'b' for bonds, an 'a' for angles or a 'd' for dihedrals,",
        "and finally the matching table number index used in the topology.[PAR]",
        "The options [TT]-px[tt] and [TT]-pf[tt] are used for writing pull COM",
        "coordinates and forces when pulling is selected",
        "in the [REF].mdp[ref] file.[PAR]",
        "Finally some experimental algorithms can be tested when the",
        "appropriate options have been given. Currently under",
        "investigation are: polarizability.",
        "[PAR]",
        "The option [TT]-membed[tt] does what used to be g_membed, i.e. embed",
        "a protein into a membrane. This module requires a number of settings",
        "that are provided in a data file that is the argument of this option.",
        "For more details in membrane embedding, see the documentation in the",
        "user guide. The options [TT]-mn[tt] and [TT]-mp[tt] are used to provide",
        "the index and topology files used for the embedding.",
        "[PAR]",
        "The option [TT]-pforce[tt] is useful when you suspect a simulation",
        "crashes due to too large forces. With this option coordinates and",
        "forces of atoms with a force larger than a certain value will",
        "be printed to stderr. It will also terminate the run when non-finite",
        "forces are present.",
        "[PAR]",
        "Checkpoints containing the complete state of the system are written",
        "at regular intervals (option [TT]-cpt[tt]) to the file [TT]-cpo[tt],",
        "unless option [TT]-cpt[tt] is set to -1.",
        "The previous checkpoint is backed up to [TT]state_prev.cpt[tt] to",
        "make sure that a recent state of the system is always available,",
        "even when the simulation is terminated while writing a checkpoint.",
        "With [TT]-cpnum[tt] all checkpoint files are kept and appended",
        "with the step number.",
        "A simulation can be continued by reading the full state from file",
        "with option [TT]-cpi[tt]. This option is intelligent in the way that",
        "if no checkpoint file is found, GROMACS just assumes a normal run and",
        "starts from the first step of the [REF].tpr[ref] file. By default the output",
        "will be appending to the existing output files. The checkpoint file",
        "contains checksums of all output files, such that you will never",
        "loose data when some output files are modified, corrupt or removed.",
        "There are three scenarios with [TT]-cpi[tt]:[PAR]",
        "[TT]*[tt] no files with matching names are present: new output files are written[PAR]",
        "[TT]*[tt] all files are present with names and checksums matching those stored",
        "in the checkpoint file: files are appended[PAR]",
        "[TT]*[tt] otherwise no files are modified and a fatal error is generated[PAR]",
        "With [TT]-noappend[tt] new output files are opened and the simulation",
        "part number is added to all output file names.",
        "Note that in all cases the checkpoint file itself is not renamed",
        "and will be overwritten, unless its name does not match",
        "the [TT]-cpo[tt] option.",
        "[PAR]",
        "With checkpointing the output is appended to previously written",
        "output files, unless [TT]-noappend[tt] is used or none of the previous",
        "output files are present (except for the checkpoint file).",
        "The integrity of the files to be appended is verified using checksums",
        "which are stored in the checkpoint file. This ensures that output can",
        "not be mixed up or corrupted due to file appending. When only some",
        "of the previous output files are present, a fatal error is generated",
        "and no old output files are modified and no new output files are opened.",
        "The result with appending will be the same as from a single run.",
        "The contents will be binary identical, unless you use a different number",
        "of ranks or dynamic load balancing or the FFT library uses optimizations",
        "through timing.",
        "[PAR]",
        "With option [TT]-maxh[tt] a simulation is terminated and a checkpoint",
        "file is written at the first neighbor search step where the run time",
        "exceeds [TT]-maxh[tt]\\*0.99 hours. This option is particularly useful in",
        "combination with setting [TT]nsteps[tt] to -1 either in the mdp or using the",
        "similarly named command line option. This results in an infinite run,",
        "terminated only when the time limit set by [TT]-maxh[tt] is reached (if any)"
        "or upon receiving a signal."
        "[PAR]",
        "When [TT]mdrun[tt] receives a TERM signal, it will stop as soon as",
        "checkpoint file can be written, i.e. after the next global communication step.",
        "When [TT]mdrun[tt] receives an INT signal (e.g. when ctrl+C is",
        "pressed), it will stop at the next neighbor search step or at the",
        "second global communication step, whichever happens later.",
        "In both cases all the usual output will be written to file.",
        "When running with MPI, a signal to one of the [TT]mdrun[tt] ranks",
        "is sufficient, this signal should not be sent to mpirun or",
        "the [TT]mdrun[tt] process that is the parent of the others.",
        "[PAR]",
        "Interactive molecular dynamics (IMD) can be activated by using at least one",
        "of the three IMD switches: The [TT]-imdterm[tt] switch allows one to terminate",
        "the simulation from the molecular viewer (e.g. VMD). With [TT]-imdwait[tt],",
        "[TT]mdrun[tt] pauses whenever no IMD client is connected. Pulling from the",
        "IMD remote can be turned on by [TT]-imdpull[tt].",
        "The port [TT]mdrun[tt] listens to can be altered by [TT]-imdport[tt].The",
        "file pointed to by [TT]-if[tt] contains atom indices and forces if IMD",
        "pulling is used."
        "[PAR]",
        "When [TT]mdrun[tt] is started with MPI, it does not run niced by default."
    };
    t_commrec    *cr;
    t_filenm      fnm[] = {
        { efTPR, NULL,      NULL,       ffREAD },
        { efTRN, "-o",      NULL,       ffWRITE },
        { efCOMPRESSED, "-x", NULL,     ffOPTWR },
        { efCPT, "-cpi",    NULL,       ffOPTRD | ffALLOW_MISSING },
        { efCPT, "-cpo",    NULL,       ffOPTWR },
        { efSTO, "-c",      "confout",  ffWRITE },
        { efEDR, "-e",      "ener",     ffWRITE },
        { efLOG, "-g",      "md",       ffWRITE },
        { efXVG, "-dhdl",   "dhdl",     ffOPTWR },
        { efXVG, "-field",  "field",    ffOPTWR },
        { efXVG, "-table",  "table",    ffOPTRD },
        { efXVG, "-tablep", "tablep",   ffOPTRD },
        { efXVG, "-tableb", "table",    ffOPTRDMULT },
        { efTRX, "-rerun",  "rerun",    ffOPTRD },
        { efXVG, "-tpi",    "tpi",      ffOPTWR },
        { efXVG, "-tpid",   "tpidist",  ffOPTWR },
        { efEDI, "-ei",     "sam",      ffOPTRD },
        { efXVG, "-eo",     "edsam",    ffOPTWR },
        { efXVG, "-devout", "deviatie", ffOPTWR },
        { efXVG, "-runav",  "runaver",  ffOPTWR },
        { efXVG, "-px",     "pullx",    ffOPTWR },
        { efXVG, "-pf",     "pullf",    ffOPTWR },
        { efXVG, "-ro",     "rotation", ffOPTWR },
        { efLOG, "-ra",     "rotangles", ffOPTWR },
        { efLOG, "-rs",     "rotslabs", ffOPTWR },
        { efLOG, "-rt",     "rottorque", ffOPTWR },
        { efMTX, "-mtx",    "nm",       ffOPTWR },
        { efRND, "-multidir", NULL,      ffOPTRDMULT},
        { efDAT, "-membed", "membed",   ffOPTRD },
        { efTOP, "-mp",     "membed",   ffOPTRD },
        { efNDX, "-mn",     "membed",   ffOPTRD },
        { efXVG, "-if",     "imdforces", ffOPTWR },
        { efXVG, "-swap",   "swapions", ffOPTWR }
    };
    const int     NFILE = asize(fnm);

    /* Command line options ! */
    gmx_bool          bDDBondCheck  = TRUE;
    gmx_bool          bDDBondComm   = TRUE;
    gmx_bool          bTunePME      = TRUE;
    gmx_bool          bVerbose      = FALSE;
    gmx_bool          bRerunVSite   = FALSE;
    gmx_bool          bConfout      = TRUE;
    gmx_bool          bReproducible = FALSE;
    gmx_bool          bIMDwait      = FALSE;
    gmx_bool          bIMDterm      = FALSE;
    gmx_bool          bIMDpull      = FALSE;

    int               npme          = -1;
    int               nstlist       = 0;
    int               nmultisim     = 0;
    int               nstglobalcomm = -1;
    int               repl_ex_nst   = 0;
    int               repl_ex_seed  = -1;
    int               repl_ex_nex   = 0;
    int               nstepout      = 100;
    int               resetstep     = -1;
    gmx_int64_t       nsteps        = -2;   /* the value -2 means that the mdp option will be used */
    int               imdport       = 8888; /* can be almost anything, 8888 is easy to remember */

    rvec              realddxyz                   = {0, 0, 0};
    const char       *ddrank_opt[ddrankorderNR+1] =
    { NULL, "interleave", "pp_pme", "cartesian", NULL };
    const char       *dddlb_opt[] =
    { NULL, "auto", "no", "yes", NULL };
    const char       *thread_aff_opt[threadaffNR+1] =
    { NULL, "auto", "on", "off", NULL };
    const char       *nbpu_opt[] =
    { NULL, "auto", "cpu", "gpu", "gpu_cpu", NULL };
    real              rdd                   = 0.0, rconstr = 0.0, dlb_scale = 0.8, pforce = -1;
    char             *ddcsx                 = NULL, *ddcsy = NULL, *ddcsz = NULL;
    real              cpt_period            = 15.0, max_hours = -1;
    gmx_bool          bTryToAppendFiles     = TRUE;
    gmx_bool          bKeepAndNumCPT        = FALSE;
    gmx_bool          bResetCountersHalfWay = FALSE;
    gmx_output_env_t *oenv                  = NULL;

    /* Non transparent initialization of a complex gmx_hw_opt_t struct.
     * But unfortunately we are not allowed to call a function here,
     * since declarations follow below.
     */
    gmx_hw_opt_t    hw_opt = {
        0, 0, 0, 0, threadaffSEL, 0, 0,
        { NULL, FALSE, 0, NULL }
    };

    t_pargs         pa[] = {

        { "-dd",      FALSE, etRVEC, {&realddxyz},
          "Domain decomposition grid, 0 is optimize" },
        { "-ddorder", FALSE, etENUM, {ddrank_opt},
          "DD rank order" },
        { "-npme",    FALSE, etINT, {&npme},
          "Number of separate ranks to be used for PME, -1 is guess" },
        { "-nt",      FALSE, etINT, {&hw_opt.nthreads_tot},
          "Total number of threads to start (0 is guess)" },
        { "-ntmpi",   FALSE, etINT, {&hw_opt.nthreads_tmpi},
          "Number of thread-MPI threads to start (0 is guess)" },
        { "-ntomp",   FALSE, etINT, {&hw_opt.nthreads_omp},
          "Number of OpenMP threads per MPI rank to start (0 is guess)" },
        { "-ntomp_pme", FALSE, etINT, {&hw_opt.nthreads_omp_pme},
          "Number of OpenMP threads per MPI rank to start (0 is -ntomp)" },
        { "-pin",     FALSE, etENUM, {thread_aff_opt},
          "Whether mdrun should try to set thread affinities" },
        { "-pinoffset", FALSE, etINT, {&hw_opt.core_pinning_offset},
          "The lowest logical core number to which mdrun should pin the first thread" },
        { "-pinstride", FALSE, etINT, {&hw_opt.core_pinning_stride},
          "Pinning distance in logical cores for threads, use 0 to minimize the number of threads per physical core" },
        { "-gpu_id",  FALSE, etSTR, {&hw_opt.gpu_opt.gpu_id},
          "List of GPU device id-s to use, specifies the per-node PP rank to GPU mapping" },
        { "-ddcheck", FALSE, etBOOL, {&bDDBondCheck},
          "Check for all bonded interactions with DD" },
        { "-ddbondcomm", FALSE, etBOOL, {&bDDBondComm},
          "HIDDENUse special bonded atom communication when [TT]-rdd[tt] > cut-off" },
        { "-rdd",     FALSE, etREAL, {&rdd},
          "The maximum distance for bonded interactions with DD (nm), 0 is determine from initial coordinates" },
        { "-rcon",    FALSE, etREAL, {&rconstr},
          "Maximum distance for P-LINCS (nm), 0 is estimate" },
        { "-dlb",     FALSE, etENUM, {dddlb_opt},
          "Dynamic load balancing (with DD)" },
        { "-dds",     FALSE, etREAL, {&dlb_scale},
          "Fraction in (0,1) by whose reciprocal the initial DD cell size will be increased in order to "
          "provide a margin in which dynamic load balancing can act while preserving the minimum cell size." },
        { "-ddcsx",   FALSE, etSTR, {&ddcsx},
          "HIDDENA string containing a vector of the relative sizes in the x "
          "direction of the corresponding DD cells. Only effective with static "
          "load balancing." },
        { "-ddcsy",   FALSE, etSTR, {&ddcsy},
          "HIDDENA string containing a vector of the relative sizes in the y "
          "direction of the corresponding DD cells. Only effective with static "
          "load balancing." },
        { "-ddcsz",   FALSE, etSTR, {&ddcsz},
          "HIDDENA string containing a vector of the relative sizes in the z "
          "direction of the corresponding DD cells. Only effective with static "
          "load balancing." },
        { "-gcom",    FALSE, etINT, {&nstglobalcomm},
          "Global communication frequency" },
        { "-nb",      FALSE, etENUM, {&nbpu_opt},
          "Calculate non-bonded interactions on" },
        { "-nstlist", FALSE, etINT, {&nstlist},
          "Set nstlist when using a Verlet buffer tolerance (0 is guess)" },
        { "-tunepme", FALSE, etBOOL, {&bTunePME},
          "Optimize PME load between PP/PME ranks or GPU/CPU" },
        { "-v",       FALSE, etBOOL, {&bVerbose},
          "Be loud and noisy" },
        { "-pforce",  FALSE, etREAL, {&pforce},
          "Print all forces larger than this (kJ/mol nm)" },
        { "-reprod",  FALSE, etBOOL, {&bReproducible},
          "Try to avoid optimizations that affect binary reproducibility" },
        { "-cpt",     FALSE, etREAL, {&cpt_period},
          "Checkpoint interval (minutes)" },
        { "-cpnum",   FALSE, etBOOL, {&bKeepAndNumCPT},
          "Keep and number checkpoint files" },
        { "-append",  FALSE, etBOOL, {&bTryToAppendFiles},
          "Append to previous output files when continuing from checkpoint instead of adding the simulation part number to all file names" },
        { "-nsteps",  FALSE, etINT64, {&nsteps},
          "Run this number of steps, overrides .mdp file option (-1 means infinite, -2 means use mdp option, smaller is invalid)" },
        { "-maxh",   FALSE, etREAL, {&max_hours},
          "Terminate after 0.99 times this time (hours)" },
        { "-multi",   FALSE, etINT, {&nmultisim},
          "Do multiple simulations in parallel" },
        { "-replex",  FALSE, etINT, {&repl_ex_nst},
          "Attempt replica exchange periodically with this period (steps)" },
        { "-nex",  FALSE, etINT, {&repl_ex_nex},
          "Number of random exchanges to carry out each exchange interval (N^3 is one suggestion).  -nex zero or not specified gives neighbor replica exchange." },
        { "-reseed",  FALSE, etINT, {&repl_ex_seed},
          "Seed for replica exchange, -1 is generate a seed" },
        { "-imdport",    FALSE, etINT, {&imdport},
          "HIDDENIMD listening port" },
        { "-imdwait",  FALSE, etBOOL, {&bIMDwait},
          "HIDDENPause the simulation while no IMD client is connected" },
        { "-imdterm",  FALSE, etBOOL, {&bIMDterm},
          "HIDDENAllow termination of the simulation from IMD client" },
        { "-imdpull",  FALSE, etBOOL, {&bIMDpull},
          "HIDDENAllow pulling in the simulation from IMD client" },
        { "-rerunvsite", FALSE, etBOOL, {&bRerunVSite},
          "HIDDENRecalculate virtual site coordinates with [TT]-rerun[tt]" },
        { "-confout", FALSE, etBOOL, {&bConfout},
          "HIDDENWrite the last configuration with [TT]-c[tt] and force checkpointing at the last step" },
        { "-stepout", FALSE, etINT, {&nstepout},
          "HIDDENFrequency of writing the remaining wall clock time for the run" },
        { "-resetstep", FALSE, etINT, {&resetstep},
          "HIDDENReset cycle counters after these many time steps" },
        { "-resethway", FALSE, etBOOL, {&bResetCountersHalfWay},
          "HIDDENReset the cycle counters after half the number of steps or halfway [TT]-maxh[tt]" }
    };
    unsigned long   Flags;
    ivec            ddxyz;
    int             dd_rank_order;
    gmx_bool        bDoAppendFiles, bStartFromCpt;
    FILE           *fplog;
    int             rc;
    char          **multidir = NULL;

    cr = init_commrec();

    unsigned long PCA_Flags = PCA_CAN_SET_DEFFNM;
    // With -multi or -multidir, the file names are going to get processed
    // further (or the working directory changed), so we can't check for their
    // existence during parsing.  It isn't useful to do any completion based on
    // file system contents, either.
    if (is_multisim_option_set(argc, argv))
    {
        PCA_Flags |= PCA_DISABLE_INPUT_FILE_CHECKING;
    }

    /* Comment this in to do fexist calls only on master
     * does not work with rerun or tables at the moment
     * also comment out the version of init_forcerec in md.c
     * with NULL instead of opt2fn
     */
    /*
       if (!MASTER(cr))
       {
       PCA_Flags |= PCA_NOT_READ_NODE;
       }
     */

    if (!parse_common_args(&argc, argv, PCA_Flags, NFILE, fnm, asize(pa), pa,
                           asize(desc), desc, 0, NULL, &oenv))
    {
        sfree(cr);
        return 0;
    }


    dd_rank_order = nenum(ddrank_opt);

    hw_opt.thread_affinity = nenum(thread_aff_opt);

    /* now check the -multi and -multidir option */
    if (opt2bSet("-multidir", NFILE, fnm))
    {
        if (nmultisim > 0)
        {
            gmx_fatal(FARGS, "mdrun -multi and -multidir options are mutually exclusive.");
        }
        nmultisim = opt2fns(&multidir, "-multidir", NFILE, fnm);
    }


    if (repl_ex_nst != 0 && nmultisim < 2)
    {
        gmx_fatal(FARGS, "Need at least two replicas for replica exchange (option -multi)");
    }

    if (repl_ex_nex < 0)
    {
        gmx_fatal(FARGS, "Replica exchange number of exchanges needs to be positive");
    }

    if (nmultisim >= 1)
    {
#if !GMX_THREAD_MPI
        init_multisystem(cr, nmultisim, multidir, NFILE, fnm);
#else
        gmx_fatal(FARGS, "mdrun -multi or -multidir are not supported with the thread-MPI library. "
                  "Please compile GROMACS with a proper external MPI library.");
#endif
    }

    if (!opt2bSet("-cpi", NFILE, fnm))
    {
        // If we are not starting from a checkpoint we never allow files to be appended
        // to, since that has caused a ton of strange behaviour and bugs in the past.
        if (opt2parg_bSet("-append", asize(pa), pa))
        {
            // If the user explicitly used the -append option, explain that it is not possible.
            gmx_fatal(FARGS, "GROMACS can only append to files when restarting from a checkpoint.");
        }
        else
        {
            // If the user did not say anything explicit, just disable appending.
            bTryToAppendFiles = FALSE;
        }
    }

    handleRestart(cr, bTryToAppendFiles, NFILE, fnm, &bDoAppendFiles, &bStartFromCpt);

    Flags = opt2bSet("-rerun", NFILE, fnm) ? MD_RERUN : 0;
    Flags = Flags | (bDDBondCheck  ? MD_DDBONDCHECK  : 0);
    Flags = Flags | (bDDBondComm   ? MD_DDBONDCOMM   : 0);
    Flags = Flags | (bTunePME      ? MD_TUNEPME      : 0);
    Flags = Flags | (bConfout      ? MD_CONFOUT      : 0);
    Flags = Flags | (bRerunVSite   ? MD_RERUN_VSITE  : 0);
    Flags = Flags | (bReproducible ? MD_REPRODUCIBLE : 0);
    Flags = Flags | (bDoAppendFiles  ? MD_APPENDFILES  : 0);
    Flags = Flags | (opt2parg_bSet("-append", asize(pa), pa) ? MD_APPENDFILESSET : 0);
    Flags = Flags | (bKeepAndNumCPT ? MD_KEEPANDNUMCPT : 0);
    Flags = Flags | (bStartFromCpt ? MD_STARTFROMCPT : 0);
    Flags = Flags | (bResetCountersHalfWay ? MD_RESETCOUNTERSHALFWAY : 0);
    Flags = Flags | (opt2parg_bSet("-ntomp", asize(pa), pa) ? MD_NTOMPSET : 0);
    Flags = Flags | (bIMDwait      ? MD_IMDWAIT      : 0);
    Flags = Flags | (bIMDterm      ? MD_IMDTERM      : 0);
    Flags = Flags | (bIMDpull      ? MD_IMDPULL      : 0);

    /* We postpone opening the log file if we are appending, so we can
       first truncate the old log file and append to the correct position
       there instead.  */
    if (MASTER(cr) && !bDoAppendFiles)
    {
        gmx_log_open(ftp2fn(efLOG, NFILE, fnm), cr,
                     Flags & MD_APPENDFILES, &fplog);
    }
    else
    {
        fplog = NULL;
    }

    ddxyz[XX] = (int)(realddxyz[XX] + 0.5);
    ddxyz[YY] = (int)(realddxyz[YY] + 0.5);
    ddxyz[ZZ] = (int)(realddxyz[ZZ] + 0.5);

    rc = gmx::mdrunner(&hw_opt, fplog, cr, NFILE, fnm, oenv, bVerbose,
                       nstglobalcomm, ddxyz, dd_rank_order, npme, rdd, rconstr,
                       dddlb_opt[0], dlb_scale, ddcsx, ddcsy, ddcsz,
                       nbpu_opt[0], nstlist,
                       nsteps, nstepout, resetstep,
                       nmultisim, repl_ex_nst, repl_ex_nex, repl_ex_seed,
                       pforce, cpt_period, max_hours, imdport, Flags);

    /* Log file has to be closed in mdrunner if we are appending to it
       (fplog not set here) */
    if (MASTER(cr) && !bDoAppendFiles)
    {
        gmx_log_close(fplog);
    }

    return rc;
}
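
The help text above describes three [TT]-cpi[tt] continuation scenarios: no matching files (write new output), all files present with matching checksums (append), anything else (fatal error). The sketch below condenses that decision into one hypothetical function; the real logic lives in handleRestart() and read_checkpoint_simulation_part(), and the counters here are invented for illustration.

#include <stdio.h>

enum restart_action { WRITE_NEW, APPEND, FATAL_ERROR };

/* nExpected: output files named in the checkpoint; nPresent: how many of
 * them exist on disk; nMatching: how many also match the stored checksums. */
static enum restart_action decide_restart(int nExpected, int nPresent, int nMatching)
{
    if (nPresent == 0)
    {
        return WRITE_NEW;   /* no files with matching names: start fresh */
    }
    if (nPresent == nExpected && nMatching == nExpected)
    {
        return APPEND;      /* everything present and verified: append */
    }
    return FATAL_ERROR;     /* partial or mismatching set: touch nothing */
}

int main(void)
{
    printf("%d\n", decide_restart(4, 0, 0));  /* WRITE_NEW   */
    printf("%d\n", decide_restart(4, 4, 4));  /* APPEND      */
    printf("%d\n", decide_restart(4, 2, 2));  /* FATAL_ERROR */
    return 0;
}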
Example No. 6
int mdrunner(gmx_hw_opt_t *hw_opt,
             FILE *fplog, t_commrec *cr, int nfile,
             const t_filenm fnm[], const output_env_t oenv, gmx_bool bVerbose,
             gmx_bool bCompact, int nstglobalcomm,
             ivec ddxyz, int dd_node_order, real rdd, real rconstr,
             const char *dddlb_opt, real dlb_scale,
             const char *ddcsx, const char *ddcsy, const char *ddcsz,
             const char *nbpu_opt, int nstlist_cmdline,
             gmx_int64_t nsteps_cmdline, int nstepout, int resetstep,
             int gmx_unused nmultisim, int repl_ex_nst, int repl_ex_nex,
             int repl_ex_seed, real pforce, real cpt_period, real max_hours,
             int imdport, unsigned long Flags)
{
    gmx_bool                  bForceUseGPU, bTryUseGPU, bRerunMD;
    t_inputrec               *inputrec;
    t_state                  *state = NULL;
    matrix                    box;
    gmx_ddbox_t               ddbox = {0};
    int                       npme_major, npme_minor;
    t_nrnb                   *nrnb;
    gmx_mtop_t               *mtop          = NULL;
    t_mdatoms                *mdatoms       = NULL;
    t_forcerec               *fr            = NULL;
    t_fcdata                 *fcd           = NULL;
    real                      ewaldcoeff_q  = 0;
    real                      ewaldcoeff_lj = 0;
    struct gmx_pme_t        **pmedata       = NULL;
    gmx_vsite_t              *vsite         = NULL;
    gmx_constr_t              constr;
    int                       nChargePerturbed = -1, nTypePerturbed = 0, status;
    gmx_wallcycle_t           wcycle;
    gmx_bool                  bReadEkin;
    gmx_walltime_accounting_t walltime_accounting = NULL;
    int                       rc;
    gmx_int64_t               reset_counters;
    gmx_edsam_t               ed           = NULL;
    int                       nthreads_pme = 1;
    int                       nthreads_pp  = 1;
    gmx_membed_t              membed       = NULL;
    gmx_hw_info_t            *hwinfo       = NULL;
    /* The master rank decides early on bUseGPU and broadcasts this later */
    gmx_bool                  bUseGPU      = FALSE;

    /* CAUTION: threads may be started later on in this function, so
       cr doesn't reflect the final parallel state right now */
    snew(inputrec, 1);
    snew(mtop, 1);

    if (Flags & MD_APPENDFILES)
    {
        fplog = NULL;
    }

    bRerunMD     = (Flags & MD_RERUN);
    bForceUseGPU = (strncmp(nbpu_opt, "gpu", 3) == 0);
    bTryUseGPU   = (strncmp(nbpu_opt, "auto", 4) == 0) || bForceUseGPU;
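    /* Note: strncmp with length 3 means both "gpu" and "gpu_cpu" force GPU
     * use (fatal later if the input is unsupported), while "auto" merely
     * tries the GPU and falls back to the CPU path when acceleration is
     * not available. */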

    /* Detect hardware, gather information. This is an operation that is
     * global for this process (MPI rank). */
    hwinfo = gmx_detect_hardware(fplog, cr, bTryUseGPU);

    gmx_print_detected_hardware(fplog, cr, hwinfo);

    if (fplog != NULL)
    {
        /* Print references after all software/hardware printing */
        please_cite(fplog, "Abraham2015");
        please_cite(fplog, "Pall2015");
        please_cite(fplog, "Pronk2013");
        please_cite(fplog, "Hess2008b");
        please_cite(fplog, "Spoel2005a");
        please_cite(fplog, "Lindahl2001a");
        please_cite(fplog, "Berendsen95a");
    }

    snew(state, 1);
    if (SIMMASTER(cr))
    {
        /* Read (nearly) all data required for the simulation */
        read_tpx_state(ftp2fn(efTPR, nfile, fnm), inputrec, state, NULL, mtop);

        if (inputrec->cutoff_scheme == ecutsVERLET)
        {
            /* Here the master rank decides if all ranks will use GPUs */
            bUseGPU = (hwinfo->gpu_info.n_dev_compatible > 0 ||
                       getenv("GMX_EMULATE_GPU") != NULL);

            /* TODO add GPU kernels for this and replace this check by:
             * (bUseGPU && (ir->vdwtype == evdwPME &&
             *               ir->ljpme_combination_rule == eljpmeLB))
             * update the message text and the content of nbnxn_acceleration_supported.
             */
            if (bUseGPU &&
                !nbnxn_gpu_acceleration_supported(fplog, cr, inputrec, bRerunMD))
            {
                /* Fallback message printed by nbnxn_acceleration_supported */
                if (bForceUseGPU)
                {
                    gmx_fatal(FARGS, "GPU acceleration requested, but not supported with the given input settings");
                }
                bUseGPU = FALSE;
            }

            prepare_verlet_scheme(fplog, cr,
                                  inputrec, nstlist_cmdline, mtop, state->box,
                                  bUseGPU);
        }
        else
        {
            if (nstlist_cmdline > 0)
            {
                gmx_fatal(FARGS, "Can not set nstlist with the group cut-off scheme");
            }

            if (hwinfo->gpu_info.n_dev_compatible > 0)
            {
                md_print_warn(cr, fplog,
                              "NOTE: GPU(s) found, but the current simulation can not use GPUs\n"
                              "      To use a GPU, set the mdp option: cutoff-scheme = Verlet\n");
            }

            if (bForceUseGPU)
            {
                gmx_fatal(FARGS, "GPU requested, but can't be used without cutoff-scheme=Verlet");
            }

#ifdef GMX_TARGET_BGQ
            md_print_warn(cr, fplog,
                          "NOTE: There is no SIMD implementation of the group scheme kernels on\n"
                          "      BlueGene/Q. You will observe better performance from using the\n"
                          "      Verlet cut-off scheme.\n");
#endif
        }

        if (inputrec->eI == eiSD2)
        {
            md_print_warn(cr, fplog, "The stochastic dynamics integrator %s is deprecated, since\n"
                          "it is slower than integrator %s and is slightly less accurate\n"
                          "with constraints. Use the %s integrator.",
                          ei_names[inputrec->eI], ei_names[eiSD1], ei_names[eiSD1]);
        }
    }

    /* Check and update the hardware options for internal consistency */
    check_and_update_hw_opt_1(hw_opt, cr);

    /* Early check for externally set process affinity. */
    gmx_check_thread_affinity_set(fplog, cr,
                                  hw_opt, hwinfo->nthreads_hw_avail, FALSE);

#ifdef GMX_THREAD_MPI
    if (SIMMASTER(cr))
    {
        if (cr->npmenodes > 0 && hw_opt->nthreads_tmpi <= 0)
        {
            gmx_fatal(FARGS, "You need to explicitly specify the number of MPI threads (-ntmpi) when using separate PME ranks");
        }

        /* Since the master knows the cut-off scheme, update hw_opt for this.
         * This is done later for normal MPI and also once more with tMPI
         * for all tMPI ranks.
         */
        check_and_update_hw_opt_2(hw_opt, inputrec->cutoff_scheme);

        /* NOW the threads will be started: */
        hw_opt->nthreads_tmpi = get_nthreads_mpi(hwinfo,
                                                 hw_opt,
                                                 inputrec, mtop,
                                                 cr, fplog, bUseGPU);

        if (hw_opt->nthreads_tmpi > 1)
        {
            t_commrec *cr_old       = cr;
            /* now start the threads. */
            cr = mdrunner_start_threads(hw_opt, fplog, cr_old, nfile, fnm,
                                        oenv, bVerbose, bCompact, nstglobalcomm,
                                        ddxyz, dd_node_order, rdd, rconstr,
                                        dddlb_opt, dlb_scale, ddcsx, ddcsy, ddcsz,
                                        nbpu_opt, nstlist_cmdline,
                                        nsteps_cmdline, nstepout, resetstep, nmultisim,
                                        repl_ex_nst, repl_ex_nex, repl_ex_seed, pforce,
                                        cpt_period, max_hours,
                                        Flags);
            /* the main thread continues here with a new cr. We don't deallocate
               the old cr because other threads may still be reading it. */
            if (cr == NULL)
            {
                gmx_comm("Failed to spawn threads");
            }
        }
    }
#endif
    /* END OF CAUTION: cr is now reliable */

    /* g_membed initialisation.
     * Because init_membed modifies mtop, it is called before init_parallel
     * (in case we ever want to make it run in parallel). */
    if (opt2bSet("-membed", nfile, fnm))
    {
        if (MASTER(cr))
        {
            fprintf(stderr, "Initializing membed\n");
        }
        membed = init_membed(fplog, nfile, fnm, mtop, inputrec, state, cr, &cpt_period);
    }

    if (PAR(cr))
    {
        /* now broadcast everything to the non-master nodes/threads: */
        init_parallel(cr, inputrec, mtop);

        /* The master rank decided on the use of GPUs,
         * broadcast this information to all ranks.
         */
        gmx_bcast_sim(sizeof(bUseGPU), &bUseGPU, cr);
    }

    if (fplog != NULL)
    {
        pr_inputrec(fplog, 0, "Input Parameters", inputrec, FALSE);
        fprintf(fplog, "\n");
    }

    /* now make sure the state is initialized and propagated */
    set_state_entries(state, inputrec);

    /* A parallel command line option consistency check that we can
       only do after any threads have started. */
    if (!PAR(cr) &&
        (ddxyz[XX] > 1 || ddxyz[YY] > 1 || ddxyz[ZZ] > 1 || cr->npmenodes > 0))
    {
        gmx_fatal(FARGS,
                  "The -dd or -npme option request a parallel simulation, "
#ifndef GMX_MPI
                  "but %s was compiled without threads or MPI enabled"
#else
#ifdef GMX_THREAD_MPI
                  "but the number of threads (option -nt) is 1"
#else
                  "but %s was not started through mpirun/mpiexec or only one rank was requested through mpirun/mpiexec"
#endif
#endif
                  , output_env_get_program_display_name(oenv)
                  );
    }

    if (bRerunMD &&
        (EI_ENERGY_MINIMIZATION(inputrec->eI) || eiNM == inputrec->eI))
    {
        gmx_fatal(FARGS, "The .mdp file specified an energy mininization or normal mode algorithm, and these are not compatible with mdrun -rerun");
    }

    if (can_use_allvsall(inputrec, TRUE, cr, fplog) && DOMAINDECOMP(cr))
    {
        gmx_fatal(FARGS, "All-vs-all loops do not work with domain decomposition, use a single MPI rank");
    }

    if (!(EEL_PME(inputrec->coulombtype) || EVDW_PME(inputrec->vdwtype)))
    {
        if (cr->npmenodes > 0)
        {
            gmx_fatal_collective(FARGS, cr, NULL,
                                 "PME-only ranks are requested, but the system does not use PME for electrostatics or LJ");
        }

        cr->npmenodes = 0;
    }

    if (bUseGPU && cr->npmenodes < 0)
    {
        /* With GPUs we don't automatically use PME-only ranks. PME ranks can
         * improve performance with many threads per GPU, since our OpenMP
         * scaling is bad, but it's difficult to automate the setup.
         */
        cr->npmenodes = 0;
    }

#ifdef GMX_FAHCORE
    if (MASTER(cr))
    {
        fcRegisterSteps(inputrec->nsteps, inputrec->init_step);
    }
#endif

    /* NMR restraints must be initialized before load_checkpoint,
     * since with time averaging the history is added to t_state.
     * For a proper consistency check we therefore need to extend
     * t_state here.
     * As a consequence, the PME-only nodes (if present) will also
     * initialize the distance restraints.
     */
    snew(fcd, 1);

    /* This needs to be called before read_checkpoint to extend the state */
    init_disres(fplog, mtop, inputrec, cr, fcd, state, repl_ex_nst > 0);

    init_orires(fplog, mtop, state->x, inputrec, cr, &(fcd->orires),
                state);

    if (DEFORM(*inputrec))
    {
        /* Store the deform reference box before reading the checkpoint */
        if (SIMMASTER(cr))
        {
            copy_mat(state->box, box);
        }
        if (PAR(cr))
        {
            gmx_bcast(sizeof(box), box, cr);
        }
        /* Because we do not have the update struct available yet
         * in which the reference values should be stored,
         * we store them temporarily in static variables.
         * This should be thread safe, since they are only written once
         * and with identical values.
         */
        tMPI_Thread_mutex_lock(&deform_init_box_mutex);
        deform_init_init_step_tpx = inputrec->init_step;
        copy_mat(box, deform_init_box_tpx);
        tMPI_Thread_mutex_unlock(&deform_init_box_mutex);
    }

    if (opt2bSet("-cpi", nfile, fnm))
    {
        /* Check if checkpoint file exists before doing continuation.
         * This way we can use identical input options for the first and subsequent runs...
         */
        if (gmx_fexist_master(opt2fn_master("-cpi", nfile, fnm, cr), cr) )
        {
            load_checkpoint(opt2fn_master("-cpi", nfile, fnm, cr), &fplog,
                            cr, ddxyz,
                            inputrec, state, &bReadEkin,
                            (Flags & MD_APPENDFILES),
                            (Flags & MD_APPENDFILESSET));

            if (bReadEkin)
            {
                Flags |= MD_READ_EKIN;
            }
        }
    }

    if (MASTER(cr) && (Flags & MD_APPENDFILES))
    {
        gmx_log_open(ftp2fn(efLOG, nfile, fnm), cr,
                     Flags, &fplog);
    }

    /* override nsteps with value from cmdline */
    override_nsteps_cmdline(fplog, nsteps_cmdline, inputrec, cr);

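    /* Copy the box from the master's state and broadcast it, so that
     * all ranks have the current box for the setup steps below.
     */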
    if (SIMMASTER(cr))
    {
        copy_mat(state->box, box);
    }

    if (PAR(cr))
    {
        gmx_bcast(sizeof(box), box, cr);
    }

    /* Essential dynamics */
    if (opt2bSet("-ei", nfile, fnm))
    {
        /* Open input and output files, allocate space for ED data structure */
        ed = ed_open(mtop->natoms, &state->edsamstate, nfile, fnm, Flags, oenv, cr);
    }

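    /* Set up domain decomposition; test-particle insertion and
     * normal-mode analysis run without spatial decomposition.
     */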
    if (PAR(cr) && !(EI_TPI(inputrec->eI) ||
                     inputrec->eI == eiNM))
    {
        cr->dd = init_domain_decomposition(fplog, cr, Flags, ddxyz, rdd, rconstr,
                                           dddlb_opt, dlb_scale,
                                           ddcsx, ddcsy, ddcsz,
                                           mtop, inputrec,
                                           box, state->x,
                                           &ddbox, &npme_major, &npme_minor);

        make_dd_communicators(fplog, cr, dd_node_order);

        /* Set overallocation to avoid frequent reallocation of arrays */
        set_over_alloc_dd(TRUE);
    }
    else
    {
        /* PME, if used, is done on all nodes with 1D decomposition */
        cr->npmenodes = 0;
        cr->duty      = (DUTY_PP | DUTY_PME);
        npme_major    = 1;
        npme_minor    = 1;

        if (inputrec->ePBC == epbcSCREW)
        {
            gmx_fatal(FARGS,
                      "pbc=%s is only implemented with domain decomposition",
                      epbc_names[inputrec->ePBC]);
        }
    }

    if (PAR(cr))
    {
        /* After the possible communicator splitting in make_dd_communicators,
         * we can set up the intra/inter-node communication.
         */
        gmx_setup_nodecomm(fplog, cr);
    }

    /* Initialize per-physical-node MPI process/thread ID and counters. */
    gmx_init_intranode_counters(cr);
#ifdef GMX_MPI
    if (MULTISIM(cr))
    {
        md_print_info(cr, fplog,
                      "This is simulation %d out of %d running as a composite GROMACS\n"
                      "multi-simulation job. Setup for this simulation:\n\n",
                      cr->ms->sim, cr->ms->nsim);
    }
    md_print_info(cr, fplog, "Using %d MPI %s\n",
                  cr->nnodes,
#ifdef GMX_THREAD_MPI
                  cr->nnodes == 1 ? "thread" : "threads"
#else
                  cr->nnodes == 1 ? "process" : "processes"
#endif
                  );
    fflush(stderr);
#endif

    /* Check and update hw_opt for the cut-off scheme */
    check_and_update_hw_opt_2(hw_opt, inputrec->cutoff_scheme);

    /* Check and update hw_opt for the number of MPI ranks */
    check_and_update_hw_opt_3(hw_opt);

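    /* Determine and set the OpenMP thread counts for the PP and PME
     * work on this rank.
     */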
    gmx_omp_nthreads_init(fplog, cr,
                          hwinfo->nthreads_hw_avail,
                          hw_opt->nthreads_omp,
                          hw_opt->nthreads_omp_pme,
                          (cr->duty & DUTY_PP) == 0,
                          inputrec->cutoff_scheme == ecutsVERLET);

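    /* In debug builds, trap floating-point exceptions with the Verlet
     * scheme; TPI is excluded, presumably because it generates extreme
     * energies by design.
     */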
#ifndef NDEBUG
    if (integrator[inputrec->eI].func != do_tpi &&
        inputrec->cutoff_scheme == ecutsVERLET)
    {
        gmx_feenableexcept();
    }
#endif

    if (bUseGPU)
    {
        /* Select GPU id's to use */
        gmx_select_gpu_ids(fplog, cr, &hwinfo->gpu_info, bForceUseGPU,
                           &hw_opt->gpu_opt);
    }
    else
    {
        /* Ignore (potentially) manually selected GPUs */
        hw_opt->gpu_opt.n_dev_use = 0;
    }

    /* check consistency across ranks of things like SIMD
     * support and number of GPUs selected */
    gmx_check_hw_runconf_consistency(fplog, hwinfo, cr, hw_opt, bUseGPU);

    /* Now that we know the setup is consistent, check for efficiency */
    check_resource_division_efficiency(hwinfo, hw_opt, Flags & MD_NTOMPSET,
                                       cr, fplog);

    if (DOMAINDECOMP(cr))
    {
        /* When we share GPUs over ranks, we need to know this for the DLB */
        dd_setup_dlb_resource_sharing(cr, hwinfo, hw_opt);
    }

    /* Get the number of PP and PME OpenMP threads.
     * The PME environment variable is read on only one node to make sure
     * it is identical everywhere.
     */
    /* TODO nthreads_pp is only used for pinning threads.
     * This is a temporary solution until we have a hardware topology library.
     */
    nthreads_pp  = gmx_omp_nthreads_get(emntNonbonded);
    nthreads_pme = gmx_omp_nthreads_get(emntPME);

    wcycle = wallcycle_init(fplog, resetstep, cr, nthreads_pp, nthreads_pme);

    if (PAR(cr))
    {
        /* The master synchronizes its value of reset_counters with all nodes,
         * including PME-only nodes. */
        reset_counters = wcycle_get_reset_counters(wcycle);
        gmx_bcast_sim(sizeof(reset_counters), &reset_counters, cr);
        wcycle_set_reset_counters(wcycle, reset_counters);
    }

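    /* nrnb collects the flop accounting used in the performance report. */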
    snew(nrnb, 1);
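    /* Ranks with particle-particle (PP) duty set up the full force
     * calculation; dedicated PME ranks take the else branch below.
     */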
    if (cr->duty & DUTY_PP)
    {
        bcast_state(cr, state);

        /* Initialize the forcerec */
        fr          = mk_forcerec();
        fr->hwinfo  = hwinfo;
        fr->gpu_opt = &hw_opt->gpu_opt;
        init_forcerec(fplog, oenv, fr, fcd, inputrec, mtop, cr, box,
                      opt2fn("-table", nfile, fnm),
                      opt2fn("-tabletf", nfile, fnm),
                      opt2fn("-tablep", nfile, fnm),
                      opt2fn("-tableb", nfile, fnm),
                      nbpu_opt,
                      FALSE,
                      pforce);

        /* version for PCA_NOT_READ_NODE (see md.c) */
        /*init_forcerec(fplog,fr,fcd,inputrec,mtop,cr,box,FALSE,
           "nofile","nofile","nofile","nofile",FALSE,pforce);
         */

        /* Initialize QM-MM */
        if (fr->bQMMM)
        {
            init_QMMMrec(cr, mtop, inputrec, fr);
        }

        /* Initialize the mdatoms structure.
         * mdatoms is not filled with atom data here,
         * as that cannot be done yet with domain decomposition.
         */
        mdatoms = init_mdatoms(fplog, mtop, inputrec->efep != efepNO);

        /* Initialize the virtual site communication */
        vsite = init_vsite(mtop, cr, FALSE);

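        /* Precompute the periodic-image shift vectors for the current box. */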
        calc_shifts(box, fr->shift_vec);

        /* With periodic molecules the charge groups should be whole at start-up
         * and the virtual sites should not be far from their proper positions.
         */
        if (!inputrec->bContinuation && MASTER(cr) &&
            !(inputrec->ePBC != epbcNONE && inputrec->bPeriodicMols))
        {
            /* Make molecules whole at start of run */
            if (fr->ePBC != epbcNONE)
            {
                do_pbc_first_mtop(fplog, inputrec->ePBC, box, mtop, state->x);
            }
            if (vsite)
            {
                /* Correct initial vsite positions are required
                 * for the initial distribution in the domain decomposition
                 * and for the initial shell prediction.
                 */
                construct_vsites_mtop(vsite, mtop, state->x);
            }
        }

        if (EEL_PME(fr->eeltype) || EVDW_PME(fr->vdwtype))
        {
            ewaldcoeff_q  = fr->ewaldcoeff_q;
            ewaldcoeff_lj = fr->ewaldcoeff_lj;
            pmedata       = &fr->pmedata;
        }
        else
        {
            pmedata = NULL;
        }
    }
    else
    {
        /* This is a PME-only node */

        /* We don't need the state */
        done_state(state);

        ewaldcoeff_q  = calc_ewaldcoeff_q(inputrec->rcoulomb, inputrec->ewald_rtol);
        ewaldcoeff_lj = calc_ewaldcoeff_lj(inputrec->rvdw, inputrec->ewald_rtol_lj);
        snew(pmedata, 1);
    }

    if (hw_opt->thread_affinity != threadaffOFF)
    {
        /* Before setting affinity, check whether the affinity has changed
         * since we first checked, which would indicate that the OpenMP
         * library has probably changed it.
         */
        gmx_check_thread_affinity_set(fplog, cr,
                                      hw_opt, hwinfo->nthreads_hw_avail, TRUE);

        /* Set the CPU affinity */
        gmx_set_thread_affinity(fplog, cr, hw_opt, hwinfo);
    }

    /* Initialize PME if necessary,
     * either on all nodes or on dedicated PME nodes only. */
    if (EEL_PME(inputrec->coulombtype) || EVDW_PME(inputrec->vdwtype))
    {
        if (mdatoms)
        {
            nChargePerturbed = mdatoms->nChargePerturbed;
            if (EVDW_PME(inputrec->vdwtype))
            {
                nTypePerturbed   = mdatoms->nTypePerturbed;
            }
        }
        if (cr->npmenodes > 0)
        {
            /* The PME-only nodes need to know nChargePerturbed (FEP on Q) and nTypePerturbed (FEP on LJ) */
            gmx_bcast_sim(sizeof(nChargePerturbed), &nChargePerturbed, cr);
            gmx_bcast_sim(sizeof(nTypePerturbed), &nTypePerturbed, cr);
        }

        if (cr->duty & DUTY_PME)
        {
            status = gmx_pme_init(pmedata, cr, npme_major, npme_minor, inputrec,
                                  mtop ? mtop->natoms : 0, nChargePerturbed, nTypePerturbed,
                                  (Flags & MD_REPRODUCIBLE), nthreads_pme);
            if (status != 0)
            {
                gmx_fatal(FARGS, "Error %d initializing PME", status);
            }
        }
    }


    if (integrator[inputrec->eI].func == do_md)
    {
        /* Turn on signal handling on all nodes.
         * A user signal from the PME nodes (if any)
         * is communicated to the PP nodes.
         */
        signal_handler_install();
    }

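    /* PP ranks run the requested integrator; dedicated PME ranks
     * instead service PME requests in gmx_pmeonly() below.
     */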
    if (cr->duty & DUTY_PP)
    {
        /* Assumes uniform use of the number of OpenMP threads */
        walltime_accounting = walltime_accounting_init(gmx_omp_nthreads_get(emntDefault));

        if (inputrec->bPull)
        {
            /* Initialize pull code */
            inputrec->pull_work =
                init_pull(fplog, inputrec->pull, inputrec, nfile, fnm,
                          mtop, cr, oenv, inputrec->fepvals->init_lambda,
                          EI_DYNAMICS(inputrec->eI) && MASTER(cr), Flags);
        }

        if (inputrec->bRot)
        {
            /* Initialize enforced rotation code */
            init_rot(fplog, inputrec, nfile, fnm, cr, state->x, box, mtop, oenv,
                     bVerbose, Flags);
        }

        if (inputrec->eSwapCoords != eswapNO)
        {
            /* Initialize ion swapping code */
            init_swapcoords(fplog, bVerbose, inputrec, opt2fn_master("-swap", nfile, fnm, cr),
                            mtop, state->x, state->box, &state->swapstate, cr, oenv, Flags);
        }

        constr = init_constraints(fplog, mtop, inputrec, ed, state, cr);

        if (DOMAINDECOMP(cr))
        {
            GMX_RELEASE_ASSERT(fr, "fr was NULL while cr->duty was DUTY_PP");
            dd_init_bondeds(fplog, cr->dd, mtop, vsite, inputrec,
                            Flags & MD_DDBONDCHECK, fr->cginfo_mb);

            set_dd_parameters(fplog, cr->dd, dlb_scale, inputrec, &ddbox);

            setup_dd_grid(fplog, cr->dd);
        }

        /* Now do whatever the user wants us to do (how flexible...) */
        integrator[inputrec->eI].func(fplog, cr, nfile, fnm,
                                      oenv, bVerbose, bCompact,
                                      nstglobalcomm,
                                      vsite, constr,
                                      nstepout, inputrec, mtop,
                                      fcd, state,
                                      mdatoms, nrnb, wcycle, ed, fr,
                                      repl_ex_nst, repl_ex_nex, repl_ex_seed,
                                      membed,
                                      cpt_period, max_hours,
                                      imdport,
                                      Flags,
                                      walltime_accounting);

        if (inputrec->bPull)
        {
            finish_pull(inputrec->pull_work);
        }

        if (inputrec->bRot)
        {
            finish_rot(inputrec->rot);
        }

    }
    else
    {
        GMX_RELEASE_ASSERT(pmedata, "pmedata was NULL while cr->duty was not DUTY_PP");
        /* do PME only */
        walltime_accounting = walltime_accounting_init(gmx_omp_nthreads_get(emntPME));
        gmx_pmeonly(*pmedata, cr, nrnb, wcycle, walltime_accounting, ewaldcoeff_q, ewaldcoeff_lj, inputrec);
    }

    wallcycle_stop(wcycle, ewcRUN);

    /* Finish up and write final statistics;
     * with rerunMD, don't write the last frame again.
     */
    finish_run(fplog, cr,
               inputrec, nrnb, wcycle, walltime_accounting,
               fr ? fr->nbv : NULL,
               EI_DYNAMICS(inputrec->eI) && !MULTISIM(cr));


    /* Free GPU memory and context */
    free_gpu_resources(fr, cr, &hwinfo->gpu_info, fr ? fr->gpu_opt : NULL);

    if (opt2bSet("-membed", nfile, fnm))
    {
        sfree(membed);
    }

    gmx_hardware_info_free(hwinfo);

    /* Does what it says */
    print_date_and_time(fplog, cr->nodeid, "Finished mdrun", gmx_gettime());
    walltime_accounting_destroy(walltime_accounting);

    /* PLUMED */
    if (plumedswitch)
    {
        plumed_finalize(plumedmain);
    }
    /* END PLUMED */

    /* Close logfile already here if we were appending to it */
    if (MASTER(cr) && (Flags & MD_APPENDFILES))
    {
        gmx_log_close(fplog);
    }

    rc = (int)gmx_get_stop_condition();

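    /* Clean up essential-dynamics data, if any was allocated. */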
    done_ed(&ed);

#ifdef GMX_THREAD_MPI
    /* we need to join all threads. The sub-threads join when they
       exit this function, but the master thread needs to be told to
       wait for that. */
    if (PAR(cr) && MASTER(cr))
    {
        tMPI_Finalize();
    }
#endif

    return rc;
}
Example #7
int main(int argc,char *argv[])
{
  static char *desc[] = {
    "The ffscan program performs a single point energy and force calculation",
    "in which the force field is modified. This way a range of parameters can",
    "be changed and tested for reproduction of e.g. quantum chemical or",
    "experimental data. A grid scan over the parameters is done as specified",
    "using command line arguments. All parameters that reproduce the energy",
    "within a given absolute tolerance are printed to a log file.[PAR]",
    "Obviously polarizable models can be used, and shell optimisation is",
    "performed if necessary. Also, like in [TT]mdrun[tt] table functions can be used",
    "for user defined potential functions.[PAR]",
    "If the option -ga with appropriate file is passed, a genetic algorithm will",
    "be used rather than a grid scan."
  };
  t_commrec    *cr;
  static t_filenm fnm[] = {
    { efTPX, NULL,      NULL,       ffREAD  },
    { efLOG, "-g",      "md",       ffWRITE },
    { efXVG, "-table",  "table",    ffOPTRD },
    { efDAT, "-parm",   "params",   ffREAD  },
    { efDAT, "-ga",     "genalg",   ffOPTRD },
    { efGRO, "-c",      "junk",     ffWRITE },
    { efEDR, "-e",      "junk",     ffWRITE },
    { efTRN, "-o",      "junk",     ffWRITE }
  };
#define NFILE asize(fnm)

  /* Command line options */
  static t_ffscan ff = {
    /* tol      */   0.1,
    /* f_max    */ 100.0,
    /* npow     */  12.0,
    /* epot     */   0.0,
    /* fac_epot */   1.0,
    /* fac_pres */   0.1,
    /* fac_msf  */   0.1,
    /* pres     */   1.0,
    /* molsize  */   1,
    /* nmol     */   1,
    /* bComb    */   TRUE,
    /* bVerbose */   FALSE,
    /* bLogEps  */   FALSE
  };
  static char *loadx=NULL,*loady=NULL,*loadz=NULL;
  static t_pargs pa[] = {
    { "-tol",   FALSE, etREAL, {&ff.tol},   "Energy tolerance (kJ/mol) (zero means everything is printed)" },
    { "-fmax",  FALSE, etREAL, {&ff.f_max},  "Force tolerance (zero means everything is printed)" },
    { "-comb",  FALSE, etBOOL, {&ff.bComb},    "Use combination rules" },
    { "-npow",  FALSE, etREAL, {&ff.npow},     "Power for LJ in case of table use" },
    { "-logeps",FALSE, etBOOL, {&ff.bLogEps},  "Use a logarithmic scale for epsilon" },
    { "-v",     FALSE, etBOOL, {&ff.bVerbose}, "Be loud and noisy" },
    { "-epot",  FALSE, etREAL, {&ff.epot},     "Target energy (kJ/mol)" },
    { "-fepot", FALSE, etREAL, {&ff.fac_epot}, "Factor for scaling energy violations (0 turns energy contribution off)" },
    { "-pres",  FALSE, etREAL, {&ff.pres},     "Value for reference pressure" },
    { "-fpres", FALSE, etREAL, {&ff.fac_pres}, "Factor for scaling pressure violations (0 turns pressure contribution off)" },
    { "-fmsf",  FALSE, etREAL, {&ff.fac_msf},  "Factor for scaling mean square force violations (0 turns MSF contribution off)" },
    { "-molsize",FALSE,etINT,  {&ff.molsize},  "Number of atoms per molecule" },
    { "-nmol",  FALSE, etINT,  {&ff.nmol},     "Number of molecules (Epot is divided by this value!)" }
  };
#define NPA asize(pa)
  unsigned  long Flags = 0;
  gmx_edsam_t ed=NULL;
  FILE      *fplog;

  ivec ddxyz = { 1,1,1 };

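  /* Initialize the communication record; with MPI this also sets up the
     parallel environment. */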
  cr = init_par(&argc,&argv);
  
  ff.bVerbose = ff.bVerbose && MASTER(cr);
#if 0
  snew(ed,1);
  ed->eEDtype=eEDnone;
#endif
  
  if (MASTER(cr))
    CopyRight(stderr,argv[0]);

  parse_common_args(&argc,argv,PCA_BE_NICE,NFILE,fnm,
		    NPA,pa,asize(desc),desc,0,NULL);

  if (ff.npow <= 6.0)
    gmx_fatal(FARGS,"Cannot have a repulsion exponent of 6 or smaller");
  if (ff.nmol < 1)
    gmx_fatal(FARGS,"Cannot fit %d molecules",ff.nmol);
    
  gmx_log_open(ftp2fn(efLOG,NFILE,fnm),cr,FALSE,0,&fplog);

  if (MASTER(cr)) {
    CopyRight(fplog,argv[0]);
    please_cite(fplog,"Lindahl2001a");
    please_cite(fplog,"Berendsen95a");
  }
  
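  /* Register the scan parameters with the force-field scanning module
     (presumably kept in its internal state). */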
  set_ffvars(&ff);
  
  Flags = (Flags | MD_FFSCAN);

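  /* Hand off to the generic mdrunner; the MD_FFSCAN flag set above makes
     it drive the repeated single-point evaluations of the scan. */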
  mdrunner(fplog,cr,NFILE,fnm,ff.bVerbose,FALSE,
	   ddxyz,0,0,0,loadx,loady,loadz,1,
	   0,0,Flags);
  if (gmx_parallel_env_initialized())
    gmx_finalize(cr);

  gmx_log_close(fplog);

  return 0;
}